14,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nearest Neighbors
sklearn.neighbors provides functionality for unsupervised and supervised neighbors-based learning methods. Supervised neighbors-based learning comes in two flavors
Step1: A First Application
Step2: Measuring Success
Step3: First things first
Step4: From the plots, we can see that the three classes seem to be relatively well separated using the sepal and petal measurements. This means that a machine learning model will likely be able to learn to separate them quite well.
Building your model
Step5: The knn object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data. In the case of KNeighborsClassifier, it will just store the training set.
To build the model on the training set, we call the fit method of the knn object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels.
Step6: Making predictions
We can now make predictions using this model on new data for which we might not know the correct labels. Imagine we found an iris in the wild with a sepal length of 5 cm, a sepal width of 2.9 cm, a petal length of 1 cm, and a petal width of 0.2 cm. What species of iris would this be? We can put this data into a NumPy array, which in this case will be of shape 1 x 4 (1 row/sample x 4 features).
Note
Step7: Evaluating the model
How do we know whether we can trust our model? This is where the test set that we created earlier comes in. This data was not used to build the model, but we do know what the correct species is for each iris in the test set.
Therefore, we can make a prediction for each iris in the test data and compare it against its label (the known species). We can measure how well the model works by computing the accuracy, which is the fraction of flowers for which the right species was predicted.
We can also use the score method of the knn object, which will compute the test set accuracy for us.
Step8: For this model, the test set accuracy is about 0.97, which means we made the right prediction for 97% of the irises in the test set. Under some mathematical assumptions, this means that we can expect our model to be correct about 97% of the time for new irises.
A more advanced model may be able to do a better job, but with an overlapping dataset like this, it is unlikely that we would ever be able to achieve 100% accuracy.
Summary
Here is a summary of the code needed for the whole training and evaluation procedure (just 4 lines!).
This snippet contains the core code for applying any machine learning algorithm using scikit-learn. The fit, predict, and score methods are the common interface to supervised models in scikit-learn.
Step11: Effect of k
Let's investigate the effect of varying k. The iris dataset isn't ideal for this purpose, so let's create a synthetic dataset that is better suited to it.
Step12: Here, we added three new data points, shown as stars. For each of them, we marked the closest point in the training set. The prediction of the one-nearest-neighbor algorithm is the label of that point (shown by the color of the cross).
Instead of considering only the closest neighbor, we can also consider an arbitrary number, k, of neighbors. This is where the name of the k-nearest neighbors algorithm comes from. When considering more than one neighbor, we use voting to assign a label. This means that for each test point, we count how many neighbors belong to class 0 and how many neighbors belong to class 1. We then assign the class that is more frequent
Step13: Again, the prediction is shown as the color of the cross. You can see that the prediction for the new data point at the top left is not the same as the prediction when we used only one neighbor.
While this illustration is for a binary classification problem, this method can be applied to datasets with any number of classes. For more classes, we count how many neighbors belong to each class and again predict the most common class.
Analyzing Decision Boundary as k varies
For two-dimensional datasets, we can also illustrate the prediction for all possible test points in the xy-plane. We color the plane according to the class that would be assigned to a point in this region. This lets us view the decision boundary, which is the divide between where the algorithm assigns class 0 versus where it assigns class 1. The following code produces the visualizations of the decision boundaries for one, three, and nine neighbors.
Step14: As you can see on the left in the figure, using a single neighbor results in a decision boundary that follows the training data closely. Considering more and more neighbors leads to a smoother decision boundary. A smoother boundary corresponds to a simpler model. In other words, using few neighbors corresponds to high model complexity, and using many neighbors corresponds to low model complexity. If you consider the extreme case where the number of neighbors is the number of all data points in the training set, each test point would have exactly the same neighbors (all training points) and all predictions would be the same: the class that is most frequent in the training set. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Nearest Neighbors
sklearn.neighbors provides functionality for unsupervised and supervised neighbors-based learning methods. Supervised neighbors-based learning comes in two flavors: classification for data with discrete labels, and regression for data with continuous labels.
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard Euclidean distance is the most common choice. Neighbors-based methods are known as non-generalizing machine learning methods, since they simply “remember” all of their training data (possibly transformed into a fast indexing structure such as a Ball Tree or KD Tree).
Despite its simplicity, nearest neighbors has been successful in a large number of classification and regression problems, including handwritten digits or satellite image scenes. Being a non-parametric method, it is often successful in classification situations where the decision boundary is very irregular.
Nearest Neighbors Classification
Neighbors-based classification is a type of instance-based learning or non-generalizing learning: it does not attempt to construct a general internal model, but simply stores instances of the training data. Classification is computed from a simple majority vote of the nearest neighbors of each point: a query point is assigned the data class which has the most representatives within the nearest neighbors of the point.
scikit-learn implements two different nearest neighbors classifiers: KNeighborsClassifier implements learning based on the k nearest neighbors of each query point, where k is an integer value specified by the user. RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
The k-neighbors classification in KNeighborsClassifier is the more commonly used of the two techniques. The optimal choice of the value k is highly data-dependent: in general a larger k suppresses the effects of noise, but makes the classification boundaries less distinct.
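As a quick sketch of how the two estimators are used (standard scikit-learn API; the dataset and parameter values here are arbitrary):
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier
X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)    # the k nearest training points vote
rnc = RadiusNeighborsClassifier(radius=1.0).fit(X, y)  # all training points within radius r vote
print(knn.predict(X[:3]), rnc.predict(X[:3]))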
Advantages of k-Nearest Neighbors
Easy to understand
Very fast to train
Building this model only consists of storing the training set
Often gives reasonable performance without a lot of adjustments
Disadvantages of k-Nearest Neighbors
When your training set is very large (either in number of features or in number of samples) prediction can be slow
Important to preprocess your data
A fundamental assumption of kNN is that all dimensions are "equal", so scales should be similar.
Does not perform well on datasets with many features
Curse of Dimensionality
Does particularly badly with datasets where most features are 0 most of the time (so-called sparse datasets)
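Because of the equal-scales assumption above, a common remedy is to standardize the features before fitting; a minimal sketch using scikit-learn's standard pipeline tools (X_train/y_train stand for any training split):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
# pipe.fit(X_train, y_train); pipe.score(X_test, y_test)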
Disclaimer: Some of the code in this notebook was lifted from the excellent book Introduction to Machine Learning with Python by Andreas Muller and Sarah Guido.
End of explanation
from sklearn.datasets import load_iris
iris_dataset = load_iris()
print("Keys of iris_dataset: {}".format(iris_dataset.keys()))
# The value of the key DESCR is a short description of the dataset. Here we show the beginning of the description.
print(iris_dataset['DESCR'][:193] + "\n...")
# The value of the key target_names is an array of strings, containing the species of flower that we want to predict
print("Target names: {}".format(iris_dataset['target_names']))
# The value of feature_names is a list of strings, giving the description of each feature
print("Feature names: {}".format(iris_dataset['feature_names']))
# The data itself is contained in the target and data fields.
# data contains the numeric measurements of sepal length, sepal width, petal length, and petal width in a NumPy array
print("Type of data: {}".format(type(iris_dataset['data'])))
# The rows in the data array correspond to flowers, while the columns represent the four measurements for each flower.
print("Shape of data: {}".format(iris_dataset['data'].shape))
# We see that the array contains measurements for 150 different flowers (samples). Here are values for the first 5.
print("First five columns of data:\n{}".format(iris_dataset['data'][:5]))
# The target array contains the species of each of the flowers that were measured, also as a NumPy array
print("Type of target: {}".format(type(iris_dataset['target'])))
# target is a one-dimensional array, with one entry per flower
print("Shape of target: {}".format(iris_dataset['target'].shape))
# The species are encoded as integers from 0 to 2. The meanings of the numbers are given by the target_names key.
print("Target:\n{}".format(iris_dataset['target']))
Explanation: A First Application: Classifying iris species
One of the most famous datasets for classification in a supervised learning setting is the Iris flower data set. It is a multivariate dataset introduced in a 1936 paper which records sepal length, sepal width, petal length, and petal width for three species of iris.
scikit-learn has a number of small toy datasets included with it which makes it quick and easy to experiment with different machine learning algorithms on these datasets.
The sklearn.datasets.load_iris() method can be used to load the iris dataset.
Meet the data
The iris object that is returned by load_iris is a Bunch object, which is very similar to a dictionary. It contains keys and values.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'], random_state=0)
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))
print("X_test shape: {}".format(X_test.shape))
print("y_test shape: {}".format(y_test.shape))
Explanation: Measuring Success: Training and testing data
We want to build a machine learning model from this data that can predict the species of iris for a new set of measurements. But before we can apply our model to new measurements, we need to know whether it actually works -- that is, whether we should trust its predictions.
Unfortunately, we cannot use the data we used to build the model to evaluate it. This is because our model can always simply remember the whole training set, and will therefore always predict the correct label for any point in the training set. This "remembering" does not indicate to us whether the model will generalize well (in other words, whether it will also perform well on new data).
To assess the model's performance, we show it new data (data that it hasn't seen before) for which we have labels. This is usually done by splitting the labeled data we have collected (here, our 150 flower measurements) into two parts. One part of the data is used to build our machine learning model, and is called the training data or training set. The rest of the data will be used to assess how well the model works; this is called the test data, test set, or hold-out set.
scikit-learn contains a function that shuffles the dataset and splits it for you: the train_test_split function. This function extracts 75% of the rows in the data as the training set, together with the corresponding labels for this data. The remaining 25% of the data, together with the remaining labels, is declared as the test set. Deciding how much data you want to put into the training and the test set respectively is somewhat arbitrary, but scikit-learn's default 75/25 split is a reasonable starting point.
In scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y. This is inspired by the standard formulation f(x)=y in mathematics, where x is the input to a function and y is the output. Following more conventions from mathematics, we use a capital X because the data is a two-dimensional array (a matrix) and a lowercase y because the target is a one-dimensional array (a vector).
Before making the split, the train_test_split function shuffles the dataset using a pseudorandom number generator. If we just took the last 25% of the data as a test set, all the data points would have the label 2, as the data points are sorted by the label.
To make sure this example code will always get the same output if run multiple times, we provide the pseudorandom number generator with a fixed seed using the random_state parameter.
The output of the train_test_split function is X_train, X_test, y_train, and y_test, which are all NumPy arrays. X_train contains 75% of the rows of the dataset, and X_test contains the remaining 25%.
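If a different split is wanted, train_test_split also accepts test_size (or train_size), and stratify keeps the class proportions the same in both parts; for example:
X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'],
    test_size=0.2, stratify=iris_dataset['target'], random_state=0)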
End of explanation
# create dataframe from data in X_train
# label the columns using the strings in iris_dataset.feature_names
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
# create a scatter matrix from the dataframe, color by y_train
grr = pd.scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o',
hist_kwds={'bins': 20}, s=60, alpha=.8)
Explanation: First things first: Look at your data
Before building a machine learning model, it is often a good idea to inspect the data, to see if the task is easily solvable without machine learning, or if the desired information might not be contained in the data.
Additionally, inspecting the data is a good way to find abnormalities and peculiarities. Maybe some of your irises were measured using inches and not centimeters, for example. In the real world, inconsistencies in the data and unexpected measurements are very common, as are missing data and not-a-number (NaN) or infinite values.
One of the best ways to inspect data is to visualize it. One way to do this is by using a scatter plot. A scatter plot of the data puts one feature along the x-axis and another along the y-axis, and draws a dot for each data point. Unfortunately, computer screens have only two dimensions, which allows us to plot only two (or maybe three) features at a time. It is difficult to plot datasets with more than three features this way. One way around this problem is to do a pair plot, which looks at all possible pairs of features. If you have a small number of features, such as the four we have here, this is quite reasonable. You should keep in mind, however, that a pair plot does not show the interaction of all of the features at once, so some interesting aspects of the data may not be revealed when visualizing it this way.
In Python, the pandas library has a convenient function called scatter_matrix for creating pair plots for a DataFrame.
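In newer pandas releases the function lives in the plotting namespace; an equivalent call (same arguments, shown here as a comment) would be:
# pd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o',
#                            hist_kwds={'bins': 20}, s=60, alpha=.8)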
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
Explanation: From the plots, we can see that the three classes seem to be relatively well separated using the sepal and petal measurements. This means that a machine learning model will likely be able to learn to separate them quite well.
Building your model: k-Nearest Neighbors
Now we can start building the actual machine learning model. There are many classification algorithms in scikit-learn that we could use. Here we will use a k-nearest neighbors classifier, which is easy to understand.
The k in k-nearest neighbors signifies that instead of using only the closest neighbor to the new data point, we can consider any fixed number k of neighbors in the training set (for example, the closest three or five neighbors). Then, we can make a prediction using the majority class among these neighbors. For starters, we'll use only a single neighbor.
All machine learning models in scikit-learn are implemented in their own classes, which are called Estimator classes. The k-nearest neighbors classification algorithm is implemented in the KNeighborsClassifier class in the neighbors module. Before we can use the model, we need to instantiate the class into an object. This is when we will set any parameters of the model. The most important parameter of KNeighborsClassifier is n_neighbors , the number of neighbors, which we will set to 1. The default value for n_neighbors is 5.
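A few other commonly tuned KNeighborsClassifier parameters, with example values only:
knn_weighted = KNeighborsClassifier(n_neighbors=5, weights='distance', metric='minkowski', p=2)
# weights='distance' makes closer neighbors count more ('uniform' is the default);
# metric='minkowski' with p=2 is ordinary Euclidean distance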
End of explanation
knn.fit(X_train, y_train)
Explanation: The knn object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data. In the case of KNeighborsClassifier, it will just store the training set.
To build the model on the training set, we call the fit method of the knn object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels.
End of explanation
X_new = np.array([[5, 2.9, 1, 0.2]])
print("X_new.shape: {}".format(X_new.shape))
prediction = knn.predict(X_new)
print("Prediction: {}".format(prediction))
print("Predicted target name: {}".format(iris_dataset['target_names'][prediction]))
Explanation: Making predictions
We can now make predictions using this model on new data for which we might not know the correct labels. Imagine we found an iris in the wild with a sepal length of 5 cm, a sepal width of 2.9 cm, a petal length of 1 cm, and a petal width of 0.2 cm. What species of iris would this be? We can put this data into a NumPy array, which in this case will be of shape 1 x 4 (1 row/sample x 4 features).
Note: Even though we made the measurements of this single flower, scikit-learn always expects two-dimensional arrays for the data.
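An equivalent way to build the required two-dimensional array from a flat list of measurements:
X_new = np.array([5, 2.9, 1, 0.2]).reshape(1, -1)  # shape (1, 4): one sample, four features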
To make a prediction, we call the predict method of the knn object.
End of explanation
y_pred = knn.predict(X_test)
print("Test set predictions:\n {}".format(y_pred))
print("Test set score: {:.2f}".format(np.mean(y_pred == y_test)))
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
Explanation: Evaluating the model
How do we know whether we can trust our model? This is where the test set that we created earlier comes in. This data was not used to build the model, but we do know what the correct species is for each iris in the test set.
Therefore, we can make a prediction for each iris in the test data and compare it against its label (the known species). We can measure how well the model works by computing the accuracy, which is the fraction of flowers for which the right species was predicted.
We can also use the score method of the knn object, which will compute the test set accuracy for us.
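The same number can also be computed with scikit-learn's metrics module:
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))  # fraction of correctly predicted test irises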
End of explanation
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
Explanation: For this model, the test set accuracy is about 0.97, which means we made the right prediction for 97% of the irises in the test set. Under some mathematical assumptions, this means that we can expect our model to be correct about 97% of the time for new irises.
A more advanced model may be able to do a better job, but with an overlapping dataset like this, it is unlikely that we would ever be able to achieve 100% accuracy.
Summary
Here is a summary of the code needed for the whole training and evaluation procedure (just 4 lines!).
This snippet contains the core code for applying any machine learning algorithm using scikit-learn. The fit, predict, and score methods are the common interface to supervised models in scikit-learn.
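Because the interface is shared, the same few lines work with a different estimator swapped in; for example, logistic regression (shown only as a sketch of the common interface):
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(max_iter=1000)
logreg.fit(X_train, y_train)
print("Test set score: {:.2f}".format(logreg.score(X_test, y_test)))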
End of explanation
import numbers
import numpy as np
from sklearn.utils import check_array, check_random_state
from sklearn.utils import shuffle as shuffle_
def make_blobs(n_samples=100, n_features=2, centers=2, cluster_std=1.0,
center_box=(-10.0, 10.0), shuffle=True, random_state=None):
"""Generate isotropic Gaussian blobs for clustering.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, or tuple, optional (default=100)
The total number of points equally divided among clusters.
n_features : int, optional (default=2)
The number of features for each sample.
centers : int or array of shape [n_centers, n_features], optional
(default=3)
The number of centers to generate, or the fixed center locations.
cluster_std: float or sequence of floats, optional (default=1.0)
The standard deviation of the clusters.
center_box: pair of floats (min, max), optional (default=(-10.0, 10.0))
The bounding box for each cluster center when centers are
generated at random.
shuffle : boolean, optional (default=True)
Shuffle the samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The generated samples.
y : array of shape [n_samples]
The integer labels for cluster membership of each sample.
Examples
--------
>>> from sklearn.datasets.samples_generator import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
See also
--------
make_classification: a more intricate variant
"""
generator = check_random_state(random_state)
if isinstance(centers, numbers.Integral):
centers = generator.uniform(center_box[0], center_box[1],
size=(centers, n_features))
else:
centers = check_array(centers)
n_features = centers.shape[1]
if isinstance(cluster_std, numbers.Real):
cluster_std = np.ones(len(centers)) * cluster_std
X = []
y = []
n_centers = centers.shape[0]
if isinstance(n_samples, numbers.Integral):
n_samples_per_center = [int(n_samples // n_centers)] * n_centers
for i in range(n_samples % n_centers):
n_samples_per_center[i] += 1
else:
n_samples_per_center = n_samples
for i, (n, std) in enumerate(zip(n_samples_per_center, cluster_std)):
X.append(centers[i] + generator.normal(scale=std,
size=(n, n_features)))
y += [i] * n
X = np.concatenate(X)
y = np.array(y)
if shuffle:
X, y = shuffle_(X, y, random_state=generator)
return X, y
def make_forge():
# a carefully hand-designed dataset lol
X, y = make_blobs(centers=2, random_state=4, n_samples=30)
y[np.array([7, 27])] = 0
mask = np.ones(len(X), dtype=np.bool)
mask[np.array([0, 1, 5, 26])] = 0
X, y = X[mask], y[mask]
return X, y
from sklearn.model_selection import train_test_split
X, y = make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
import matplotlib as mpl
from matplotlib.colors import colorConverter
def discrete_scatter(x1, x2, y=None, markers=None, s=10, ax=None,
labels=None, padding=.2, alpha=1, c=None, markeredgewidth=None):
"""Adaption of matplotlib.pyplot.scatter to plot classes or clusters.
Parameters
----------
x1 : nd-array
input data, first axis
x2 : nd-array
input data, second axis
y : nd-array
input data, discrete labels
cmap : colormap
Colormap to use.
markers : list of string
List of markers to use, or None (which defaults to 'o').
s : int or float
Size of the marker
padding : float
Fraction of the dataset range to use for padding the axes.
alpha : float
Alpha value for all points.
"""
if ax is None:
ax = plt.gca()
if y is None:
y = np.zeros(len(x1))
unique_y = np.unique(y)
if markers is None:
markers = ['o', '^', 'v', 'D', 's', '*', 'p', 'h', 'H', '8', '<', '>'] * 10
if len(markers) == 1:
markers = markers * len(unique_y)
if labels is None:
labels = unique_y
# lines in the matplotlib sense, not actual lines
lines = []
current_cycler = mpl.rcParams['axes.prop_cycle']
for i, (yy, cycle) in enumerate(zip(unique_y, current_cycler())):
mask = y == yy
# if c is none, use color cycle
if c is None:
color = cycle['color']
elif len(c) > 1:
color = c[i]
else:
color = c
# use light edge for dark markers
if np.mean(colorConverter.to_rgb(color)) < .4:
markeredgecolor = "grey"
else:
markeredgecolor = "black"
lines.append(ax.plot(x1[mask], x2[mask], markers[i], markersize=s,
label=labels[i], alpha=alpha, c=color,
markeredgewidth=markeredgewidth,
markeredgecolor=markeredgecolor)[0])
if padding != 0:
pad1 = x1.std() * padding
pad2 = x2.std() * padding
xlim = ax.get_xlim()
ylim = ax.get_ylim()
ax.set_xlim(min(x1.min() - pad1, xlim[0]), max(x1.max() + pad1, xlim[1]))
ax.set_ylim(min(x2.min() - pad2, ylim[0]), max(x2.max() + pad2, ylim[1]))
return lines
plt.figure(figsize=(10,6))
discrete_scatter(X[:, 0], X[:, 1], y)
plt.legend(["training class 0", "training class 1"])
from sklearn.metrics import euclidean_distances
def plot_knn_classification(n_neighbors=1):
X, y = make_forge()
X_test = np.array([[8.2, 3.66214339], [9.9, 3.2], [11.2, .5]])
dist = euclidean_distances(X, X_test)
closest = np.argsort(dist, axis=0)
for x, neighbors in zip(X_test, closest.T):
for neighbor in neighbors[:n_neighbors]:
plt.arrow(x[0], x[1], X[neighbor, 0] - x[0],
X[neighbor, 1] - x[1], head_width=0, fc='k', ec='k')
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
test_points = discrete_scatter(X_test[:, 0], X_test[:, 1], clf.predict(X_test), markers="*")
training_points = discrete_scatter(X[:, 0], X[:, 1], y)
plt.legend(training_points + test_points, ["training class 0", "training class 1",
"test pred 0", "test pred 1"])
# k = 1
plot_knn_classification(n_neighbors=1)
Explanation: Effect of k
Let's investigate the effect of varying k. The iris dataset isn't ideal for this purpose, so let's create a synthetic dataset that is better suited to it.
End of explanation
# k = 3
plot_knn_classification(n_neighbors=3)
Explanation: Here, we added three new data points, shown as stars. For each of them, we marked the closest point in the training set. The prediction of the one-nearest-neighbor algorithm is the label of that point (shown by the color of the cross).
Instead of considering only the closest neighbor, we can also consider an arbitrary number, k, of neighbors. This is where the name of the k-nearest neighbors algorithm comes from. When considering more than one neighbor, we use voting to assign a label. This means that for each test point, we count how many neighbors belong to class 0 and how many neighbors belong to class 1. We then assign the class that is more frequent: in other words, the majority class among the k-nearest neighbors. The following example uses the three closest neighbors.
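To inspect the vote itself, the fitted classifier's kneighbors method returns the distances and indices of the nearest training points; a small sketch using the forge data from above:
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
dist, ind = clf.kneighbors(X_test)  # one row per test point, three neighbor indices each
print(y[ind])                       # the training labels that take part in each vote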
End of explanation
from matplotlib.colors import ListedColormap
cm2 = ListedColormap(['#0000aa', '#ff2020'])
def plot_2d_separator(classifier, X, fill=False, ax=None, eps=None, alpha=1,
cm=cm2, linewidth=None, threshold=None, linestyle="solid"):
# binary?
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 100)
yy = np.linspace(y_min, y_max, 100)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
try:
decision_values = classifier.decision_function(X_grid)
levels = [0] if threshold is None else [threshold]
fill_levels = [decision_values.min()] + levels + [decision_values.max()]
except AttributeError:
# no decision_function
decision_values = classifier.predict_proba(X_grid)[:, 1]
levels = [.5] if threshold is None else [threshold]
fill_levels = [0] + levels + [1]
if fill:
ax.contourf(X1, X2, decision_values.reshape(X1.shape),
levels=fill_levels, alpha=alpha, cmap=cm)
else:
ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels,
colors="black", alpha=alpha, linewidths=linewidth,
linestyles=linestyle, zorder=5)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate
# and fit in one line
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title("{} neighbor(s)".format(n_neighbors))
ax.set_xlabel("feature 0")
ax.set_ylabel("feature 1")
axes[0].legend(loc=3)
Explanation: Again, the prediction is shown as the color of the cross. You can see that the prediction for the new data point at the top left is not the same as the prediction when we used only one neighbor.
While this illustration is for a binary classification problem, this method can be applied to datasets with any number of classes. For more classes, we count how many neighbors belong to each class and again predict the most common class.
Analyzing Decision Boundary as k varies
For two-dimensional datasets, we can also illustrate the prediction for all possible test points in the xy-plane. We color the plane according to the class that would be assigned to a point in this region. This lets us view the decision boundary, which is the divide between where the algorithm assigns class 0 versus where it assigns class 1. The following code produces the visualizations of the decision boundaries for one, three, and nine neighbors.
End of explanation
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=66)
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
# build the model
clf = KNeighborsClassifier(n_neighbors=n_neighbors)
clf.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(clf.score(X_train, y_train))
# record generalization accuracy
test_accuracy.append(clf.score(X_test, y_test))
plt.figure(figsize=(10,6))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
Explanation: As you can see on the left in the figure, using a single neighbor results in a decision boundary that follows the training data closely. Considering more and more neighbors leads to a smoother decision boundary. A smoother boundary corresponds to a simpler model. In other words, using few neighbors corresponds to high model complexity, and using many neighbors corresponds to low model complexity. If you consider the extreme case where the number of neighbors is the number of all data points in the training set, each test point would have exactly the same neighbors (all training points) and all predictions would be the same: the class that is most frequent in the training set.
Let’s investigate whether we can confirm the connection between model complexity and generalization. We will do this on the real-world Breast Cancer dataset. We begin by splitting the dataset into a training and a test set. Then we evaluate training and test set performance with different numbers of neighbors.
End of explanation |
14,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the dispersion relation for slab geometry
We will here calculate the real and imaginary part of the dispersion relation given in
Pécseli, H -Low Frequency Waves and Turbulence in Magnetized Laboratory Plasmas and in the Ionosphere, 2016
Step1: The pure drift wave
We are here checking if the dispersion relation in 6.4.1 - Pure drift-wave is easy to work with
Step2: The difference in the two solutions is just the sign of the square-root
Step3: This is cumbersome to work with.
Let's use something simpler.
Resistive drift waves with $T_i=0$
We are here checking if the dispersion relation in 5.5 - Dispersion relation, is easy to work with.
Step4: The difference in the two solutions is just the sign of the square-root.
The first solution gives the largest growth rate
Step5: This also gives quite the mess...
Splitting to $\omega_R$ and $\omega_I$ | Python Code:
from sympy import init_printing
from sympy import Eq, I
from sympy import re, im
from sympy import symbols
from sympy.solvers import solve
from IPython.display import display
from sympy import latex
om = symbols('omega')
omI = symbols('omega_i', real=True)
omStar = symbols('omega_S', real=True)
sigmaPar = symbols('sigma', positive=True)
b = symbols('b', real=True)
init_printing()
Explanation: Get the dispersion relation for slab geometry
We will here calculate the real and imaginary part of the dispersion relation given in
Pécseli, H -Low Frequency Waves and Turbulence in Magnetized Laboratory Plasmas and in the Ionosphere, 2016
End of explanation
LHS = om*(om-omI)+I*sigmaPar*(om-omStar + b*(om-omI))
RHS = 0
eq = Eq(LHS, RHS)
display(eq)
sol1, sol2 = solve(eq, om)
display(sol1)
display(sol2)
Explanation: The pure drift wave
We are here checking if the dispersion relation in 6.4.1 - Pure drift-wave is easy to work with
End of explanation
sol1Re = re(sol1)
sol1Im = im(sol1)
display(sol1Re)
display(sol1Im)
Explanation: The difference in the two solutions is just the sign of the square-root
End of explanation
LHS = om**2 + I*sigmaPar*(om*(1+b)-omStar)
RHS = 0
eq = Eq(LHS, RHS)
display(eq)
sol1, sol2 = solve(eq, om)
display(sol1)
display(sol2)
Explanation: This is cumbersome to work with.
Let's use something simpler.
Resistive drift waves with $T_i=0$
We are here checking if the dispersion relation in 5.5 - Dispersion relation, is easy to work with.
End of explanation
sol1Re = re(sol1.expand())
sol1Im = im(sol1.expand())
real = Eq(symbols("I"),sol1Im.simplify())
imag = Eq(symbols("R"),sol1Re.simplify())
display(real)
display(imag)
print(latex(real))
print(latex(imag))
sol2Re = re(sol2.expand())
sol2Im = im(sol2.expand())
display(Eq(symbols("I"),sol2Im.simplify()))
display(Eq(symbols("R"),sol2Re.simplify()))
Explanation: The difference in the two solutions is just the sign of the square-root.
The first solution gives the largest growth rate
End of explanation
# NOTE: Do not confuse om_I with om_i
om_I = symbols('omega_I', real=True)
om_R = symbols('omega_R', real=True)
LHSSplit = LHS.subs(om, om_R + I*om_I)
display(Eq(LHS, re(LHSSplit)+I*im(LHSSplit)))
Explanation: This also gives quite the mess...
Splitting to $\omega_R$ and $\omega_I$
End of explanation |
14,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 3 - Expanded Query Builder Functions
Step1: More Query Builders
match_exists
match_exists() matches entries where there is any value in the field. As long as the field exists in the entry, it's a match.
Step2: match_not_exists
match_not_exists() is the opposite of match_exists(). Any entry without the given field is a match.
Step3: match_range
match_range() is the same as match_field() except that the value is a range. Strings are allowed as ranges at your own risk (they're evaluated based on alphabetical order, but can sometimes have unexpected results).
Step4: exclude_range
Similarly, you can exclude a range with exclude_range().
Step5: exclusive_match
If you want a key to have exactly a certain value, use exclusive_match(). This is most useful in the case of lists. | Python Code:
from mdf_forge.forge import Forge
mdf = Forge()
Explanation: Part 3 - Expanded Query Builder Functions
End of explanation
mdf.match_exists("services.globus_publish")
mdf.search(limit=10)
Explanation: More Query Builders
match_exists
match_exists() matches entries where there is any value in the field. As long as the field exists in the entry, it's a match.
End of explanation
mdf.match_not_exists("services.mrr")
mdf.search(limit=10)
Explanation: match_not_exists
match_not_exists() is the opposite of match_exists(). Any entry without the given field is a match.
End of explanation
mdf.match_range("mdf.scroll_id", 0, 10)
Explanation: match_range
match_range() is the same as match_field() except that the value is a range. Strings are allowed as ranges at your own risk (they're evaluated based on alphabetical order, but can sometimes have unexpected results).
End of explanation
mdf.exclude_range("mdf.scroll_id", 100, 199)
mdf.search(limit=10)
Explanation: exclude_range
Similarly, you can exclude a range with exclude_range().
End of explanation
mdf.exclusive_match("material.elements", "O").search(limit=10)
Explanation: exclusive_match
If you want a key to have exactly a certain value, use exclusive_match(). This is most useful in the case of lists.
End of explanation |
14,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transpressional deformation
Step1: Here we will examine strain evolution during transpression deformation. Transpression (Sanderson and Marchini, 1984) is considered as a wrench or transcurrent shear accompanied by horizontal shortening across, and vertical lengthening along, the shear plane.
<img width="65%" src="images/trasnpression.png">
Velocity gradient associated with transpressional deformation is defined as
Step2: Here we define some constants including bulk strain rate.
Step3: We define 2D arrays of angles and times to be examined...
Step4: and loop over to calculate symmetry and intensity for each combination
Step5: Now we can plot results. | Python Code:
%pylab inline
from scipy import linalg as la
Explanation: Transpressional deformation
End of explanation
def KDparams(F):
u, s, v = svd(F)
Rxy = s[0]/s[1]
Ryz = s[1]/s[2]
K = (Rxy-1)/(Ryz-1)
D = sqrt((Rxy-1)**2 + (Ryz-1)**2)
return K, D
Explanation: Here we will examine strain evolution during transpression deformation. Transpression (Sanderson and Marchini, 1984) is considered as a wrench or transcurrent shear accompanied by horizontal shortening across, and vertical lengthening along, the shear plane.
<img width="65%" src="images/trasnpression.png">
Velocity gradient associated with transpressional deformation is defined as: $ \mathbf{L} = \begin{bmatrix} 0 & \dot{\gamma} & 0 \\ 0 & -\dot{\varepsilon} & 0 \\ 0 & 0 & \dot{\varepsilon} \end{bmatrix} $, where $\dot{\gamma}$ and $\dot{\varepsilon}$ are components of bulk strain rate in direction of convergence.
First, we will define a function to calculate symmetry and intensity of deformation from the deformation gradient.
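Because the velocity gradient is constant in time, the deformation gradient follows from integrating $\dot{\mathbf{F}} = \mathbf{L}\mathbf{F}$ with $\mathbf{F}(0)=\mathbf{I}$, which gives $\mathbf{F}(t) = \exp(\mathbf{L}\,t)$; this is why the matrix exponential la.expm is used below to build F for each angle and time.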
End of explanation
yearsec = 365.25*24*3600
sr = 3e-15
Explanation: Here we define some constants including bulk strain rate.
End of explanation
times = linspace(0.00000001,10,20)
alphas = linspace(0,90,20)
time, alpha = meshgrid(times, alphas)
K = zeros_like(alpha)
D = zeros_like(alpha)
Explanation: We define 2D arrays of angles and times to be examined...
End of explanation
for (r,c) in np.ndindex(alpha.shape):
a = deg2rad(alpha[r,c])
t = time[r,c]*1e6*yearsec
edot = sr*sin(a)
gdot = sr*cos(a)
L = array([[0, gdot, 0], [0, -edot, 0],[0, 0, edot]])
F = la.expm(L*t)
K[r,c], D[r,c] = KDparams(F)
Explanation: and loop over to calculate symmetry and intensity for each combination
End of explanation
contourf(time, alpha, K, linspace(0, 1, 11))
colorbar()
contourf(time, alpha, D, linspace(0, 2.5, 11))
colorbar()
from IPython.core.display import HTML
def css_styling():
styles = open("./css/sg2.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Now we can plot results.
End of explanation |
14,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
2 images are produced. One with a good channel and one with a channel
that does not see any evoked field.
It is also demonstrated how to reorder the epochs using a 1D spectral
embedding as described in
Step1: Set parameters
Step2: Show event related fields images | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
2 images are produced. One with a good channel and one with a channel
that does not see any evoked field.
It is also demonstrated how to reorder the epochs using a 1D spectral
embedding as described in:
Graph-based variability estimation in single-trial event-related neural
responses A. Gramfort, R. Keriven, M. Clerc, 2010,
Biomedical Engineering, IEEE Trans. on, vol. 57 (5), 1051-1061
https://hal.inria.fr/inria-00497023
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
Explanation: Set parameters
End of explanation
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.cluster.spectral import spectral_embedding # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
order=order_func, vmin=-250, vmax=250,
overlay_times=plt_times, show=True)
Explanation: Show event related fields images
End of explanation |
14,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 12
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Time series analysis
NOTE
Step3: The following function takes a DataFrame of transactions and compute daily averages.
Step5: The following function returns a map from quality name to a DataFrame of daily averages.
Step6: dailies is the map from quality name to DataFrame.
Step7: The following plots the daily average price for each quality.
Step8: We can use statsmodels to run a linear model of price as a function of time.
Step9: Here's what the results look like.
Step11: Now let's plot the fitted model with the data.
Step13: The following function plots the original data and the fitted curve.
Step14: Here are results for the high quality category
Step15: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
Step16: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
Step18: The following function plots the rolling mean.
Step19: Here's what it looks like for the high quality category.
Step21: The exponentially-weighted moving average gives more weight to more recent points.
Step24: We can use resampling to generate missing values with the right amount of noise.
Step25: Here's what the EWMA model looks like with missing values filled.
Step26: Serial correlation
The following function computes serial correlation with the given lag.
Step27: Before computing correlations, we'll fill missing values.
Step28: Here are the serial correlations for raw price data.
Step29: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interested to see whether there are still correlations after we subtract away the trends.
Step30: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
Step31: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
Step33: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
Step35: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
Step37: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Friday and Saturdays.
Step38: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
Step39: Here's what it would look like if there were a weekly cycle.
Step41: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
Step42: Here's what the prediction looks like for the high quality category, using the linear model.
Step44: When we generate predictions, we want to quatify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to general fake datasets. It fits the same model to each fake dataset and returns a list of results.
Step46: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
Step48: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
Step49: Here are the results for the high quality category.
Step51: But there is one more source of uncertainty
Step53: And this function plots the results.
Step54: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
Step56: Exercises
Exercise
Step60: Exercise
Step61: Bonus Example | Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
Explanation: Chapter 12
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/mj-clean.csv")
transactions = pd.read_csv("mj-clean.csv", parse_dates=[5])
transactions.head()
Explanation: Time series analysis
NOTE: Some of the examples in this chapter have been updated to work with more recent versions of the libraries.
Load the data from "Price of Weed".
End of explanation
def GroupByDay(transactions, func=np.mean):
"""Groups transactions by day and computes the daily mean ppg.
transactions: DataFrame of transactions
returns: DataFrame of daily prices
"""
grouped = transactions[["date", "ppg"]].groupby("date")
daily = grouped.aggregate(func)
daily["date"] = daily.index
start = daily.date[0]
one_year = np.timedelta64(1, "Y")
daily["years"] = (daily.date - start) / one_year
return daily
Explanation: The following function takes a DataFrame of transactions and computes daily averages.
End of explanation
def GroupByQualityAndDay(transactions):
"""Divides transactions by quality and computes mean daily price.
transactions: DataFrame of transactions
returns: map from quality to time series of ppg
"""
groups = transactions.groupby("quality")
dailies = {}
for name, group in groups:
dailies[name] = GroupByDay(group)
return dailies
Explanation: The following function returns a map from quality name to a DataFrame of daily averages.
End of explanation
dailies = GroupByQualityAndDay(transactions)
Explanation: dailies is the map from quality name to DataFrame.
End of explanation
import matplotlib.pyplot as plt
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i + 1)
title = "Price per gram ($)" if i == 0 else ""
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
plt.xticks(rotation=30)
thinkplot.Config()
else:
thinkplot.Config(xticks=[])
Explanation: The following plots the daily average price for each quality.
End of explanation
import statsmodels.formula.api as smf
def RunLinearModel(daily):
model = smf.ols("ppg ~ years", data=daily)
results = model.fit()
return model, results
Explanation: We can use statsmodels to run a linear model of price as a function of time.
End of explanation
from IPython.display import display
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print(name)
display(results.summary())
Explanation: Here's what the results look like.
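Individual quantities can also be pulled from the results object instead of reading the whole summary; a small sketch using standard statsmodels attributes:
model, results = RunLinearModel(dailies['high'])
print(results.params['Intercept'], results.params['years'])  # fitted intercept and slope
print(results.pvalues['years'], results.rsquared)            # p-value of the slope and R^2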
End of explanation
def PlotFittedValues(model, results, label=""):
"""Plots original data and fitted values.
model: StatsModel model object
results: StatsModel results object
"""
years = model.exog[:, 1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label="model", color="#ff7f00")
Explanation: Now let's plot the fitted model with the data.
End of explanation
def PlotLinearModel(daily, name):
"""Plots a linear fit to a sequence of prices, and the residuals.
daily: DataFrame of daily prices
name: string
"""
model, results = RunLinearModel(daily)
PlotFittedValues(model, results, label=name)
thinkplot.Config(
title="Fitted values",
xlabel="Years",
xlim=[-0.1, 3.8],
ylabel="Price per gram ($)",
)
Explanation: The following function plots the original data and the fitted curve.
End of explanation
name = "high"
daily = dailies[name]
PlotLinearModel(daily, name)
Explanation: Here are results for the high quality category:
End of explanation
array = np.arange(10)
Explanation: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
End of explanation
series = pd.Series(array)
series.rolling(3).mean()
Explanation: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
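The min_periods argument controls how many observations are required before a value is produced, which avoids the leading nans; for example:
series.rolling(3, min_periods=1).mean()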
End of explanation
def PlotRollingMean(daily, name):
"""Plots rolling mean.
daily: DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.Series(reindexed.ppg).rolling(30).mean()
thinkplot.Plot(roll_mean, label="rolling mean", color="#ff7f00")
plt.xticks(rotation=30)
thinkplot.Config(ylabel="price per gram ($)")
Explanation: The following function plots the rolling mean.
End of explanation
PlotRollingMean(daily, name)
Explanation: Here's what it looks like for the high quality category.
End of explanation
def PlotEWMA(daily, name):
"""Plots the exponentially-weighted moving average.
daily: DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = reindexed.ppg.ewm(30).mean()
thinkplot.Plot(roll_mean, label="EWMA", color="#ff7f00")
plt.xticks(rotation=30)
thinkplot.Config(ylabel="price per gram ($)")
PlotEWMA(daily, name)
Explanation: The exponentially-weighted moving average gives more weight to more recent points.
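In pandas, span is just a reparameterization of the smoothing factor, alpha = 2 / (span + 1), so these two calls produce the same weights:
series.ewm(span=30).mean()
series.ewm(alpha=2 / 31).mean()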
End of explanation
def FillMissing(daily, span=30):
"""Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
"""
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pd.Series(reindexed.ppg).ewm(span=span).mean()
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed["ewma"] = ewma
reindexed["resid"] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
"""Plots the EWMA and filled data.
daily: DataFrame of daily prices
"""
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label="EWMA", color="#ff7f00")
plt.xticks(rotation=30)
thinkplot.Config(ylabel="Price per gram ($)")
Explanation: We can use resampling to generate missing values with the right amount of noise.
End of explanation
PlotFilled(daily, name)
Explanation: Here's what the EWMA model looks like with missing values filled.
End of explanation
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
Explanation: Serial correlation
The following function computes serial correlation with the given lag.
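pandas provides essentially the same computation as a built-in; for example, on the small demo series from above:
series.autocorr(lag=1)  # Pearson correlation of the series with a lagged copy of itself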
End of explanation
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
Explanation: Before computing correlations, we'll fill missing values.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.ppg, lag=1)
print(name, corr)
Explanation: Here are the serial correlations for raw price data.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.resid, lag=1)
print(name, corr)
Explanation: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
End of explanation
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end="\t")
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print("%.2g" % corr, end="\t")
print()
Explanation: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
End of explanation
# NOTE: acf throws a FutureWarning because we need to replace `unbiased` with `adjusted`,
# just as soon as Colab gets updated :)
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies["high"]
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True, fft=False)
print("%0.2g, %.2g, %0.2g, %0.2g, %0.2g" % (acf[0], acf[1], acf[7], acf[30], acf[365]))
Explanation: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
End of explanation
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
acf = smtsa.acf(resid, nlags=nlags, unbiased=True, fft=False)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags + 1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color="gray")
Explanation: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
End of explanation
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
thinkplot.PrePlot(3)
daily = dailies["high"]
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily.ppg = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True, fft=False)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
Explanation: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
End of explanation
def AddWeeklySeasonality(daily):
Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
fri_or_sat = (daily.index.dayofweek == 4) | (daily.index.dayofweek == 5)
fake = daily.ppg.copy()
fake[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
Explanation: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
End of explanation
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis, loc="lower right", ylabel="correlation", xlabel="lag (day)")
Explanation: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
End of explanation
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis, loc="lower right", xlabel="lag (days)")
Explanation: Here's what it would look like if there were a weekly cycle.
End of explanation
def GenerateSimplePrediction(results, years):
Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color="#ff7f00")
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions",
xlabel="Years",
xlim=xlim,
ylabel="Price per gram ($)",
loc="upper right",
)
Explanation: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
End of explanation
name = "high"
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
Explanation: Here's what the prediction looks like for the high quality category, using the linear model.
End of explanation
def SimulateResults(daily, iters=101, func=RunLinearModel):
Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
End of explanation
def GeneratePredictions(result_seq, years, add_resid=False):
Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
Explanation: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
End of explanation
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100 - p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color="gray")
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color="gray")
Explanation: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
End of explanation
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions", xlabel="Years", xlim=xlim, ylabel="Price per gram ($)"
)
Explanation: Here are the results for the high quality category.
End of explanation
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: But there is one more source of uncertainty: how much past data should we use to build the model?
The following function generates a sequence of models based on different amounts of past data.
End of explanation
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100 - p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color="gray")
Explanation: And this function plots the results.
End of explanation
name = "high"
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0] - 0.1, years[-1] + 0.1
thinkplot.Config(
title="Predictions", xlabel="Years", xlim=xlim, ylabel="Price per gram ($)"
)
Explanation: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
End of explanation
# Solution
def RunQuadraticModel(daily):
    Runs a quadratic model of prices versus years.
daily: DataFrame of daily prices
returns: model, results
daily["years2"] = daily.years**2
model = smf.ols("ppg ~ years + years2", data=daily)
results = model.fit()
return model, results
# Solution
name = "high"
daily = dailies[name]
model, results = RunQuadraticModel(daily)
results.summary()
# Solution
PlotFittedValues(model, results, label=name)
thinkplot.Config(
title="Fitted values", xlabel="Years", xlim=[-0.1, 3.8], ylabel="price per gram ($)"
)
# Solution
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years, func=RunQuadraticModel)
thinkplot.Config(
title="predictions",
xlabel="Years",
xlim=[years[0] - 0.1, years[-1] + 0.1],
ylabel="Price per gram ($)",
)
Explanation: Exercises
Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
End of explanation
# Solution
class SerialCorrelationTest(thinkstats2.HypothesisTest):
Tests serial correlations by permutation.
def TestStatistic(self, data):
Computes the test statistic.
data: tuple of xs and ys
series, lag = data
test_stat = abs(SerialCorr(series, lag))
return test_stat
def RunModel(self):
Run the model of the null hypothesis.
returns: simulated data
series, lag = self.data
permutation = series.reindex(np.random.permutation(series.index))
return permutation, lag
# Solution
# test the correlation between consecutive prices
name = "high"
daily = dailies[name]
series = daily.ppg
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# Solution
# test for serial correlation in residuals of the linear model
_, results = RunLinearModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# Solution
# test for serial correlation in residuals of the quadratic model
_, results = RunQuadraticModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
Explanation: Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.
End of explanation
name = "high"
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel="Daily change in price per gram ($)")
filled["slope"] = diffs.ewm(span=365).mean()
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel="EWMA of diff ($)")
# extract the last inter and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma[-1]
slope = filled.slope[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(), filled.index.max() + np.timedelta64(365, "D"))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted["date"] = predicted.index
one_day = np.timedelta64(1, "D")
predicted["days"] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color="#ff7f00")
Explanation: Bonus Example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as an intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
End of explanation |
14,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Likelihood Functions and Confidence Intervals
by Alex Drlica-Wagner
Introduction
This notebook attempts to pragmatically address several questions about deriving uncertainty intervals from a likelihood analysis.
Step1: 1D Likelihood
As a simple and straightforward starting example, we begin with a 1D Gaussian likelihood function.
Step2: For this simple likelihood function, we could analytically compute the maximum likelihood estimate and confidence intervals. However, for more complicated likelihoods an analytic solution may not be possible. As an introduction to these cases it is informative to proceed numerically.
Step3: To find the 68% confidence intervals, we can calculate the delta-log-likelihood. The test statistic (TS) is defined as ${\rm TS} = -2\Delta \log \mathcal{L}$ and is $\chi^2$-distributed. Therefore, the confidence intervals on a single parameter can be read off of a $\chi^2$ table with 1 degree of freedom (dof).
| 2-sided Interval | p-value | $\chi^2_{1}$ | Gaussian $\sigma$ |
|------|------|------|------|
| 68% | 32% | 1.000 | 1.00 |
| 90% | 10% | 2.706 | 1.64 |
| 95% | 5% | 3.841 | 1.96 |
| 99% | 1% | 6.635 | 2.58 |
Step4: These numbers might look familiar. They are the number of standard deviations that you need to go out in the standard normal distribution to contain the requested fraction of the distribution (i.e., 68%, 90%, 95%).
Step5: 2D Likelihood
Now we extend the example above to a 2D likelihood function. We define the likelihood with the same multivariate_normal function, but now add a second dimension and a covariance between the two dimensions. These parameters are adjustable if you would like to play around with them.
Step6: The case now becomes a bit more complicated. If you want to set a confidence interval on a single parameter, you cannot simply project the likelihood onto the dimension of interest. Doing so would ignore the correlation between the two parameters.
Step7: In the plot above we are showing two different 1D projections of the 2D likelihood function. The red curve shows the projected likelihood scanning in values of $x$ and always assuming the value of $y$ that maximized the likelihood. On the other hand, the black curve shows the 1D likelihood derived by scanning in values of $x$ and at each value of $x$ maximizing the value of the likelihood with respect to the $y$-parameter. In other words, the red curve is ignoring the correlation between the two parameters while the black curve is accounting for it. As you can see from the values printed above the plot, the intervals derived from the red curve underestimate the analytically derived values, while the intervals on the black curve properly reproduce the analytic estimate.
Just to verify the result quoted above, we derive intervals on $x$ at several different confidence levels. We start with the projected likelihood with $y$ fixed at $y_{\rm max}$.
Step8: Below are the confidence intervals in $x$ derived from the profile likelihood technique. As you can see, these values match the analytically derived values.
Step9: By plotting the likelihood contours, it is easy to see why the profile likelihood technique performs correctly while naively slicing through the likelihood plane does not. The profile likelihood is essentially tracing the ridgeline of the 2D likelihood function, thus intersecting the contour of delta-log-likelihood at its most distant point. This can be seen from the black lines in the 2D likelihood plot below.
Step10: MCMC Posterior Sampling
One way to explore the posterior distribution is through MCMC sampling. This gives an alternative method for deriving confidence intervals. Now, rather than maximizing the likelihood as a function of the other parameter, we marginalize (integrate) over that parameter. This is more computationally intensive, but is more robust in the case of complex likelihood functions.
Step11: These results aren't perfect since they are suspect to random variations in the sampling, but they are pretty close. Plotting the distribution of samples, we see something very similar to the plots we generated for the likelihood alone (which is good since out prior was flat). | Python Code:
%matplotlib inline
import numpy as np
import pylab as plt
import scipy.stats as stats
from scipy.stats import multivariate_normal as mvn
try:
import emcee
got_emcee = True
except ImportError:
got_emcee = False
try:
import corner
got_corner = True
except ImportError:
got_corner = False
plt.rcParams['axes.labelsize'] = 16
Explanation: Likelihood Functions and Confidence Intervals
by Alex Drlica-Wagner
Introduction
This notebook attempts to pragmatically address several questions about deriving uncertainty intervals from a likelihood analysis.
End of explanation
mean = 2.0; cov = 1.0
rv = mvn(mean,cov)
lnlfn = lambda x: rv.logpdf(x)
x = np.linspace(-2,6,5000)
lnlike = lnlfn(x)
plt.plot(x,lnlike,'-k'); plt.xlabel(r'$x$'); plt.ylabel('$\log \mathcal{L}$');
Explanation: 1D Likelihood
As a simple and straightforward starting example, we begin with a 1D Gaussian likelihood function.
End of explanation
# You can use any complicated optimizer that you want (e.g. scipy.optimize)
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
mle = x[np.argmax(lnlike)]
print "Maximum Likelihood Estimate: %.2f"%mle
print "Maximum Likelihood Value: %.2f"%maxlike
Explanation: For this simple likelihood function, we could analytically compute the maximum likelihood estimate and confidence intervals. However, for more complicated likelihoods an analytic solution may not be possible. As an introduction to these cases it is informative to proceed numerically.
End of explanation
def interval(x, lnlike, delta=1.0):
maxlike = np.max(lnlike)
ts = -2 * (lnlike - maxlike)
lower = x[np.argmax(ts < delta)]
upper = x[len(ts) - np.argmax((ts < delta)[::-1]) - 1]
return lower, upper
intervals = [(68,1.0),
(90,2.706),
(95,3.841)]
plt.plot(x,lnlike,'-k'); plt.xlabel(r'$x$'); plt.ylabel('$\log \mathcal{L}$');
kwargs = dict(ls='--',color='k')
plt.axhline(maxlike - intervals[0][1]/2.,**kwargs)
print "Confidence Intervals:"
for cl,delta in intervals:
lower,upper = interval(x,lnlike,delta)
print " %i%% CL: x = %.2f [%+.2f,%+.2f]"%(cl,mle,lower-mle,upper-mle)
plt.axvline(lower,**kwargs); plt.axvline(upper,**kwargs);
Explanation: To find the 68% confidence intervals, we can calculate the delta-log-likelihood. The test statistic (TS) is defined as ${\rm TS} = -2\Delta \log \mathcal{L}$ and is $\chi^2$-distributed. Therefore, the confidence intervals on a single parameter can be read off of a $\chi^2$ table with 1 degree of freedom (dof).
| 2-sided Interval | p-value | $\chi^2_{1}$ | Gaussian $\sigma$ |
|------|------|------|------|
| 68% | 32% | 1.000 | 1.00 |
| 90% | 10% | 2.706 | 1.64 |
| 95% | 5% | 3.841 | 1.96 |
| 99% | 1% | 6.635 | 2.58 |
End of explanation
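In other words, since ${\rm TS} = -2\Delta \log \mathcal{L} \sim \chi^2_{1}$ for a single parameter, the interval endpoints are where the log-likelihood has fallen below its maximum by
$$
-\Delta \log \mathcal{L} = \tfrac{1}{2}\chi^2_{1},
$$
i.e. by $0.5$ for 68%, $1.35$ for 90% and $1.92$ for 95%. This is why the horizontal reference line in the code above is drawn at maxlike - intervals[0][1]/2.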
for cl, d in intervals:
sigma = stats.norm.isf((100.-cl)/2./100.)
print " %i%% = %.2f sigma"%(cl,sigma)
Explanation: These numbers might look familiar. They are the number of standard deviations that you need to go out in the standard normal distribution to contain the requested fraction of the distribution (i.e., 68%, 90%, 95%).
End of explanation
mean = [2.0,1.0]
cov = [[1,1],[1,2]]
rv = stats.multivariate_normal(mean,cov)
lnlfn = lambda x: rv.logpdf(x)
print "Mean:",rv.mean.tolist()
print "Covariance",rv.cov.tolist()
xx, yy = np.mgrid[-4:6:.01, -4:6:.01]
values = np.dstack((xx, yy))
lnlike = lnlfn(values)
fig2 = plt.figure(figsize=(8,6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:,:,0], values[:,:,1], lnlike ,aspect='auto'); plt.colorbar(im,label='$\log \mathcal{L}$')
plt.xlabel('$x$'); plt.ylabel('$y$');
plt.show()
# You can use any complicated optimizer that you want (e.g. scipy.optimize)
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
maxidx = np.unravel_index(np.argmax(lnlike),lnlike.shape)
mle_x, mle_y = mle = values[maxidx]
print "Maximum Likelihood Estimate:",mle
print "Maximum Likelihood Value:",maxlike
Explanation: 2D Likelihood
Now we extend the example above to a 2D likelihood function. We define the likelihood with the same multivariate_normal function, but now add a second dimension and a covariance between the two dimensions. These parameters are adjustable if you would like to play around with them.
End of explanation
lnlike -= maxlike
x = xx[:,maxidx[1]]
delta = 2.706
# This is the loglike projected (sliced) at y = mle[1]
plt.plot(x, lnlike[:,maxidx[1]],'-r');
lower,upper = max_lower,max_upper = interval(x,lnlike[:,maxidx[1]],delta)
plt.axvline(lower,ls='--',c='r'); plt.axvline(upper,ls='--',c='r')
y_max = yy[:,maxidx[1]]
# This is the profile likelihood where we maximize over the y-dimension
plt.plot(x, lnlike.max(axis=1),'-k')
lower,upper = profile_lower,profile_upper = interval(x,lnlike.max(axis=1),delta)
plt.axvline(lower,ls='--',c='k'); plt.axvline(upper,ls='--',c='k')
plt.xlabel('$x$'); plt.ylabel('$\log \mathcal{L}$')
y_profile = yy[lnlike.argmax(axis=0),lnlike.argmax(axis=1)]
print "Projected Likelihood (red):\t %.1f [%+.2f,%+.2f]"%(mle[0],max_lower-mle[0],max_upper-mle[0])
print "Profile Likelihood (black):\t %.1f [%+.2f,%+.2f]"%(mle[0],profile_lower-mle[0],profile_upper-mle[0])
Explanation: The case now becomes a bit more complicated. If you want to set a confidence interval on a single parameter, you cannot simply project the likelihood onto the dimension of interest. Doing so would ignore the correlation between the two parameters.
End of explanation
for cl, d in intervals:
lower,upper = interval(x,lnlike[:,maxidx[1]],d)
print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl,mle[0],lower-mle[0],upper-mle[0])
Explanation: In the plot above we are showing two different 1D projections of the 2D likelihood function. The red curve shows the projected likelihood scanning in values of $x$ and always assuming the value of $y$ that maximized the likelihood. On the other hand, the black curve shows the 1D likelihood derived by scanning in values of $x$ and at each value of $x$ maximizing the value of the likelihood with respect to the $y$-parameter. In other words, the red curve is ignoring the correlation between the two parameters while the black curve is accounting for it. As you can see from the values printed above the plot, the intervals derived from the red curve underestimate the analytically derived values, while the intervals on the black curve properly reproduce the analytic estimate.
Just to verify the result quoted above, we derive intervals on $x$ at several different confidence levels. We start with the projected likelihood with $y$ fixed at $y_{\rm max}$.
End of explanation
for cl, d in intervals:
lower,upper = interval(x,lnlike.max(axis=1),d)
print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl,mle[0],lower-mle[0],upper-mle[0])
Explanation: Below are the confidence intervals in $x$ derived from the profile likelihood technique. As you can see, these values match the analytically derived values.
End of explanation
fig2 = plt.figure(figsize=(8,6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:,:,0], values[:,:,1], lnlike ,aspect='auto'); plt.colorbar(im,label='$\log \mathcal{L}$')
im = ax2.contour(values[:,:,0], values[:,:,1], lnlike , levels=[-delta/2], colors=['k'], aspect='auto', zorder=10,lw=2);
plt.axvline(mle[0],ls='--',c='k'); plt.axhline(mle[1],ls='--',c='k');
plt.axvline(max_lower,ls='--',c='r'); plt.axvline(max_upper,ls='--',c='r')
plt.axvline(profile_lower,ls='--',c='k'); plt.axvline(profile_upper,ls='--',c='k')
plt.plot(x,y_max,'-r'); plt.plot(x,y_profile,'-k')
plt.xlabel('$x$'); plt.ylabel('$y$');
plt.show()
Explanation: By plotting the likelihood contours, it is easy to see why the profile likelihood technique performs correctly while naively slicing through the likelihood plane does not. The profile likelihood is essentially tracing the ridgeline of the 2D likelihood function, thus intersecting the contour of delta-log-likelihood at its most distant point. This can be seen from the black lines in the 2D likelihood plot below.
End of explanation
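This can also be checked analytically. For the Gaussian defined above, with $\Sigma_{xx} = 1$, $\Sigma_{yy} = 2$ and $\Sigma_{xy} = 1$, slicing at fixed $y$ corresponds to the conditional variance
$$
\sigma^2_{x|y} = \Sigma_{xx} - \Sigma_{xy}^2/\Sigma_{yy} = 1 - \tfrac{1}{2} = 0.5,
$$
whereas the profile likelihood recovers the full marginal variance $\sigma^2_{x} = \Sigma_{xx} = 1$. The sliced intervals are therefore too narrow by a factor of $\sqrt{0.5} \approx 0.71$, which matches the numbers printed above.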
# Remember, the posterior probability is the likelihood times the prior
lnprior = lambda x: 0
def lnprob(x):
return lnlfn(x) + lnprior(x)
if got_emcee:
nwalkers=100
ndim, nwalkers = len(mle), 100
pos0 = [np.random.rand(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, threads=2)
# This takes a while...
sampler.run_mcmc(pos0, 5000)
samples = sampler.chain[:, 100:, :].reshape((-1, ndim))
x_samples,y_samples = samples.T
for cl in [68,90,95]:
x_lower,x_mle,x_upper = np.percentile(x_samples,q=[(100-cl)/2.,50,100-(100-cl)/2.])
print " %i%% CL:"%cl, "x = %.2f [%+.2f,%+.2f]"%(x_mle,x_lower-x_mle,x_upper-x_mle)
Explanation: MCMC Posterior Sampling
One way to explore the posterior distribution is through MCMC sampling. This gives an alternative method for deriving confidence intervals. Now, rather than maximizing the likelihood as a function of the other parameter, we marginalize (integrate) over that parameter. This is more computationally intensive, but is more robust in the case of complex likelihood functions.
End of explanation
if got_corner:
fig = corner.corner(samples, labels=["$x$","$y$"],truths=mle,quantiles=[0.05, 0.5, 0.95],range=[[-4,6],[-4,6]])
Explanation: These results aren't perfect since they are subject to random variations in the sampling, but they are pretty close. Plotting the distribution of samples, we see something very similar to the plots we generated for the likelihood alone (which is good since our prior was flat).
End of explanation |
14,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMC2017
Step1: II.1 Likelihood estimates for the stochastic volatility model
Consider the stochastic volatility model
$$
\begin{align}
x_t\,|\,x_{t - 1} &\sim \mathcal{N}\left(\phi \cdot x_{t - 1},\,\sigma^2\right) \
y_t\,|\,x_t &\sim \mathcal{N}\left(0,\,\beta^2 \exp(x_t)\right) \
x_0 &\sim \mathcal{N}\left(0,\,\sigma^2\right)
\end{align}
$$
with parameter vector $\theta = (\phi, \sigma, \beta)$.
Step2: a) Likelihood estimation for different values of $\beta$
Consider fixed values for $\phi = 0.98$ and $\sigma = 0.16$. $\beta$ is allowed to vary between 0 and 2.
Step3: Run the bootstrap particle filter to estimate the log-likelihood.
Step4: b) Study how $N$ and $T$ affect the variance of the log-likelihood estimate
Step5: Variance reduces exponentially with growing $N$.
Step6: Variance increases linearly with growing $T$.
c) Study the influence of resampling on the variance of the estimator
Step7: Without resampling the variance is larger and log-likelihood is generally lower.
II.2 Fully adapted particle filter
b) Implement the FAPF for model (ii) and compare the variance of the estimates of $\mathbb{E}(X_t\,|\,y_{1:t})$ to the estimates obtained by a bootstrap particle filter
Step8: Try to recover the simulated states from the measurements.
Step9: The FAPF tracks the simulated states remarkably well
Step10: Comparison of variances
Step11: II.3 Likelihood estimator for the APF
This is a theoretical exercise. Look in exercises_on_paper.
II.4 Forgetting
Consider the linear state space model (SSM)
$$
\begin{array}{rcll}
X_t & = & 0.7 X_{t - 1} & \
Y_t & = & 0.5 X_t + E_t, & \qquad E_t \sim \mathcal{N}(0, 0.1)
\end{array}
$$
with $X_0 \sim \mathcal{N}(0, 1)$.
Simulate some data from the model. It is not quite clear from the exercise if $Q = 0$ already during data simulation.
Step12: Kalman filter, the exact solution to the filtering problem
Step13: Bootstrap particle filter for the problem
Step14: Testing both implementations. Bootstrap PF as well as the Kalman filter follow the states rather nicely.
Step15: If however no noise in the model is assumed, then the state recovery works a lot worse.
Step16: Looking at the mean-squared-error for the test function $\phi(x_t) = x_t$ | Python Code:
import numpy as np
from scipy import stats
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style()
path = '..\\..\\..\\..\\course_material\\exercise_sheets\\'
Explanation: SMC2017: Exercise set II
Setup
End of explanation
data = pd.read_csv(path + 'seOMXlogreturns2012to2014.csv',
header=None, names=['logreturn'])
y = data.logreturn.values
fig, ax = plt.subplots()
ax.plot(y)
Explanation: II.1 Likelihood estimates for the stochastic volatility model
Consider the stochastic volatility model
$$
\begin{align}
x_t\,|\,x_{t - 1} &\sim \mathcal{N}\left(\phi \cdot x_{t - 1},\,\sigma^2\right) \
y_t\,|\,x_t &\sim \mathcal{N}\left(0,\,\beta^2 \exp(x_t)\right) \
x_0 &\sim \mathcal{N}\left(0,\,\sigma^2\right)
\end{align}
$$
with parameter vector $\theta = (\phi, \sigma, \beta)$.
End of explanation
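Before filtering, it can help to simulate a short trajectory directly from these equations. A minimal sketch, using $\phi = 0.98$ and $\sigma = 0.16$ as above and an illustrative $\beta = 0.70$ (the default used by the bootstrap filter below):
import numpy as np
from scipy import stats
phi, sigma, beta = 0.98, 0.16, 0.70
T_sim = 200
x_sim = np.zeros(T_sim + 1)
y_synth = np.zeros(T_sim)
x_sim[0] = stats.norm.rvs(0, sigma)
for t in range(T_sim):
    # state transition: x_t | x_{t-1} ~ N(phi * x_{t-1}, sigma^2)
    x_sim[t + 1] = phi * x_sim[t] + stats.norm.rvs(0, sigma)
    # observation: y_t | x_t ~ N(0, beta^2 * exp(x_t))
    y_synth[t] = stats.norm.rvs(0, beta * np.exp(x_sim[t + 1] / 2))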
theta = [0.98, 0.16]
def likelihood_bootstrap_pf(N, y, beta=0.70, resample=True, logweights=True):
# Cumulatively build up log-likelihood
ll = 0.0
# Initialisation
samples = stats.norm.rvs(0, theta[1], N)
weights = 1 / N * np.ones((N,))
weights_normalized = weights
# Determine the number of time steps
T = len(y)
# Loop through all time steps
for t in range(T):
# Resample
if resample:
# Randomly choose ancestors
ancestors = np.random.choice(samples, size=N,
replace=True, p=weights_normalized)
else:
ancestors = samples
# Propagate
samples = stats.norm.rvs(0, 1, N) * theta[1] + theta[0] * ancestors
if logweights:
# Weight
weights = stats.norm.logpdf(y[t], loc=0,
scale=(beta * np.exp(samples / 2)))
# Calculate the max of the weights
max_weights = np.max(weights)
# Subtract the max
weights = weights - max_weights
# Update log-likelihood
ll += max_weights + np.log(np.sum(np.exp(weights))) - np.log(N)
# Normalize weights to be probabilities
weights_normalized = np.exp(weights) / np.sum(np.exp(weights))
else:
# Weight
weights = stats.norm.pdf(y[t], loc=0,
scale=(beta * np.exp(samples / 2)))
# Update log-likelihood
ll += np.log(np.sum(weights)) - np.log(N)
# Normalize weights to be probabilities
weights_normalized = weights / np.sum(weights)
return ll
Explanation: a) Likelihood estimation for different values of $\beta$
Consider fixed values for $\phi = 0.98$ and $\sigma = 0.16$. $\beta$ is allowed to vary between 0 and 2.
End of explanation
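One implementation detail worth highlighting: the log-likelihood update in likelihood_bootstrap_pf relies on the log-sum-exp identity
$$
\log \sum_{i=1}^{N} e^{w_i} = \max_i w_i + \log \sum_{i=1}^{N} e^{w_i - \max_i w_i},
$$
so subtracting the maximum log-weight before exponentiating changes nothing mathematically but prevents numerical underflow when the weights are very small.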
def simulate(N=500, T=500, resample=True):
ll = []
beta_count = len(np.arange(0.5, 2.25, 0.1))
for beta in np.arange(0.5, 2.25, 0.1):
for i in range(10):
ll.append(likelihood_bootstrap_pf(N, y[:T], beta, resample))
ll = np.transpose(np.reshape(ll, (beta_count, 10)))
return ll
fig, ax = plt.subplots(figsize=(10, 5))
ax.boxplot(simulate(500, 500), labels=np.arange(0.5, 2.25, 0.1));
Explanation: Run the bootstrap particle filter to estimate the log-likelihood.
End of explanation
variances = []
ns = [10, 15, 20, 25, 40, 50, 75, 100, 150, 200]
for N in ns:
lls = []
for i in range(50):
lls.append(likelihood_bootstrap_pf(N, y, beta=0.9))
# Calculate variance
variances.append(np.var(lls))
fig, ax = plt.subplots()
ax.plot(ns, variances, 'o-')
Explanation: b) Study how $N$ and $T$ affect the variance of the log-likelihood estimate
End of explanation
variances = []
ts = range(10, 501, 35)
for T in ts:
lls = []
for i in range(60):
lls.append(likelihood_bootstrap_pf(200, y[:T], beta=0.9))
# Calculate variance
variances.append(np.var(lls))
fig, ax = plt.subplots()
ax.plot(ts, variances, 'o-')
Explanation: Variance reduces exponentially with growing $N$.
End of explanation
lls = np.zeros((60, 2))
# With resampling
for i in range(60):
lls[i, 0] = likelihood_bootstrap_pf(200, y, beta=0.9)
# Without resampling
for i in range(60):
lls[i, 1] = likelihood_bootstrap_pf(200, y, beta=0.9, resample=False)
fig, ax = plt.subplots()
ax.boxplot(lls, labels=['Resampling', 'No resampling']);
Explanation: Variance increases linearly with growing $T$.
c) Study the influence of resampling on the variance of the estimator
End of explanation
T = 100
# Allocate arrays for results
ys = np.zeros((T,))
xs = np.zeros((T + 1,))
# Initial value for state
xs[0] = 0.1
# Walk through all time steps
for t in range(T):
xs[t + 1] = np.power(np.cos(xs[t]), 2) + stats.norm.rvs(0, 1, 1)
ys[t] = 2 * xs[t + 1] + stats.norm.rvs(0, 0.1, 1)
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(range(T + 1), xs, 'o-');
axs[1].plot(range(1, T + 1), ys, 'o-r');
def fully_adapted_PF(N, y):
# Save particles
xs = []
# Initialisation
samples = stats.norm.rvs(0, 1, N)
# Save initial data
xs.append(samples)
# Determine length of data
T = len(y)
for t in range(T):
# Calculate resampling weights in case of FAPF
resampling_weights = stats.norm.pdf(
y[t], loc=2*np.power(np.cos(samples), 2), scale=np.sqrt(4.01))
# Normalize the resampling weights
resampling_weights /= np.sum(resampling_weights)
# Resample
ancestors = np.random.choice(samples, size=N, replace=True,
p=resampling_weights)
# Propagate
samples = stats.norm.rvs(0, 1, N) * 0.1 / np.sqrt(4.01) + \
(2 / 4.01) * y[t] + (0.01 / 4.01) * np.power(np.cos(ancestors), 2)
# Save the new samples
xs.append(samples)
return np.array(xs)
Explanation: Without resampling the variance is larger and log-likelihood is generally lower.
II.2 Fully adapted particle filter
b) Implement the FAPF for model (ii) and compare the variance of the estimates of $\mathbb{E}(X_t\,|\,y_{1:t})$ to the estimates obtained by a bootstrap particle filter
The state-space model under consideration is (normal distribution parametrized with $\sigma^2$)
$$
\begin{array}{rll}
x_{t + 1} &= \cos(x_t)^2 + v_t, & v_t \sim N(0, 1) \
y_t &= 2 x_t + e_t, & e_t \sim N(0, 0.01)
\end{array}
$$
which leads to the probabilistic model
$$
\begin{align}
p(x_t\,|\,x_{t - 1}) &= N\left(x_t;\,\cos(x_{t - 1})^2,\,1\right) \
p(y_t\,|\,x_t) &= N\left(y_t;\,2 x_t,\,0.01\right)
\end{align}
$$
This admits the necessary pdfs
$$
\begin{align}
p(y_t\,|\,x_{t - 1}) &= N(y_t;\,2 \cos(x_{t - 1})^2,\,4.01) \
p(x_t\,|\,x_{t - 1},\,y_t) &= N\left(x_t;\,\frac{2 y_t + 0.01 \cos(x_{t - 1})^2}{4.01}, \frac{0.01}{4.01}\right)
\end{align}
$$
Simulate a trajectory to use for the particle filters.
End of explanation
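For completeness, a short derivation of the two densities used above: substituting the state equation into the observation equation gives $y_t = 2\cos(x_{t-1})^2 + 2 v_t + e_t$, a sum of independent Gaussians with variance $2^2 \cdot 1 + 0.01 = 4.01$, which is $p(y_t\,|\,x_{t-1})$ as stated. The optimal proposal then follows from standard Gaussian conditioning: the posterior precision is $1 + 2^2/0.01 = 401$, so the variance is $1/401 = 0.01/4.01$ and the mean is
$$
\frac{2 y_t + 0.01 \cos(x_{t-1})^2}{4.01},
$$
exactly the quantities used in fully_adapted_PF above.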
xs_filtered = fully_adapted_PF(1000, ys)
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(xs, 'ok')
ax.plot(np.apply_along_axis(np.mean, 1, xs_filtered), 'o-')
ax.legend(['Simulated data', 'FAPF'])
Explanation: Try to recover the simulated states from the measurements.
End of explanation
def bootstrap_PF(N, y):
# Save the history
xs = []
ws = []
# Initialisation
samples = stats.norm.rvs(0, 1, N)
weights = 1 / N * np.ones((N,))
weights_normalized = weights
# Save weights and samples
ws.append(weights_normalized)
xs.append(samples)
# Determine the number of time steps
T = len(y)
# Loop through all time steps
for t in range(T):
# Resample
# Randomly choose ancestors
ancestors = np.random.choice(samples, size=N,
replace=True, p=weights_normalized)
# Propagate
samples = stats.norm.rvs(0, 1, N) + np.power(np.cos(ancestors), 2)
# Save the new x
xs.append(samples)
# Weight
weights = stats.norm.logpdf(y[t], loc=2 * samples, scale=0.1)
        # Subtract maximum
weights = weights - np.max(weights)
# Normalize weights to be probabilities
weights_normalized = np.exp(weights) / np.sum(np.exp(weights))
# Save the new normalized weights
ws.append(weights_normalized)
return np.array(xs), np.array(ws)
xs_filtered, ws = bootstrap_PF(300, ys)
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(np.apply_along_axis(np.sum, 1, xs_filtered * ws))
ax.plot(xs, '--')
Explanation: The FAPF tracks the simulated states remarkably well.
For comparison, here is the bootstrap particle filter for this model
End of explanation
M = 50
N = 20
fully_adapted_estimates = np.zeros((M, T + 1))
bootstrap_estimates = np.zeros((M, T + 1))
for k in range(M):
xs_filtered = fully_adapted_PF(N, ys)
fully_adapted_estimates[k, :] = np.apply_along_axis(np.mean, 1, xs_filtered)
xs_filtered, ws = bootstrap_PF(N, ys)
bootstrap_estimates[k, :] = np.apply_along_axis(np.sum, 1, xs_filtered * ws)
fully_adapted_variances = np.apply_along_axis(np.var, 0, fully_adapted_estimates)
bootstrap_variances = np.apply_along_axis(np.var, 0, bootstrap_estimates)
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(bootstrap_variances);
ax.plot(fully_adapted_variances);
Explanation: Comparison of variances
End of explanation
# Max. time steps
T = 2000
# Store the simulated measurements
xs_sim = np.zeros((T + 1,))
ys_sim = np.zeros((T,))
# Initial value
xs_sim[0] = stats.norm.rvs()
# Simulate the state and measurement process
for t in range(T):
xs_sim[t + 1] = 0.7 * xs_sim[t] + 0.1 * stats.norm.rvs()
ys_sim[t] = 0.5 * xs_sim[t + 1] + 0.1 * stats.norm.rvs()
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(xs_sim);
axs[0].set_title('Simulated states');
axs[0].set_xlabel('Time');
axs[0].set_ylabel('$x_t$');
axs[1].plot(range(1, T + 1), ys_sim, 'r');
axs[1].set_title('Simulated measurements');
axs[1].set_xlabel('Time');
axs[1].set_ylabel('$y_t$');
Explanation: II.3 Likelihood estimator for the APF
This is a theoretical exercise. Look in exercises_on_paper.
II.4 Forgetting
Consider the linear state space model (SSM)
$$
\begin{array}{rcll}
X_t & = & 0.7 X_{t - 1} & \
Y_t & = & 0.5 X_t + E_t, & \qquad E_t \sim \mathcal{N}(0, 0.1)
\end{array}
$$
with $X_0 \sim \mathcal{N}(0, 1)$.
Simulate some data from the model. It is not quite clear from the exercise if $Q = 0$ already during data simulation.
End of explanation
def kalman_filter(y, A=0.7, C=0.5, Q=0.0, R=0.1, P0=1):
# Determine length of data
T = len(y)
# Filtered means and standard deviations
means_filtered = np.zeros((T + 1,))
covs_filtered = np.zeros((T + 1,))
# Initialize with covariance of prior
covs_filtered[0] = P0
# Kalman recursion
for t in range(T):
# Time update
covs_time_upd = np.power(A, 2) * covs_filtered[t] + Q
# Kalman gain
kalman_gain = C * covs_time_upd / (np.power(C, 2) * covs_time_upd + R)
# Filter updates
means_filtered[t + 1] = A * means_filtered[t] + \
kalman_gain * (y[t] - C * A * means_filtered[t])
covs_filtered[t + 1] = covs_time_upd - kalman_gain * C * covs_time_upd
return means_filtered, covs_filtered
Explanation: Kalman filter, the exact solution to the filtering problem
End of explanation
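For reference, the scalar recursions implemented above are the standard Kalman filter for this model:
$$
P^{-}_t = A^2 P_{t-1} + Q, \qquad K_t = \frac{C P^{-}_t}{C^2 P^{-}_t + R},
$$
$$
\hat{x}_t = A \hat{x}_{t-1} + K_t \left(y_t - C A \hat{x}_{t-1}\right), \qquad P_t = (1 - K_t C)\, P^{-}_t.
$$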
def bootstrap_PF(y, N=100, A=0.7, C=0.5, Q=0.0, R=0.1, P0=1):
# Length of the data
T = len(y)
# Pre-allocate data storage
xs = np.zeros((N, T + 1))
ws = np.zeros((N, T + 1))
# Initialize
xs[:, 0] = stats.norm.rvs(0, P0, size=N)
ws[:, 0] = 1 / N * np.ones((N,))
for t in range(T):
# Resample
ancestors = np.random.choice(range(N), size=N,
replace=True, p=ws[:, t])
# Propagate
xs[:, t + 1] = A * xs[ancestors, t] + \
np.sqrt(Q) * stats.norm.rvs(size=N)
# Weight
# Use log weights
ws[:, t + 1] = stats.norm.logpdf(y[t], loc=C * xs[:, t + 1],
scale=np.sqrt(R))
# Find maximum and subtract from log weights
ws[:, t + 1] -= np.max(ws[:, t + 1])
# Normalize weights
ws[:, t + 1] = np.exp(ws[:, t + 1]) / np.sum(np.exp(ws[:, t + 1]))
return xs, ws
Explanation: Bootstrap particle filter for the problem
End of explanation
Tmax = 100
N = 50000
means_kf, stddevs_kf = kalman_filter(ys_sim[:Tmax], Q=0.1)
xs, ws = bootstrap_PF(ys_sim[:Tmax], N=N, Q=0.1)
means_bpf = np.sum(xs * ws, axis=0)
fig, ax = plt.subplots()
ax.plot(xs_sim[:Tmax], 'ok')
ax.plot(means_bpf, 'o-')
ax.plot(means_kf, 'x-')
ax.set_xlabel('Time')
ax.set_title("$N = {}$".format(N))
ax.legend(['Simulated state', 'BPF', 'Kalman']);
Explanation: Testing both implementations. Bootstrap PF as well as the Kalman filter follow the states rather nicely.
End of explanation
Tmax = 100
N = 50000
means_kf, stddevs_kf = kalman_filter(ys_sim[:Tmax], Q=0.0)
xs, ws = bootstrap_PF(ys_sim[:Tmax], N=N, Q=0.0)
means_bpf = np.sum(xs * ws, axis=0)
fig, ax = plt.subplots()
ax.plot(xs_sim[:Tmax], 'ok')
ax.plot(means_bpf, 'o-')
ax.plot(means_kf, 'x-')
ax.set_xlabel('Time')
ax.set_title("$N = {}$".format(N))
ax.legend(['Simulated state', 'BPF', 'Kalman']);
Explanation: If however no noise in the model is assumed, then the state recovery works a lot worse.
End of explanation
M = 100
Tmax = 50
mses = np.zeros((Tmax + 1,))
# Get the exact solution
means_kf, stddevs_kf = kalman_filter(ys_sim[:Tmax], Q=0.1)
# Iterate and repeatedly calculate approximation
for i in range(M):
xs, ws = bootstrap_PF(ys_sim[:Tmax], N=100, Q=0.1)
means_bpf = np.sum(xs * ws, axis=0)
# Add to mean squared errors
mses += np.power(means_bpf - means_kf, 2.0)
# Divide by number of repetitions
mses /= M
fig, ax = plt.subplots()
ax.plot(mses, 'o-')
ax.set_xlabel('Time')
ax.set_ylabel('MSE');
Explanation: Looking at the mean-squared-error for the test function $\phi(x_t) = x_t$
End of explanation |
14,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file.
Step2: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
Display the PMF.
Define <tt>BiasPmf</tt>. | Python Code:
%matplotlib inline
import chap01soln
resp = chap01soln.ReadFemResp()
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file.
End of explanation
def BiasPmf(pmf, label=''):
Returns the Pmf with oversampling proportional to value.
If pmf is the distribution of true values, the result is the
distribution that would be seen if values are oversampled in
proportion to their values; for example, if you ask students
how big their classes are, large classes are oversampled in
proportion to their size.
Args:
pmf: Pmf object.
label: string label for the new Pmf.
Returns:
Pmf object
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
Explanation: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.
Display the PMF.
Define <tt>BiasPmf</tt>.
End of explanation |
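The PMF construction and plotting steps described above are not shown in this excerpt; a minimal sketch of how they might look, assuming the usual thinkstats2 / thinkplot helper functions from the book's support code:
import thinkstats2
import thinkplot
pmf = thinkstats2.Pmf(resp.numkdhh, label='actual')
biased_pmf = BiasPmf(pmf, label='observed')
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel='Number of children', ylabel='PMF')
print('actual mean', pmf.Mean())
print('observed mean', biased_pmf.Mean())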
14,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 14 (or so)
Step1: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
Step2: So great, we have 702 of them. Now let's import them.
Step3: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Take a look at the contents of the first 5 speeches
Step4: Doing our analysis
Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.
Be sure to include English-language stopwords
Step5: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
Step6: Now let's push all of that into a dataframe with nicely named columns.
Step7: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
Step8: What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
Step9: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
Step10: Now what if I'm using a TfidfVectorizer?
Step11: What's the content of the speeches? Here's a way to get them
Step12: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you think might be interesting.
Step13: Enough of this garbage, let's cluster
Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Step14: Which one do you think works the best?
Not sure. The last one (the TF-IDF vectorizer with the custom tokenizer) didn't run as originally written, so I am going with number 2.
Harry Potter time
I have a scraped collection of Harry Potter fanfiction at https
Step15: Term Frequency Vectorizer
Step16: Simple Counting Vectorizer | Python Code:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz
Explanation: Homework 14 (or so): TF-IDF text analysis and clustering
Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat.
No, just kidding, we're professionals now.
Investigating the Congressional Record
The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe?
Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here.
End of explanation
# glob finds files matching a certain filename pattern
import glob
# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]
len(paths)
Explanation: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
End of explanation
import pandas as pd  # needed for the DataFrame below
speeches = []
for path in paths:
with open(path) as speech_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': speech_file.read()
}
speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()
Explanation: So great, we have 702 of them. Now let's import them.
End of explanation
All_speeches = speeches_df['content']
First_five_speeches = speeches_df['content'].head(5)
First_five_speeches
Explanation: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Take a look at the contents of the first 5 speeches
End of explanation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
count_vectorizer = CountVectorizer(stop_words='english')
speech_tokens = count_vectorizer.fit_transform(All_speeches)
count_vectorizer.get_feature_names()
All_tokens = pd.DataFrame(speech_tokens.toarray(), columns=count_vectorizer.get_feature_names())
#All_tokens
Explanation: Doing our analysis
Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.
Be sure to include English-language stopwords
End of explanation
count_vectorizer_100 = CountVectorizer(max_features=100, stop_words='english')
speech_tokens_top100 = count_vectorizer_100.fit_transform(speeches_df['content'])
Explanation: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
End of explanation
Top_100_tokens = pd.DataFrame(speech_tokens_top100.toarray(), columns=count_vectorizer_100.get_feature_names())
Top_100_tokens.head()
Explanation: Now let's push all of that into a dataframe with nicely named columns.
End of explanation
speeches_df.info()
Top_100_tokens['No_chairman'] = Top_100_tokens['chairman'] == 0
Top_100_tokens[Top_100_tokens['No_chairman'] == True].count().head(1)
Top_100_tokens['no_mr'] = Top_100_tokens['mr'] == 0
Top_100_tokens[Top_100_tokens['no_mr'] == True].count().head(1)
Explanation: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
End of explanation
Top_100_tokens['thank'].sort_values(ascending=False).head(1)
Explanation: What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
End of explanation
Top_100_tokens['china trade'] = Top_100_tokens['china'] + Top_100_tokens['trade']
Top_100_tokens['china trade'].sort_values(ascending=False).head(3)
Explanation: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
End of explanation
idf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True)
Top_100_tokens_idf = idf_vectorizer.fit_transform(All_speeches)
idf_df = pd.DataFrame(Top_100_tokens_idf.toarray(), columns=idf_vectorizer.get_feature_names())
idf_df['china trade'] = idf_df['china'] + idf_df['trade']
idf_df['china trade'].sort_values(ascending=False).head(3)
Explanation: Now what if I'm using a TfidfVectorizer?
End of explanation
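To see why the rankings change, a tiny toy corpus makes the difference concrete: raw counts reward any frequent word, while TF-IDF downweights words that show up in many documents. A quick sketch (toy strings, not the speeches):
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
toy_docs = [
    "china china trade trade trade",
    "china budget budget",
    "budget education education",
]
cv = CountVectorizer()
print(cv.fit(toy_docs).get_feature_names())
print(cv.transform(toy_docs).toarray())          # raw counts
print(TfidfVectorizer().fit_transform(toy_docs).toarray().round(2))  # IDF-weighted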
# index 0 is the first speech, which was the first one imported.
paths[402]
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
!cat {paths[577]}
Explanation: What's the content of the speeches? Here's a way to get them:
End of explanation
All_tokens['chaos'] = All_tokens['chaos'].sort_values(ascending=False) >= 1
All_tokens[All_tokens['chaos'] == True].count().head(1)
Explanation: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you think might be interesting.
End of explanation
#simple counting vectorizer,
from sklearn.cluster import KMeans
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
count_vectorizer = CountVectorizer(stop_words='english')
X = count_vectorizer.fit_transform(All_speeches)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
# term frequency vectorizer,
vectorizer = TfidfVectorizer(use_idf=True, stop_words='english')
X = vectorizer.fit_transform(All_speeches)
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
#term frequency inverse document frequency vectorizer
import re  # used by the tokenizer below
def oh_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
return words
l2_vectorizer = TfidfVectorizer(use_idf=True, stop_words='english', tokenizer=oh_tokenizer)
X = l2_vectorizer.fit_transform(speeches_df['content'])
l2_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names())
# re-fit KMeans on this matrix and look up terms from this vectorizer
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = l2_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
    print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
Explanation: Enough of this garbage, let's cluster
Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
End of explanation
!curl -O https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip
!unzip hp.zip
import glob
paths = glob.glob('hp/*.txt')
paths[:5]
len(paths)
Harry_Potter_fiction = []
for path in paths:
with open(path) as Harry_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': Harry_file.read()
}
Harry_Potter_fiction.append(speech)
Harry_df = pd.DataFrame(Harry_Potter_fiction)
Harry_df.head()
All_of_Harry = Harry_df['content']
All_of_Harry.head()
Explanation: Which one do you think works the best?
Not sure. The last one (the TF-IDF vectorizer with the custom tokenizer) didn't run as originally written, so I am going with number 2.
Harry Potter time
I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip.
I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis?
End of explanation
vectorizer = TfidfVectorizer(use_idf=True, stop_words='english')
X = vectorizer.fit_transform(All_of_Harry)
# KMeans clustering is a method of clustering.
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
#Cluster 1 is about Lily and James, whoever they are. Wait: His parents.
#Cluster 2 is about Harry and Hermione.
Explanation: Term Frequency Vectorizer
End of explanation
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
count_vectorizer = CountVectorizer(stop_words='english')
X = count_vectorizer.fit_transform(All_of_Harry)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
Explanation: Simple Counting Vectorizer
End of explanation |
14,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Super-Resolution using an Efficient Sub-Pixel CNN
Author
Step1: Load data
Step2: We create training and validation datasets via image_dataset_from_directory.
Step3: We rescale the images to take values in the range [0, 1].
Step4: Let's visualize a few sample images
Step5: We prepare a dataset of test image paths that we will use for
visual evaluation at the end of this example.
Step6: Crop and resize images
Let's process image data.
First, we convert our images from the RGB color space to the
YUV colour space.
For the input data (low-resolution images),
we crop the image, retrieve the y channel (luminance),
and resize it with the area method (use BICUBIC if you use PIL).
We only consider the luminance channel
in the YUV color space because humans are more sensitive to
luminance change.
For the target data (high-resolution images), we just crop the image
and retrieve the y channel.
Step7: Let's take a look at the input and target data.
Step8: Build a model
Compared to the paper, we add one more layer and we use the relu activation function
instead of tanh.
It achieves better performance even though we train the model for fewer epochs.
Step12: Define utility functions
We need to define several utility functions to monitor our results
Step13: Define callbacks to monitor training
The ESPCNCallback object will compute and display
the PSNR metric.
This is the main metric we use to evaluate super-resolution performance.
Step14: Define ModelCheckpoint and EarlyStopping callbacks.
Step15: Train the model
Step16: Run model prediction and plot the results
Let's compute the reconstructed version of a few images and save the results. | Python Code:
import tensorflow as tf
import os
import math
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import array_to_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing import image_dataset_from_directory
from IPython.display import display
Explanation: Image Super-Resolution using an Efficient Sub-Pixel CNN
Author: Xingyu Long<br>
Date created: 2020/07/28<br>
Last modified: 2020/08/27<br>
Description: Implementing Super-Resolution using Efficient sub-pixel model on BSDS500.
Introduction
ESPCN (Efficient Sub-Pixel CNN), proposed by Shi, 2016
is a model that reconstructs a high-resolution version of an image given a low-resolution version.
It leverages efficient "sub-pixel convolution" layers, which learn an array of
image upscaling filters.
In this code example, we will implement the model from the paper and train it on a small dataset,
BSDS500.
Setup
End of explanation
dataset_url = "http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz"
data_dir = keras.utils.get_file(origin=dataset_url, fname="BSR", untar=True)
root_dir = os.path.join(data_dir, "BSDS500/data")
Explanation: Load data: BSDS500 dataset
Download dataset
We use the built-in keras.utils.get_file utility to retrieve the dataset.
End of explanation
crop_size = 300
upscale_factor = 3
input_size = crop_size // upscale_factor
batch_size = 8
train_ds = image_dataset_from_directory(
root_dir,
batch_size=batch_size,
image_size=(crop_size, crop_size),
validation_split=0.2,
subset="training",
seed=1337,
label_mode=None,
)
valid_ds = image_dataset_from_directory(
root_dir,
batch_size=batch_size,
image_size=(crop_size, crop_size),
validation_split=0.2,
subset="validation",
seed=1337,
label_mode=None,
)
Explanation: We create training and validation datasets via image_dataset_from_directory.
End of explanation
def scaling(input_image):
input_image = input_image / 255.0
return input_image
# Scale from (0, 255) to (0, 1)
train_ds = train_ds.map(scaling)
valid_ds = valid_ds.map(scaling)
Explanation: We rescale the images to take values in the range [0, 1].
End of explanation
for batch in train_ds.take(1):
for img in batch:
display(array_to_img(img))
Explanation: Let's visualize a few sample images:
End of explanation
dataset = os.path.join(root_dir, "images")
test_path = os.path.join(dataset, "test")
test_img_paths = sorted(
[
os.path.join(test_path, fname)
for fname in os.listdir(test_path)
if fname.endswith(".jpg")
]
)
Explanation: We prepare a dataset of test image paths that we will use for
visual evaluation at the end of this example.
End of explanation
# Use TF Ops to process.
def process_input(input, input_size, upscale_factor):
input = tf.image.rgb_to_yuv(input)
last_dimension_axis = len(input.shape) - 1
y, u, v = tf.split(input, 3, axis=last_dimension_axis)
return tf.image.resize(y, [input_size, input_size], method="area")
def process_target(input):
input = tf.image.rgb_to_yuv(input)
last_dimension_axis = len(input.shape) - 1
y, u, v = tf.split(input, 3, axis=last_dimension_axis)
return y
train_ds = train_ds.map(
lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
train_ds = train_ds.prefetch(buffer_size=32)
valid_ds = valid_ds.map(
lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
valid_ds = valid_ds.prefetch(buffer_size=32)
Explanation: Crop and resize images
Let's process image data.
First, we convert our images from the RGB color space to the
YUV colour space.
For the input data (low-resolution images),
we crop the image, retrieve the y channel (luminance),
and resize it with the area method (use BICUBIC if you use PIL).
We only consider the luminance channel
in the YUV color space because humans are more sensitive to
luminance change.
For the target data (high-resolution images), we just crop the image
and retrieve the y channel.
End of explanation
for batch in train_ds.take(1):
for img in batch[0]:
display(array_to_img(img))
for img in batch[1]:
display(array_to_img(img))
Explanation: Let's take a look at the input and target data.
End of explanation
def get_model(upscale_factor=3, channels=1):
conv_args = {
"activation": "relu",
"kernel_initializer": "Orthogonal",
"padding": "same",
}
inputs = keras.Input(shape=(None, None, channels))
x = layers.Conv2D(64, 5, **conv_args)(inputs)
x = layers.Conv2D(64, 3, **conv_args)(x)
x = layers.Conv2D(32, 3, **conv_args)(x)
x = layers.Conv2D(channels * (upscale_factor ** 2), 3, **conv_args)(x)
outputs = tf.nn.depth_to_space(x, upscale_factor)
return keras.Model(inputs, outputs)
Explanation: Build a model
Compared to the paper, we add one more layer and we use the relu activation function
instead of tanh.
It achieves better performance even though we train the model for fewer epochs.
End of explanation
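The last layer of get_model produces upscale_factor ** 2 channels, and tf.nn.depth_to_space rearranges them into a single upscaled channel. A quick shape check (a standalone sketch, not part of the training pipeline):
x = tf.zeros((1, 100, 100, 3 * 3))  # low-res feature maps with upscale_factor ** 2 channels
y = tf.nn.depth_to_space(x, 3)      # sub-pixel shuffle
print(x.shape, '->', y.shape)       # (1, 100, 100, 9) -> (1, 300, 300, 1)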
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
import PIL
def plot_results(img, prefix, title):
Plot the result with zoom-in area.
img_array = img_to_array(img)
img_array = img_array.astype("float32") / 255.0
# Create a new figure with a default 111 subplot.
fig, ax = plt.subplots()
im = ax.imshow(img_array[::-1], origin="lower")
plt.title(title)
# zoom-factor: 2.0, location: upper-left
axins = zoomed_inset_axes(ax, 2, loc=2)
axins.imshow(img_array[::-1], origin="lower")
# Specify the limits.
x1, x2, y1, y2 = 200, 300, 100, 200
# Apply the x-limits.
axins.set_xlim(x1, x2)
# Apply the y-limits.
axins.set_ylim(y1, y2)
plt.yticks(visible=False)
plt.xticks(visible=False)
# Make the line.
mark_inset(ax, axins, loc1=1, loc2=3, fc="none", ec="blue")
plt.savefig(str(prefix) + "-" + title + ".png")
plt.show()
def get_lowres_image(img, upscale_factor):
Return low-resolution image to use as model input.
return img.resize(
(img.size[0] // upscale_factor, img.size[1] // upscale_factor),
PIL.Image.BICUBIC,
)
def upscale_image(model, img):
Predict the result based on input image and restore the image as RGB.
ycbcr = img.convert("YCbCr")
y, cb, cr = ycbcr.split()
y = img_to_array(y)
y = y.astype("float32") / 255.0
input = np.expand_dims(y, axis=0)
out = model.predict(input)
out_img_y = out[0]
out_img_y *= 255.0
# Restore the image in RGB color space.
out_img_y = out_img_y.clip(0, 255)
out_img_y = out_img_y.reshape((np.shape(out_img_y)[0], np.shape(out_img_y)[1]))
out_img_y = PIL.Image.fromarray(np.uint8(out_img_y), mode="L")
out_img_cb = cb.resize(out_img_y.size, PIL.Image.BICUBIC)
out_img_cr = cr.resize(out_img_y.size, PIL.Image.BICUBIC)
out_img = PIL.Image.merge("YCbCr", (out_img_y, out_img_cb, out_img_cr)).convert(
"RGB"
)
return out_img
Explanation: Define utility functions
We need to define several utility functions to monitor our results:
plot_results to plot and save an image.
get_lowres_image to convert an image to its low-resolution version.
upscale_image to turn a low-resolution image to
a high-resolution version reconstructed by the model.
In this function, we use the y channel from the YUV color space
as input to the model and then combine the output with the
other channels to obtain an RGB image.
End of explanation
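A quick plumbing check of these utilities on one test image, using an untrained model instance (the output is meaningless before training; this only verifies shapes and the YCbCr round trip):
sample_img = load_img(test_img_paths[0])
lowres_sample = get_lowres_image(sample_img, upscale_factor)
print(sample_img.size, '->', lowres_sample.size)
print(upscale_image(get_model(upscale_factor, 1), lowres_sample).size)  # untrained weights, plumbing check only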
class ESPCNCallback(keras.callbacks.Callback):
def __init__(self):
super(ESPCNCallback, self).__init__()
self.test_img = get_lowres_image(load_img(test_img_paths[0]), upscale_factor)
# Store PSNR value in each epoch.
def on_epoch_begin(self, epoch, logs=None):
self.psnr = []
def on_epoch_end(self, epoch, logs=None):
print("Mean PSNR for epoch: %.2f" % (np.mean(self.psnr)))
if epoch % 20 == 0:
prediction = upscale_image(self.model, self.test_img)
plot_results(prediction, "epoch-" + str(epoch), "prediction")
def on_test_batch_end(self, batch, logs=None):
self.psnr.append(10 * math.log10(1 / logs["loss"]))
Explanation: Define callbacks to monitor training
The ESPCNCallback object will compute and display
the PSNR metric.
This is the main metric we use to evaluate super-resolution performance.
End of explanation
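For reference, with pixel values scaled to [0, 1] the PSNR reduces to a simple function of the mean squared error:
$$\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}} = 10\log_{10}\frac{1}{\mathrm{MSE}}\ \text{dB} \quad (\mathrm{MAX}_I = 1),$$
which is exactly the 10 * math.log10(1 / logs["loss"]) computed in on_test_batch_end above, since the training loss used below is mean squared error.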
early_stopping_callback = keras.callbacks.EarlyStopping(monitor="loss", patience=10)
checkpoint_filepath = "/tmp/checkpoint"
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
monitor="loss",
mode="min",
save_best_only=True,
)
model = get_model(upscale_factor=upscale_factor, channels=1)
model.summary()
callbacks = [ESPCNCallback(), early_stopping_callback, model_checkpoint_callback]
loss_fn = keras.losses.MeanSquaredError()
optimizer = keras.optimizers.Adam(learning_rate=0.001)
Explanation: Define ModelCheckpoint and EarlyStopping callbacks.
End of explanation
epochs = 100
model.compile(
optimizer=optimizer, loss=loss_fn,
)
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=valid_ds, verbose=2
)
# The model weights (that are considered the best) are loaded into the model.
model.load_weights(checkpoint_filepath)
Explanation: Train the model
End of explanation
total_bicubic_psnr = 0.0
total_test_psnr = 0.0
for index, test_img_path in enumerate(test_img_paths[50:60]):
img = load_img(test_img_path)
lowres_input = get_lowres_image(img, upscale_factor)
w = lowres_input.size[0] * upscale_factor
h = lowres_input.size[1] * upscale_factor
highres_img = img.resize((w, h))
prediction = upscale_image(model, lowres_input)
lowres_img = lowres_input.resize((w, h))
lowres_img_arr = img_to_array(lowres_img)
highres_img_arr = img_to_array(highres_img)
predict_img_arr = img_to_array(prediction)
bicubic_psnr = tf.image.psnr(lowres_img_arr, highres_img_arr, max_val=255)
test_psnr = tf.image.psnr(predict_img_arr, highres_img_arr, max_val=255)
total_bicubic_psnr += bicubic_psnr
total_test_psnr += test_psnr
print(
"PSNR of low resolution image and high resolution image is %.4f" % bicubic_psnr
)
print("PSNR of predict and high resolution is %.4f" % test_psnr)
plot_results(lowres_img, index, "lowres")
plot_results(highres_img, index, "highres")
plot_results(prediction, index, "prediction")
print("Avg. PSNR of lowres images is %.4f" % (total_bicubic_psnr / 10))
print("Avg. PSNR of reconstructions is %.4f" % (total_test_psnr / 10))
Explanation: Run model prediction and plot the results
Let's compute the reconstructed version of a few images and save the results.
End of explanation |
14,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LSTM
[recurrent.LSTM.0] units=4, activation='tanh', recurrent_activation='hard_sigmoid'
Note dropout_W and dropout_U are only applied during training phase
Step1: [recurrent.LSTM.1] units=5, activation='sigmoid', recurrent_activation='sigmoid'
Note dropout_W and dropout_U are only applied during training phase
Step2: [recurrent.LSTM.2] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True
Note dropout_W and dropout_U are only applied during training phase
Step3: [recurrent.LSTM.3] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True
Note dropout_W and dropout_U are only applied during training phase
Step4: [recurrent.LSTM.4] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=True
Note dropout_W and dropout_U are only applied during training phase
Step5: [recurrent.LSTM.5] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=False, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
Step6: [recurrent.LSTM.6] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=False, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
Step7: [recurrent.LSTM.7] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
Step8: [recurrent.LSTM.8] units=4, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, return_sequences=True, go_backwards=True, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
Step9: export for Keras.js tests | Python Code:
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid')
layer_0 = Input(shape=data_in_shape)
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3000 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: LSTM
[recurrent.LSTM.0] units=4, activation='tanh', recurrent_activation='hard_sigmoid'
Note dropout_W and dropout_U are only applied during training phase
End of explanation
data_in_shape = (8, 5)
rnn = LSTM(5, activation='sigmoid', recurrent_activation='sigmoid')
layer_0 = Input(shape=data_in_shape)
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3100 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.1] units=5, activation='sigmoid', recurrent_activation='sigmoid'
Note dropout_W and dropout_U are only applied during training phase
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3110 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.2] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True
Note dropout_W and dropout_U are only applied during training phase
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=False, go_backwards=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3120 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.3] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True
Note dropout_W and dropout_U are only applied during training phase
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=True, go_backwards=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3120 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.4] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=True
Note dropout_W and dropout_U are only applied during training phase
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=False, go_backwards=False, stateful=True)
layer_0 = Input(batch_shape=(1, *data_in_shape))
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3130 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.5] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=False, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
End of explanation
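For intuition about why predict is called twice in the stateful cases, here is an illustrative sketch using the model above (not part of the exported test data):
model.reset_states()                       # start from a clean state for this illustration
out1 = model.predict(np.array([data_in]))
out2 = model.predict(np.array([data_in]))  # generally differs from out1: the LSTM state carried over
model.reset_states()                       # clear the carried state
out3 = model.predict(np.array([data_in]))  # matches out1 again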
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=True, go_backwards=False, stateful=True)
layer_0 = Input(batch_shape=(1, *data_in_shape))
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3140 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.6] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=False, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid',
return_sequences=False, go_backwards=True, stateful=True)
layer_0 = Input(batch_shape=(1, *data_in_shape))
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3150 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.7] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
End of explanation
data_in_shape = (3, 6)
rnn = LSTM(4, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True,
return_sequences=True, go_backwards=True, stateful=True)
layer_0 = Input(batch_shape=(1, *data_in_shape))
layer_1 = rnn(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(3160 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'U', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['recurrent.LSTM.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [recurrent.LSTM.8] units=4, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, return_sequences=True, go_backwards=True, stateful=True
Note dropout_W and dropout_U are only applied during training phase
To test statefulness, model.predict is run twice
End of explanation
import os
filename = '../../../test/data/layers/recurrent/LSTM.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
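A quick round-trip check of the exported fixture file (a sketch; it simply re-reads what was just written to filename above):
import json
with open(filename) as f:
    loaded = json.load(f)
print(sorted(loaded.keys()))
print(loaded['recurrent.LSTM.0']['expected']['shape'])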
14,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy testing
Step1: jsum testing
jsum is very basic function for testing cython. It should be mcuh faster in cython than in python. | Python Code:
A = [2,2,3]
jcy.f_test(A)
print(A)
C = np.array([0,0], dtype = np.float64)
A = np.array([1,2], dtype = np.float64)
B = np.array([3,5], dtype = np.float64)
%timeit jcy.jsum_float( C, A, B, 1000)
print(C)
%timeit jpy.jsum_float( C, A, B, 1000)
print(C)
Explanation: Numpy testing
End of explanation
%timeit jcy.jsum( 100)
%timeit jpy.jsum( 100)
96.2*1e3/404
C = np.array([0,0])
A = np.array([1,2])
B = np.array([3,5])
print(A, B, C)
jpy.jsum_float( C, A, B)
print(C)
def f(A):
A[0] = 1
f(A)
print(A)
def f(A):
A[0] = 1
A = np.array([2,2,3])
f(A)
print(A)
jpy.f(A)
print(A)
Explanation: jsum testing
jsum is a very basic function for testing Cython. It should be much faster in Cython than in pure Python.
End of explanation |
14,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License
Step6: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
Step7: Sweeping beta
Make a range of values for beta, with constant gamma.
Step8: Run the simulation once for each value of beta and print total infections.
Step10: Wrap that loop in a function and return a SweepSeries object.
Step11: Sweep beta and plot the results.
Step12: Sweeping gamma
Using the same array of values for beta
Step13: And now an array of values for gamma
Step14: For each value of gamma, sweep beta and plot the results.
Step15: Exercise
Step17: SweepFrame
The following sweeps two parameters and stores the results in a SweepFrame
Step18: Here's what the SweepFrame look like.
Step19: And here's how we can plot the results.
Step20: We can also plot one line for each value of beta, although there are a lot of them.
Step21: It's often useful to separate the code that generates results from the code that plots the results, so we can run the simulations once, save the results, and then use them for different analysis, visualization, etc.
After running sweep_parameters, we have a SweepFrame with one row for each value of beta and one column for each value of gamma. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def plot_results(S, I, R):
Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
def calc_total_infected(results):
Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
return get_first_value(results.S) - get_last_value(results.S)
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
init, t0, t_end = system.init, system.t0, system.t_end
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def update_func(state, t, system):
Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
beta, gamma = system.beta, system.gamma
s, i, r = state
infected = beta * i * s
recovered = gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
Explanation: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
End of explanation
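For reference, update_func above implements the discrete SIR step, with s, i, r as population fractions and a time step of one day:
$$s_{t+1} = s_t - \beta s_t i_t, \qquad i_{t+1} = i_t + \beta s_t i_t - \gamma i_t, \qquad r_{t+1} = r_t + \gamma i_t$$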
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1]
gamma = 0.2
Explanation: Sweeping beta
Make a range of values for beta, with constant gamma.
End of explanation
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(system.beta, calc_total_infected(results))
Explanation: Run the simulation once for each value of beta and print total infections.
End of explanation
def sweep_beta(beta_array, gamma):
Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
Explanation: Wrap that loop in a function and return a SweepSeries object.
End of explanation
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected')
savefig('figs/chap13-fig01.pdf')
Explanation: Sweep beta and plot the results.
End of explanation
beta_array
Explanation: Sweeping gamma
Using the same array of values for beta
End of explanation
gamma_array = [0.2, 0.4, 0.6, 0.8]
Explanation: And now an array of values for gamma
End of explanation
plt.figure(figsize=(7, 4))
for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected',
loc='upper left')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
savefig('figs/chap13-fig02.pdf')
Explanation: For each value of gamma, sweep beta and plot the results.
End of explanation
# Solution
# Sweep beta with fixed gamma
gamma = 1/2
infected_sweep = sweep_beta(beta_array, gamma)
# Solution
# Interpolating by eye, we can see that the infection rate passes through 0.4
# when beta is between 0.6 and 0.7
# We can use the `crossings` function to interpolate more precisely
# (although we don't know about it yet :)
beta_estimate = crossings(infected_sweep, 0.4)
# Solution
# Time between contacts is 1/beta
time_between_contacts = 1/beta_estimate
Explanation: Exercise: Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts.
End of explanation
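A sketch of what the crossings call above is doing (an assumption on my part: it interpolates the sweep to find where the curve reaches the target level; the fraction infected increases monotonically with beta here, so plain interpolation gives a similar answer):
beta_by_interp = np.interp(0.4, infected_sweep.values, infected_sweep.index.values.astype(float))
beta_by_interp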
def sweep_parameters(beta_array, gamma_array):
Sweep a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
returns: SweepFrame with one row for each beta
and one column for each gamma
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
Explanation: SweepFrame
The following sweeps two parameters and stores the results in a SweepFrame
End of explanation
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
Explanation: Here's what the SweepFrame look like.
End of explanation
for gamma in gamma_array:
label = 'gamma = ' + str(gamma)
plot(frame[gamma], label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected',
title='',
loc='upper left')
Explanation: And here's how we can plot the results.
End of explanation
plt.figure(figsize=(7, 4))
for beta in [1.1, 0.9, 0.7, 0.5, 0.3]:
label = 'beta = ' + str(beta)
plot(frame.row[beta], label=label)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Fraction infected')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
savefig('figs/chap13-fig03.pdf')
Explanation: We can also plot one line for each value of beta, although there are a lot of them.
End of explanation
contour(frame)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Contact rate (beta)',
title='Fraction infected, contour plot')
savefig('figs/chap13-fig04.pdf')
Explanation: It's often useful to separate the code that generates results from the code that plots the results, so we can run the simulations once, save the results, and then use them for different analysis, visualization, etc.
After running sweep_parameters, we have a SweepFrame with one row for each value of beta and one column for each value of gamma.
End of explanation |
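Since the parameter sweep takes a while, one convenient pattern is to save the results and reload them later (a sketch, assuming SweepFrame behaves like a pandas DataFrame):
frame.to_csv('chap13_sweep.csv')
# ...later, possibly in a separate session...
import pandas as pd
frame_reloaded = pd.read_csv('chap13_sweep.csv', index_col=0)
# note: the column labels (gamma values) come back as strings after a CSV round trip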
14,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create a variable of the true number of deaths of an event
Step2: Create a variable that is denotes if the while loop should keep running
Step3: while running is True | Python Code:
import random
Explanation: Title: while Statement
Slug: while_statements
Summary: while Statement
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
A while loop loops while a condition is true, stops when the condition becomes false
Import the random module
End of explanation
deaths = 6
Explanation: Create a variable of the true number of deaths of an event
End of explanation
running = True
Explanation: Create a variable that is denotes if the while loop should keep running
End of explanation
while running:
# Create a variable that randomly create a integer between 0 and 10.
guess = random.randint(0,10)
# if guess equals deaths,
if guess == deaths:
# then print this
print('Correct!')
# and then also change running to False to stop the script
running = False
# else if guess is lower than deaths
elif guess < deaths:
# then print this
print('No, it is higher.')
# if guess is none of the above
else:
# print this
print('No, it is lower')
Explanation: while running is True
End of explanation |
14,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of word2vec with gensim
In the following cell we import the required libraries and configure the logging messages.
Step1: Training a model
I implement a Corpus class with an iterator over a directory that contains text files. I will use a Corpus instance to process a collection more efficiently, without having to load it into memory first.
Step2: CORPUSDIR contains a collection of news articles in Spanish (previously normalized to lowercase and stripped of punctuation) with around 150 million words. We train a model in a single pass, ignoring tokens that appear fewer than 10 times in order to discard typos.
Step3: Once training is complete (after almost 30 minutes), we save the model to disk.
Step4: In the future we will be able to reuse this model by loading it into memory with the instruction
Step5: Trying out our model
The model object contains a huge matrix of numbers
Step6: Each term in the vocabulary is represented as a vector with 150 dimensions
Step7: These vectors do not tell us much by themselves, other than that they contain very small numbers
Step8: We can pick out the term that does not belong in a given list of terms using the doesnt_match method
Step9: We can look up the most similar terms using the most_similar method of our model
Step10: Con el mismo método most_similar podemos combinar vectores de palabras tratando de jugar con los rasgos semánticos de cada una de ellas para descubrir nuevas relaciones. | Python Code:
import gensim, logging, os
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Example of word2vec with gensim
In the following cell we import the required libraries and configure the logging messages.
End of explanation
class Corpus(object):
    '''Corpus class that lets us read a directory of text documents sequentially'''
def __init__(self, directorio):
self.directory = directorio
def __iter__(self):
for fichero in os.listdir(self.directory):
for linea in open(os.path.join(self.directory, fichero)):
yield linea.split()
Explanation: Training a model
I implement a Corpus class with an iterator over a directory that contains text files. I will use a Corpus instance to process a collection more efficiently, without having to load it into memory first.
End of explanation
CORPUSDIR = 'PATH_TO_YOUR_CORPUS_DIRECTORY'
oraciones = Corpus(CORPUSDIR)
model = gensim.models.Word2Vec(oraciones, min_count=10, size=150, workers=2)
# the model can also be trained in two successive but separate steps
#model = gensim.models.Word2Vec() # empty model
#model.build_vocab(oraciones) # first pass, to build the vocabulary
#model.train(other_sentences) # second pass, to compute the vectors
Explanation: CORPUSDIR contains a collection of news articles in Spanish (previously normalized to lowercase and stripped of punctuation) with around 150 million words. We train the model in a single pass, ignoring tokens that appear fewer than 10 times in order to discard typos.
End of explanation
model.save('PATH_TO_YOUR_MODEL.w2v')
Explanation: Once training is complete (after almost 30 minutes), we save the model to disk.
End of explanation
#model = gensim.models.Word2Vec.load('PATH_TO_YOUR_MODEL.w2v')
#model = gensim.models.Word2Vec.load('/data/w2v/eswiki-280.w2v')
model = gensim.models.Word2Vec.load('/data/w2v/efe.model.w2v')
Explanation: In the future we will be able to reuse this model by loading it into memory with the instruction:
End of explanation
print(model.corpus_count)
Explanation: Trying out our model
The model object contains a huge matrix of numbers: a table in which each row is one of the terms of the recognized vocabulary and each column is one of the features that model the meaning of that term.
In our model, as trained here, we have more than 26 million terms:
End of explanation
print(model['azul'], '\n')
print(model['verde'], '\n')
print(model['microsoft'])
Explanation: Each term in the vocabulary is represented as a vector with 150 dimensions: 150 features. We can access the vector of a specific term:
End of explanation
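A quick dimensionality check (assuming this gensim version exposes vector_size, which should match the size=150 used when training):
print(model.vector_size)
print(model['azul'].shape)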
print('hombre - mujer', model.similarity('hombre', 'mujer'))
print('madrid - parís', model.similarity('madrid', 'parís'))
print('perro - gato', model.similarity('perro', 'gato'))
print('gato - periódico', model.similarity('gato', 'periódico'))
Explanation: These vectors do not tell us much by themselves, other than that they contain very small numbers :-/
The same model object also gives us access to a set of ready-made utilities that will let us evaluate the model both formally and informally. For the moment we settle for the latter: we will visually inspect the meanings that our model has learned on its own.
We can compute the semantic similarity between two terms using the similarity method, which returns a cosine similarity score (at most 1, and close to 1 for closely related terms):
End of explanation
lista1 = 'madrid barcelona gonzález washington'.split()
print('en la lista', ' '.join(lista1), 'sobra:', model.doesnt_match(lista1))
lista2 = 'psoe pp ciu epi'.split()
print('en la lista', ' '.join(lista2), 'sobra:', model.doesnt_match(lista2))
lista3 = 'publicaron declararon soy negaron'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
lista3 = 'homero saturno cervantes shakespeare cela'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
Explanation: We can pick out the term that does not belong in a given list of terms using the doesnt_match method:
End of explanation
terminos = 'psoe chicago sevilla aznar podemos estuvieron'.split()
terminos = 'microsoft ibm iberia repsol'.split()
for t in terminos:
print(t, '==>', model.most_similar(t), '\n')
Explanation: We can look up the most similar terms using the most_similar method of our model:
End of explanation
print('==> alcalde + mujer - hombre')
most_similar = model.most_similar(positive=['alcalde', 'mujer'], negative=['hombre'], topn=3)
for item in most_similar:
print(item)
print('==> madrid + filipinas - españa')
most_similar = model.most_similar(positive=['madrid', 'filipinas'], negative=['españa'], topn=3)
for item in most_similar:
print(item)
Explanation: With the same most_similar method we can combine word vectors, playing with the semantic features of each word to discover new relationships.
End of explanation |
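One more classic probe of the same kind, the king/queen analogy in Spanish (this assumes 'rey' occurs often enough in the news corpus to be in the vocabulary):
print('==> rey + mujer - hombre')
for item in model.most_similar(positive=['rey', 'mujer'], negative=['hombre'], topn=3):
    print(item)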
14,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fit the flight acquisition probability model in 2017
Fit values here were computed 2017-Aug-9
This version introduces a dependence on the search box size. Search box sizes of 160 or 180 arcsec
(required for at least 3 star slots) were used in normal operations starting in the MAR2017 products. This followed
two PMSTA anomalies.
In addition, this version uses the 2017 dark current model from chandra_aca version 3.15. This requires computing the warm pixel fraction values instead of using the values provided in the acquisition database.
Step1: Final 2017 fit values
Step2: Final 2015 fit values
Step3: Fit code
Step6: Histogram of warm pixel fraction (and use current dark model, not values in database)
Step7: Plotting and validation
Step8: Color != 1.5 fit
Step9: Color == 1.5 fit
Step10: Compare 2017 to 2015 coefficients
Failure prob vs. mag for Wp=(0.1, 0.2, 0.3)
Step11: Failure prob vs. Wp for mag=(10.0, 10.25, 10.5) | Python Code:
from __future__ import division
import re
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.time import Time
import tables
from scipy import stats
import tables3_api
from chandra_aca.dark_model import get_warm_fracs
%matplotlib inline
Explanation: Fit the flight acquisition probability model in 2017
Fit values here were computed 2017-Aug-9
This version introduces a dependence on the search box size. Search box sizes of 160 or 180 arcsec
(required for at least 3 star slots) were used in normal operations starting in the MAR2017 products. This followed
two PMSTA anomalies.
In addition, this version uses the 2017 dark current model from chandra_aca version 3.15. This requires computing the warm pixel fraction values instead of using the values provided in the acquisition database.
End of explanation
SOTA2017_FIT_NO_1P5 = [4.38145, # scl0
6.22480, # scl1
2.20862, # scl2
-2.24494, # off0
0.32180, # off1
0.08306, # off2
0.00384, # p_bright_fail
]
SOTA2017_FIT_ONLY_1P5 = [4.73283, # scl0
7.63540, # scl1
4.56612, # scl2
-1.49046, # off0
0.53391, # off1
-0.37074, # off2
0.00199, # p_bright_fail
]
Explanation: Final 2017 fit values
End of explanation
SOTA2015_FIT_ALL = [3.9438714542029976, 5.4601129927961134, 1.6582423213669775,
-2.0646518576907495, 0.36414269305801689, -0.0075143036207362852,
0.003740065500207244]
SOTA2015_FIT_NO_1P5 = [4.092016310373646, 6.5415918325159641, 1.8191919043258409,
-2.2301709573082413, 0.30337711472920426, 0.10116735012955963,
0.0043395964215468185]
SOTA2015_FIT_ONLY_1P5 = [4.786710417762472, 4.839392687262392, 1.8646719319052267,
-1.4926740399312248, 0.76412972998935347, -0.20229644263097146,
0.0016270748026844457]
Explanation: Final 2015 fit values
End of explanation
with tables.open_file('/proj/sot/ska/data/acq_stats/acq_stats.h5', 'r') as h5:
cols = h5.root.data.cols
names = {'tstart': 'guide_tstart',
'obsid': 'obsid',
'obc_id': 'acqid',
'halfwidth': 'halfw',
'warm_pix': 'n100_warm_frac',
'mag': 'mag_aca',
'known_bad': 'known_bad',
'color': 'color1',
'img_func': 'img_func',
'ion_rad': 'ion_rad',
'sat_pix': 'sat_pix',
'ccd_temp': 'ccd_temp'}
acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
names=list(names.keys()))
year_q0 = 1999.0 + 31. / 365.25 # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')
acqs['color_1p5'] = np.where(acqs['color'] == 1.5, 1, 0)
# Filter for year and mag
ok = (acqs['year'] > 2007) & (acqs['mag'] > 6.0) & (acqs['mag'] < 11.0)
# Filter known bad obsids
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(ok)))
bad_obsids = [
# Venus
2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
ok = ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(ok)))
data_all = acqs[ok]
data_all.sort('year')
data_all['mag10'] = data_all['mag'] - 10.0
# Adjust probability (in probit space) for box size. See:
# https://github.com/sot/skanb/blob/master/pea-test-set/fit_box_size_acq_prob.ipynb
b1 = 0.96
b2 = -0.30
box0 = (data_all['halfwidth'] - 120) / 120 # normalized version of box, equal to 0.0 at nominal default
data_all['box_delta'] = b1 * box0 + b2 * box0**2
Explanation: Fit code
End of explanation
# Compute warm fracs using current dark model. This takes a couple of minutes
warm_fracs = [get_warm_fracs(100, date=tstart, T_ccd=ccd_temp)
for tstart, ccd_temp in zip(data_all['tstart'], data_all['ccd_temp'])]
n, bins, patches = plt.hist(data_all['warm_pix'], bins=100, label='acq database')
plt.grid()
plt.xlabel('Warm pixel fraction')
plt.hist(warm_fracs, bins=bins, facecolor='r', alpha=0.5, label='current dark model')
plt.legend();
# Substitute current dark model values instead of acq database
data_all['warm_pix'] = warm_fracs
data_all = data_all.group_by('quarter')
data_mean = data_all.groups.aggregate(np.mean)
def p_fail(pars, m10, wp, box_delta=0.0):
Acquisition probability model
:param pars: 7 parameters (3 x offset, 3 x scale, p_fail for bright stars)
:param m10: mag - 10
:param wp: warm pixel fraction
:param box: search box half width (arcsec)
scl0, scl1, scl2 = pars[0:3]
off0, off1, off2 = pars[3:6]
p_bright_fail = pars[6]
scale = scl0 + scl1 * m10 + scl2 * m10**2
offset = off0 + off1 * m10 + off2 * m10**2
p_fail = offset + scale * wp + box_delta
p_fail = stats.norm.cdf(p_fail) # probit transform
p_fail[m10 < -1.5] = p_bright_fail # For stars brighter than 8.5 mag use a constant
return p_fail
def p_acq_fail(data=None):
Sherpa fit function wrapper to ensure proper use of data in fitting.
if data is None:
data = data_all
m10 = data['mag10']
wp = data['warm_pix']
box_delta = data['box_delta']
def sherpa_func(pars, x):
return p_fail(pars, m10, wp, box_delta)
return sherpa_func
def fit_sota_model(data_mask=None, ms_disabled=False):
from sherpa import ui
obc_id = data_all['obc_id']
if ms_disabled:
obc_id |= (data_all['img_func'] == 'star') & ~data_all['ion_rad'] & ~data_all['sat_pix']
data_all['fail'] = np.where(obc_id, 0.0, 1.0)
data = data_all if data_mask is None else data_all[data_mask]
data_id = 1
ui.set_method('simplex')
ui.set_stat('cash')
ui.load_user_model(p_acq_fail(data), 'model')
ui.add_user_pars('model', ['scl0', 'scl1', 'scl2', 'off0', 'off1', 'off2', 'p_bright_fail'])
ui.set_model(data_id, 'model')
ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))
# Initial fit values from fit of all data
start_vals = iter(SOTA2015_FIT_ALL) # Offset
fmod = ui.get_model_component('model')
for name in ('scl', 'off'):
for num in (0, 1, 2):
comp_name = name + str(num)
setattr(fmod, comp_name, next(start_vals))
comp = getattr(fmod, comp_name)
comp.min = -100000
comp.max = 100000
# ui.freeze(comp)
fmod.p_bright_fail = 0.025
fmod.p_bright_fail.min = 0.0
fmod.p_bright_fail.max = 1.0
# ui.freeze(fmod.p_bright_fail)
ui.fit(data_id)
# conf = ui.get_confidence_results()
return ui.get_fit_results()
Explanation: Histogram of warm pixel fraction (and use current dark model, not values in database)
End of explanation
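For reference, the model encoded by p_fail above (and fit via Sherpa further below) is a probit-transformed linear model in the warm pixel fraction with magnitude-dependent coefficients:
$$p_{\mathrm{fail}} = \Phi\left(\mathrm{off}(m_{10}) + \mathrm{scl}(m_{10})\, w_p + \Delta_{\mathrm{box}}\right), \qquad \mathrm{off}(m_{10}) = o_0 + o_1 m_{10} + o_2 m_{10}^2, \qquad \mathrm{scl}(m_{10}) = s_0 + s_1 m_{10} + s_2 m_{10}^2$$
where $\Phi$ is the standard normal CDF (the probit link), $m_{10} = \mathrm{mag} - 10$, $w_p$ is the warm pixel fraction, and $\Delta_{\mathrm{box}}$ is the search-box adjustment computed earlier; stars brighter than 8.5 mag get the constant p_bright_fail.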
def plot_fit_grouped(pars, group_col, group_bin, mask=None, log=False, colors='br', label=None):
data = data_all if mask is None else data_all[mask]
data['model'] = p_acq_fail(data)(pars, None)
group = np.trunc(data[group_col] / group_bin)
data = data.group_by(group)
data_mean = data.groups.aggregate(np.mean)
len_groups = np.diff(data.groups.indices)
fail_sigmas = np.sqrt(data_mean['fail'] * len_groups) / len_groups
plt.errorbar(data_mean[group_col], data_mean['fail'], yerr=fail_sigmas, fmt='.' + colors[0], label=label)
plt.plot(data_mean[group_col], data_mean['model'], '-' + colors[1])
if log:
ax = plt.gca()
ax.set_yscale('log')
def mag_filter(mag0, mag1):
ok = (data_all['mag'] > mag0) & (data_all['mag'] < mag1)
return ok
def wp_filter(wp0, wp1):
ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
return ok
def print_fit_results(fit, label):
label = label + ' = ['
print(label, end='')
space = ''
for parname, parval in zip(fit.parnames, fit.parvals):
parname = re.sub(r'model\.', '', parname)
print(f'{space}{parval:.5f}, # {parname}')
space = ' ' * len(label)
print(space + ']')
def plot_fit_all(fit, mask=None):
print(fit)
parvals = [par.val for par in model.pars]
print(parvals)
if mask is None:
mask = np.ones(len(data_all), dtype=bool)
plt.figure()
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.10, 0.20) & mask, log=False, colors='cm', label='0.10 < WP < 0.2')
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.0, 0.10) & mask, log=False, colors='br', label='0 < WP < 0.10')
plt.legend(loc='upper left');
plt.ylim(0.001, 1.0);
plt.xlim(9, 11)
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(10, 10.6) & mask, log=True, colors='cm', label='10 < mag < 10.6')
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(9, 10) & mask, log=True, colors='br', label='9 < mag < 10')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.6) & mask, colors='cm', label='10 < mag < 10.6')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.6) & mask, colors='cm', label='10 < mag < 10.6', log=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10', log=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5', log=True)
plt.legend(loc='best')
plt.grid();
Explanation: Plotting and validation
End of explanation
print('Hang tight, this could take a few minutes')
# fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)
mask = data_all['color'] != 1.5
fit_n1p5 = fit_sota_model(mask, ms_disabled=True)
print_fit_results(fit_n1p5, 'SOTA2017_FIT_NO_1P5')
plot_fit_all(fit_n1p5, mask=mask)
Explanation: Color != 1.5 fit
End of explanation
print('Hang tight, this could take a few minutes')
mask = data_all['color'] == 1.5
fit_1p5 = fit_sota_model(mask, ms_disabled=True)
print_fit_results(fit_1p5, 'SOTA2017_FIT_ONLY_1P5')
plot_fit_all(fit_1p5, mask=mask)
Explanation: Color == 1.5 fit
End of explanation
mag = np.linspace(9, 11, 30)
for wp in (0.1, 0.2, 0.3):
plt.plot(mag, p_fail(SOTA2015_FIT_NO_1P5, mag-10, wp), 'r',
label='2015 model' if wp == 0.1 else None)
plt.plot(mag, p_fail(SOTA2017_FIT_NO_1P5, mag-10, wp), 'b',
label='2017 model' if wp == 0.1 else None)
plt.grid()
plt.xlabel('Mag')
plt.ylim(0, 1)
plt.title('Failure prob vs. mag for Wp=(0.1, 0.2, 0.3)')
plt.legend(loc='upper left')
plt.ylabel('Prob');
Explanation: Compare 2017 to 2015 coefficients
Failure prob vs. mag for Wp=(0.1, 0.2, 0.3)
End of explanation
for mag in (10.0, 10.25, 10.5):
wp = np.linspace(0, 0.4, 30)
plt.plot(wp, p_fail(SOTA2015_FIT_NO_1P5, mag-10, wp), 'r',
label='2015 model' if mag == 10.0 else None)
plt.plot(wp, p_fail(SOTA2017_FIT_NO_1P5, mag-10, wp), 'b',
label='2017 model' if mag == 10.0 else None)
plt.grid()
plt.xlabel('Warm pix frac')
plt.ylim(0, 1)
plt.title('Failure prob vs. Wp for mag=(10.0, 10.25, 10.5)')
plt.ylabel('Fail prob');
Explanation: Failure prob vs. Wp for mag=(10.0, 10.25, 10.5)
End of explanation |
14,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
========================================================
Time-frequency on simulated data (Multitaper vs. Morlet)
========================================================
This example demonstrates on simulated data the different time-frequency
estimation methods. It shows the time-frequency resolution trade-off
and the problem of estimation variance.
Step1: Simulate data
Step2: Consider different parameter possibilities for multitaper convolution | Python Code:
# Authors: Hari Bharadwaj <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
from mne import create_info, EpochsArray
from mne.time_frequency import tfr_multitaper, tfr_stockwell, tfr_morlet
print(__doc__)
Explanation: ========================================================
Time-frequency on simulated data (Multitaper vs. Morlet)
========================================================
This example demonstrates on simulated data the different time-frequency
estimation methods. It shows the time-frequency resolution trade-off
and the problem of estimation variance.
End of explanation
sfreq = 1000.0
ch_names = ['SIM0001', 'SIM0002']
ch_types = ['grad', 'grad']
info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
n_times = int(sfreq) # 1 second long epochs
n_epochs = 40
seed = 42
rng = np.random.RandomState(seed)
noise = rng.randn(n_epochs, len(ch_names), n_times)
# Add a 50 Hz sinusoidal burst to the noise and ramp it.
t = np.arange(n_times, dtype=np.float) / sfreq
signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal
signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing
on_time = np.logical_and(t >= 0.45, t <= 0.55)
signal[on_time] *= np.hanning(on_time.sum()) # Ramping
data = noise + signal
reject = dict(grad=4000)
events = np.empty((n_epochs, 3), dtype=int)
first_event_sample = 100
event_id = dict(sin50hz=1)
for k in range(n_epochs):
events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz']
epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id,
reject=reject)
Explanation: Simulate data
End of explanation
freqs = np.arange(5., 100., 3.)
# You can trade time resolution or frequency resolution or both
# in order to get a reduction in variance
# (1) Least smoothing (most variance/background fluctuations).
n_cycles = freqs / 2.
time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper)
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3.,
title='Sim: Least smoothing, most variance')
# (2) Less frequency smoothing, more time smoothing.
n_cycles = freqs # Increase time-window length to 1 second.
time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3.,
title='Sim: Less frequency smoothing, more time smoothing')
# (3) Less time smoothing, more frequency smoothing.
n_cycles = freqs / 2.
time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers.
power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3.,
title='Sim: Less time smoothing, more frequency smoothing')
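# Hedged aside (not part of the original example): for these multitaper settings the
# number of DPSS tapers used is roughly time_bandwidth - 1, so the three cases above
# correspond to about 1, 3 and 7 tapers. A quick bookkeeping sketch only:
for tb in (2.0, 4.0, 8.0):
    print('time_bandwidth = %.1f -> ~%d taper(s)' % (tb, int(tb) - 1))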
# #############################################################################
# Stockwell (S) transform
# S uses a Gaussian window to balance temporal and spectral resolution
# Importantly, frequency bands are phase-normalized, hence strictly comparable
# with regard to timing, and the input signal can be recovered from the
# transform in a lossless way if we disregard numerical errors.
fmin, fmax = freqs[[0, -1]]
for width in (0.7, 3.0):
power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width)
power.plot([0], baseline=(0., 0.1), mode='mean',
title='Sim: Using S transform, width '
'= {:0.1f}'.format(width), show=True)
# #############################################################################
# Finally, compare to morlet wavelet
n_cycles = freqs / 2.
power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False)
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3.,
title='Sim: Using Morlet wavelet')
Explanation: Consider different parameter possibilities for multitaper convolution
End of explanation |
14,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following will download a pretrained neural net model for the notebook on image classification.
Step1: The following checks that scikit-image is properly installed
Step2: Optional | Python Code:
import sys
print("python command used for this notebook:")
print(sys.executable)
import tensorflow as tf
print("tensorflow:", tf.__version__)
from tensorflow.keras.applications.resnet50 import preprocess_input, ResNet50
model = ResNet50(weights='imagenet')
Explanation: The following will download a pretrained neural net model for the notebook on image classification.
End of explanation
from skimage.io import imread
from skimage.transform import resize
Explanation: The following checks that scikit-image is properly installed:
End of explanation
import cv2
Explanation: Optional: Check that opencv-python is properly installed:
End of explanation |
14,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objective
Load Data, vectorize reviews to numbers
Build a basic model based on counting
Evaluate the Model
Make a first Kaggle Submission
Download Data from Kaggle
Step1: Load Data
Step2: Explore Dataset
Step4: Vectorize Data (a.k.a. convert text to numbers)
Computers don't understand Texts, so we need to convert texts to numbers before we can do any math on them and see if we can build a system to classify a review as Positive or Negative.
Ways to vectorize data
Step5: Observations
Step6: Let's Amplify the difference using Log Scale
Neutral Values are Close to 1
Negative Sentiment Words are less than 1
Positive Sentiment Words are greater than 1
When Converted to Log Scale -
Neutral Values are Close to 0
Negative Sentiment Words are negative
Positive Sentiment Words are positive
That not only makes a lot of sense when looking at the numbers, but we could also use it for our first classifier
Step7: Time to build a Counting Model
For each Review, we will ADD all the pos_neg_freq values, and if the Total for all words in the given review is > 0, we will call it a Positive Review; if it's a negative total, we will call it a Negative Review. Sounds good?
Step8: Machine Learning Easy? What Gives?
Remember this is Training Accuracy. We have not split our Data into Train and Validation (which we will do in our next notebook when we actually build a Machine Learning Model)
Make a Submission to Kaggle
Predict on Test Data and Submit to Kaggle. Maybe we could end the tutorial right here
Step9: Reasons for Testing Accuracy Being Lower?
One Hypothesis: Since we are just Adding up ALL of the scores for each word in the review, the length of the review could have an impact. Let's look at the length of reviews in the train and test datasets | Python Code:
from __future__ import print_function # Python 2/3 compatibility
import numpy as np
import pandas as pd
from collections import Counter
from IPython.display import Image
Explanation: Objective
Load Data, vectorize reviews to numbers
Build a basic model based on counting
Evaluate the Model
Make a first Kaggle Submission
Download Data from Kaggle:
Competition Link: https://www.kaggle.com/c/movie-sentiment-analysis
Unzip into Data Directory
End of explanation
train_df = pd.read_csv("data/train.tsv", sep="\t")
train_df.sample(10)
# Load the Test Dataset
# Note that it's missing the Sentiment Column. That's what we need to Predict
#
test_df = pd.read_csv("data/test.tsv", sep="\t")
test_df.head()
Explanation: Load Data
End of explanation
# Equal Number of Positive and Negative Sentiments
train_df.sentiment.value_counts()
# Let's take a look at some examples
def print_reviews(reviews, max_words=500):
for review in reviews:
print(review[:500], end="\n\n")
# Some Positive Reviews
print("Sample **Positive** Reviews: ", "\n")
print_reviews(train_df[train_df["sentiment"] == 1].sample(3).review)
# Some Negative Reviews
print("Sample **Negative** Reviews: ", "\n")
print_reviews(train_df[train_df["sentiment"] == 0].sample(3).review)
Explanation: Explore Dataset
End of explanation
## Doing it by Hand
def bag_of_words_vocab(reviews):
"""Returns words in the reviews"""
# all_words = []
# for review in reviews:
# for word in review.split():
# all_words.append(word)
## List comprehension method of the same lines above
all_words = [word.lower() for review in reviews for word in review.split(" ")]
return Counter(all_words)
words_vocab = bag_of_words_vocab(train_df.review)
words_vocab.most_common(20)
Explanation: Vectorize Data (a.k.a. convert text to numbers)
Computers don't understand Texts, so we need to convert texts to numbers before we can do any math on them and see if we can build a system to classify a review as Positive or Negative.
Ways to vectorize data:
Bag of Words
TF-IDF
Word Embeddings (Word2Vec)
Bag of Words
Take each sentence and count how many occurrences of each particular word it contains.
End of explanation
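# Hedged aside (not in the original notebook): scikit-learn's CountVectorizer builds the
# same bag-of-words counts; shown only as a pointer, assuming scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
bow = cv.fit_transform(["a great movie", "a terrible movie"])
print(sorted(cv.vocabulary_.keys()))
print(bow.toarray())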
pos_words_vocab = bag_of_words_vocab(train_df[train_df.sentiment == 1].review)
neg_words_vocab = bag_of_words_vocab(train_df[train_df.sentiment == 0].review)
pos_words_vocab.most_common(10)
neg_words_vocab.most_common(10)
pos_neg_freq = Counter()
for word in words_vocab:
pos_neg_freq[word] = (pos_words_vocab[word] + 1e-3) / (neg_words_vocab[word] + 1e-3)
print("Neutral words:")
print("Pos-to-neg for 'the' = {:.2f}".format(pos_neg_freq["is"]))
print("Pos-to-neg for 'movie' = {:.2f}".format(pos_neg_freq["is"]))
print("\nPositive and Negative review words:")
print("Pos-to-neg for 'amazing' = {:.2f}".format(pos_neg_freq["great"]))
print("Pos-to-neg for 'terrible' = {:.2f}".format(pos_neg_freq["terrible"]))
Explanation: Observations:
Common words are not that meaningful (also called Stop words - unfortunately)
These words are likely to appear in both Positive and Negative Reviews
We need a way to find which words are more likely to occur in a Positive Review as compared to a Negative Review
End of explanation
# https://www.desmos.com/calculator
Image("images/log-function.png", width=960)
for word in pos_neg_freq:
pos_neg_freq[word] = np.log(pos_neg_freq[word])
print("Neutral words:")
print("Pos-to-neg for 'the' = {:.2f}".format(pos_neg_freq["is"]))
print("Pos-to-neg for 'movie' = {:.2f}".format(pos_neg_freq["is"]))
print("\nPositive and Negative review words:")
print("Pos-to-neg for 'amazing' = {:.2f}".format(pos_neg_freq["great"]))
print("Pos-to-neg for 'terrible' = {:.2f}".format(pos_neg_freq["terrible"]))
Explanation: Let's Amplify the difference using Log Scale
Neutral Values are Close to 1
Negative Sentiment Words are less than 1
Positive Sentiment Words are greater than 1
When Converted to Log Scale -
Neutral Values are Close to 0
Negative Sentiment Words are negative
Positive Sentiment Words are positive
That not only makes a lot of sense when looking at the numbers, but we could also use it for our first classifier
End of explanation
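# Small illustrative aside (not in the original notebook): the log of a pos/neg ratio is
# about 0 for neutral words, positive for positive-leaning words and negative for
# negative-leaning words.
for ratio in (1.0, 2.0, 0.5):
    print("log({:.1f}) = {:+.2f}".format(ratio, np.log(ratio)))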
class CountingClassifier(object):
def __init__(self, pos_neg_freq):
self.pos_neg_freq = pos_neg_freq
def fit(self, X, y=None):
# No Machine Learning here. It's just counting
pass
def predict(self, X):
predictions = []
for review in X:
all_words = [word.lower() for word in review.split()]
result = sum(self.pos_neg_freq.get(word, 0) for word in all_words)
predictions.append(result)
return np.array(predictions)
counting_model = CountingClassifier(pos_neg_freq)
train_predictions = counting_model.predict(train_df.review)
train_predictions[:10]
# Convert to Binary Classifier
train_predictions > 0
y_pred = (train_predictions > 0).astype(int)
y_pred
y_true = train_df.sentiment
len(y_true)
np.sum(y_pred == y_true)
## Accuracy
train_accuracy = np.sum(y_pred == y_true) / len(y_true)
print("Accuracy on Train Data: {:.2f}".format(train_accuracy))
Explanation: Time to build a Counting Model
For each Review, we will ADD all the pos_neg_freq values, and if the Total for all words in the given review is > 0, we will call it a Positive Review; if it's a negative total, we will call it a Negative Review. Sounds good?
End of explanation
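# Hedged illustration (not in the original notebook): score one toy review by hand to
# show what CountingClassifier does internally; unseen words contribute 0.
toy_review = "great movie with terrible acting"
toy_score = sum(pos_neg_freq.get(w, 0) for w in toy_review.lower().split())
print("toy review score = {:+.2f}".format(toy_score))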
## Test Accuracy
test_predictions = counting_model.predict(test_df.review)
test_predictions
y_pred = (test_predictions > 0).astype(int)
df = pd.DataFrame({
"document_id": test_df.document_id,
"sentiment": y_pred
})
df.head()
df.to_csv("data/count-submission.csv", index=False)
Explanation: Machine Learning Easy? What Gives?
Remember this is Training Accuracy. We have not split our Data into Train and Validation (which we will do in our next notebook when we actually build a Machine Learning Model)
Make a Submission to Kaggle
Predict on Test Data and Submit to Kaggle. Maybe we could end the tutorial right here :-D
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
train_df.review.str.len().hist(log=True)
test_df.review.str.len().hist(log=True)
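# Hedged follow-up sketch (not in the original notebook): if review length drives the raw
# scores, dividing each summed score by the review's word count should dampen that effect.
word_counts = test_df.review.str.split().str.len()
normalized_scores = counting_model.predict(test_df.review) / word_counts.values
print(pd.Series(normalized_scores).describe())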
Explanation: Reasons for Testing Accuracy Being Lower?
One Hypothesis: Since we are just Adding up ALL of the scores for each word in the review, the length of the review could have an impact. Let's look at the length of reviews in the train and test datasets
End of explanation |
14,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background
This notebook seeks to quantify the value of leaving a certain number of tiles in the bag during the pre-endgame based on a repository of games. We will then implement these values as a pre-endgame heuristic in the Macondo speedy player to improve simulation quality.
Initial questions
Step1: Store the final spread of each game for comparison. The assumption here is that the last row logged is the final turn of the game, so for each game ID we overwrite the final move dictionary until there are no more rows from that game
Step2: Save a summary and a verbose version of preendgame heuristic values. | Python Code:
from copy import deepcopy
import csv
from datetime import date
import numpy as np
import pandas as pd
import seaborn as sns
import time
log_folder = '../logs/'
log_file = log_folder + 'log_20200515_preendgames.csv'
todays_date = date.today().strftime("%Y%m%d")
final_spread_dict = {}
out_first_dict = {}
win_dict = {}
Explanation: Background
This notebook seeks to quantify the value of leaving a certain number of tiles in the bag during the pre-endgame based on a repository of games. We will then implement these values as a pre-endgame heuristic in the Macondo speedy player to improve simulation quality.
Initial questions:
1. What is the probability that you will go out first if you make a play leaving N tiles in the bag?
2. What is the expected difference between your end-of-turn spread and end-of-game spread?
3. What's your win probability?
Implementation details
Similar
Assumptions
We're only analyzing complete games
Next steps
Standardize sign convention for spread.
Start figuring out how to calculate pre-endgame spread
Quackle values for reference
0,0.0
1,-8.0
2,0.0
3,-0.5
4,-2.0
5,-3.5
6,-2.0
7,2.0
8,10.0
9,7.0
10,4.0
11,-1.0
12,-2.0
Runtime
I was able to run this script on my local machine for ~20M rows in 2 minutes.
End of explanation
t0 = time.time()
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
if i<10:
print(row)
if row[0]=='p1':
final_spread_dict[row[1]] = int(row[6])-int(row[11])
else:
final_spread_dict[row[1]] = int(row[11])-int(row[6])
out_first_dict[row[1]] = row[0]
# This flag indicates whether p1 won or not, with 0.5 as the value if the game was tied.
win_dict[row[1]] = (np.sign(final_spread_dict[row[1]])+1)/2
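# Hedged aside (not in the original notebook): (np.sign(spread) + 1) / 2 maps a positive
# final spread to 1.0 (win), zero to 0.5 (tie) and a negative spread to 0.0 (loss).
for spread in (25, 0, -13):
    print(spread, (np.sign(spread) + 1) / 2.0)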
preendgame_boundaries = [8,21] # how many tiles are in the bag before we count as pre-endgame?
preendgame_tile_range = range(preendgame_boundaries[0],preendgame_boundaries[1]+1)
counter_dict = {x:{y:0 for y in range(x-7,x+1)} for x in preendgame_tile_range}
end_of_turn_spread_counter_dict = deepcopy(counter_dict)
equity_counter_dict = deepcopy(counter_dict)
final_spread_counter_dict = deepcopy(counter_dict)
game_counter_dict = deepcopy(counter_dict)
out_first_counter_dict = deepcopy(counter_dict)
win_counter_dict = deepcopy(counter_dict)
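# Hedged aside (not in the original notebook): each outer key is the number of tiles in the
# bag before the play; its inner keys are the possible counts left afterwards, since a play
# can use at most 7 tiles. For example:
print(sorted(counter_dict[8].keys()))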
t0=time.time()
print('There are {} games'.format(len(final_spread_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
beginning_of_turn_tiles_left = int(row[10])
end_of_turn_tiles_left = int(row[10])-int(row[7])
if (beginning_of_turn_tiles_left >= preendgame_boundaries[0] and
beginning_of_turn_tiles_left <= preendgame_boundaries[1]):
end_of_turn_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] +=\
int(row[6])-int(row[11])
equity_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] +=\
float(row[9])-float(row[5])
game_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += 1
out_first_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += \
out_first_dict[row[1]] == row[0]
if row[0]=='p1':
final_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += final_spread_dict[row[1]]
win_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += win_dict[row[1]]
else:
final_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] -= final_spread_dict[row[1]]
win_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += (1-win_dict[row[1]])
# if i<1000:
# print(row)
# print(game_counter_dict[beginning_of_turn_tiles_left])
# print(end_of_turn_spread_counter_dict[beginning_of_turn_tiles_left])
# print(equity_counter_dict[beginning_of_turn_tiles_left])
# print(final_spread_counter_dict[beginning_of_turn_tiles_left])
# print(win_counter_dict[beginning_of_turn_tiles_left])
# print(out_first_counter_dict[beginning_of_turn_tiles_left])
count_df = pd.DataFrame(game_counter_dict)
end_of_turn_spread_df = pd.DataFrame(end_of_turn_spread_counter_dict)
equity_df = pd.DataFrame(equity_counter_dict)
final_spread_df = pd.DataFrame(final_spread_counter_dict)
out_first_df = pd.DataFrame(out_first_counter_dict)
win_df = pd.DataFrame(win_counter_dict)
spread_delta_df = final_spread_df-end_of_turn_spread_df
avg_spread_delta_df = spread_delta_df/count_df
avg_equity_df = equity_df/count_df
out_first_pct_df = out_first_df/count_df
win_pct_df = 100*win_df/count_df
tst_df = avg_spread_delta_df-avg_equity_df
win_pct_df
np.mean(tst_df,axis=1)
sns.heatmap(tst_df)
avg_spread_delta_plot = sns.heatmap(avg_spread_delta_df)
fig = avg_spread_delta_plot.get_figure()
fig.savefig("average_spread_delta.png")
quackle_peg_dict = {
1:-8.0,
2:0.0,
3:-0.5,
4:-2.0,
5:-3.5,
6:-2.0,
7:2.0,
8:10.0,
9:7.0,
10:4.0,
11:-1.0,
12:-2.0
}
quackle_peg_series = pd.Series(quackle_peg_dict, name='quackle_values')
df = pd.concat([df,quackle_peg_series],axis=1)
df['quackle_macondo_delta'] = df['quackle_values']-df['avg_spread_delta']
df = df.reset_index().rename({'index':'tiles_left_after_play'}, axis=1)
df
sns.barplot(x='tiles_left_after_play',y='final_spread',data=df)
sns.barplot(x='tiles_left_after_play',y='avg_spread_delta',data=df)
sns.barplot(x='tiles_left_after_play',y='out_first_pct',data=df)
sns.barplot(x='tiles_left_after_play',y='win_pct',data=df)
Explanation: Store the final spread of each game for comparison. The assumption here is that the last row logged is the final turn of the game, so for each game ID we overwrite the final move dictionary until there are no more rows from that game
End of explanation
df['avg_spread_delta'].to_csv('peg_heuristics_' + todays_date + '.csv')
df.to_csv('peg_summary_' + todays_date + '.csv')
Explanation: Save a summary and a verbose version of preendgame heuristic values.
End of explanation |
14,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Hyperparameters Tested (iterations, learning rate, hidden nodes, output nodes, training loss, validation loss)
2000, 0.2, 5, 1, 0.248, 0.408
2000, 0.4, 5, 1, 0.150, 0.274
4000, 0.4, 5, 1, 0.098, 0.199
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
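# Hedged aside (not part of the project template): a minimal illustration of what
# get_dummies does to a single categorical column.
print(pd.get_dummies(pd.Series([1, 2, 3, 2]), prefix='season'))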
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
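# Hedged aside (not part of the project template): sanity-check the standardisation and
# show how the stored (mean, std) pair undoes it.
cnt_mean, cnt_std = scaled_features['cnt']
print(data['cnt'].mean(), data['cnt'].std())
print((data['cnt'] * cnt_std + cnt_mean).head())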
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
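# Hedged aside (not part of the project template): confirm the split sizes line up with the
# 21-day test window and the 60-day validation window.
print(len(train_features), len(val_features), len(test_features))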
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
# sigmoid calculation
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
# sigmoid derivative calculation
self.derivative_function = lambda x : self.activation_function(x) * (1 - self.activation_function(x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values '''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * self.derivative_function(hidden_inputs)
output_error_term = error
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
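# Hedged aside (not part of the project template): numerically check that the sigmoid
# derivative used in backprop, sigma(x) * (1 - sigma(x)), matches a finite difference.
_sig = lambda x: 1 / (1 + np.exp(-x))
_x, _eps = 0.5, 1e-6
print(_sig(_x) * (1 - _sig(_x)), (_sig(_x + _eps) - _sig(_x - _eps)) / (2 * _eps))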
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 4000
learning_rate = 0.4
hidden_nodes = 5
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Hyperparameters Tested (iterations, learning rate, hidden nodes, output nodes, training loss, validation loss)
2000, 0.2, 5, 1, 0.248, 0.408
2000, 0.4, 5, 1, 0.150, 0.274
4000, 0.4, 5, 1, 0.098, 0.199
End of explanation
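# Hedged aside (not in the original notebook): a quick look at the final train/validation
# gap recorded during training.
print('final training loss:   {:.3f}'.format(losses['train'][-1]))
print('final validation loss: {:.3f}'.format(losses['validation'][-1]))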
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
14,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Divination with yarrow stalks (蓍草卜卦)
"The number of the Great Expansion is fifty, and forty-nine of them are used. Divide them into two to represent the two powers; set one apart to represent the three powers; count them off by fours to represent the four seasons; return the remainder between the fingers to represent the intercalary month; in five years there are two intercalations, so there are two such returns before the count is laid down again. Heaven is one, earth is two; heaven is three, earth is four; heaven is five, earth is six; heaven is seven, earth is eight; heaven is nine, earth is ten. The numbers of heaven are five and the numbers of earth are five; matched in their places, each has its complement: the numbers of heaven come to twenty-five, the numbers of earth to thirty, and all the numbers of heaven and earth together are fifty-five. It is by these that the changes and transformations are accomplished and the gods and spirits are set in motion. The stalks for Qian come to two hundred and sixteen and those for Kun to one hundred and forty-four, three hundred and sixty in all, corresponding to the days of a year. The stalks for the two parts of the book come to eleven thousand five hundred and twenty, corresponding to the number of the ten thousand things. Therefore four operations complete a change, and eighteen changes complete a hexagram; the eight trigrams constitute the small completion. Extending them and applying them by analogy, everything that can be done in the world is covered. They make the Way manifest and make virtuous conduct divine; therefore one can respond and reply with them, and one can assist the spirits with them. The Master said: 'Whoever knows the way of change and transformation knows what the spirits do.'"
The Great Expansion uses fifty stalks, with one set aside unused. The rest are split to form heaven, earth and man and put through three changes: the remainder of the first change is 5 or 9, of the second 4 or 8, and of the third 4 or 8; the count that remains, divided by 4, is the result. That gives one line, and casting all six lines takes about an hour. The ancients' way of producing random numbers was far too time-consuming, so let's write a Python program to do it!
Step1: The Great Expansion numbers fifty; one stalk is set aside unused
Step2: The first change
Step3: The second change
Step4: The third change
Step5: Obtain the six lines and the changing hexagram
Step6: Obtain the hexagram name
Step7: Add the hexagram texts | Python Code:
import random
def sepSkyEarth(data):
sky = random.randint(1, data-2)
earth = data - sky
earth -= 1
return sky , earth
def getRemainder(num):
rm = num % 4
if rm == 0:
rm = 4
return rm
def getChange(data):
sky, earth = sepSkyEarth(data)
skyRemainder = getRemainder(sky)
earthRemainder = getRemainder(earth)
change = skyRemainder + earthRemainder + 1
data = data - change
return sky, earth, change, data
def getYao(data):
sky, earth, firstChange, data = getChange(data)
sky, earth, secondChange, data = getChange(data)
sky, earth, thirdChange, data = getChange(data)
yao = data/4
return yao, firstChange, secondChange, thirdChange
def sixYao():
yao1 = getYao(data = 50 - 1)[0]
yao2 = getYao(data = 50 - 1)[0]
yao3 = getYao(data = 50 - 1)[0]
yao4 = getYao(data = 50 - 1)[0]
yao5 = getYao(data = 50 - 1)[0]
yao6 = getYao(data = 50 - 1)[0]
return[yao1, yao2, yao3, yao4, yao5, yao6]
def fixYao(num):
if num == 6 or num == 9:
print "there is a changing predict! Also run changePredict()"
return num % 2
def changeYao(num):
if num == 6:
num = 1
elif num == 9:
num = 2
num = num % 2
return(num)
def fixPredict(pred):
fixprd = [fixYao(i) for i in pred]
fixprd = list2str(fixprd)
return fixprd
def list2str(l):
si = ''
for i in l:
si = si + str(i)
return si
def changePredict(pred):
changeprd = [changeYao(i) for i in pred]
changeprd = list2str(changeprd)
return changeprd
def getPredict():
pred = sixYao()
fixPred = fixPredict(pred)
if 6 in pred or 9 in pred:
changePred = changePredict(pred)
else:
changePred = None
return fixPred, changePred
def interpretPredict(now, future):
dt = {'111111':'乾','011111':'夬','000000':'坤','010001':'屯','100010':'蒙','010111':'需','111010':'讼','000010':'师',
'010000':'比','110111':'小畜','111011':'履','000111':'泰','111000':'否','111101':'同人','101111':'大有','000100':'谦',
'001000':'豫','011001':'随','100110':'蛊','000011':'临','110000':'观','101001':'噬嗑','100101':'贲','100000':'剥',
'000001':'复','111001':'无妄','100111':'大畜','100001':'颐','011110':'大过','010010':'坎','101101':'离','011100':'咸',
'001110':'恒','111100':'遁','001111':'大壮','101000':'晋','000101':'明夷','110101':'家人','101011':'睽','010100':'蹇',
'001010':'解','100011':'损','110001':'益','111110':'姤','011000':'萃','000110':'升','011010':'困','010110':'井',
'011101':'革','101110':'鼎','001001':'震','100100':'艮','110100':'渐','001011':'归妹','001101':'丰','101100':'旅',
'110110':'巽','011011':'兑','110010':'涣','010011':'节','110011':'中孚','001100':'小过','010101':'既济','101010':'未济'}
if future:
name = dt[now] + ' & ' + dt[future]
else:
name = dt[now]
print name
def plotTransitionRemainder(N, w):
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from collections import defaultdict
changes = {}
for i in range(N):
sky, earth, firstChange, data = getChange(data = 50 -1)
sky, earth, secondChange, data = getChange(data)
sky, earth, thirdChange, data = getChange(data)
changes[i]=[firstChange, secondChange, thirdChange, data/4]
ichanges = changes.values()
firstTransition = defaultdict(int)
for i in ichanges:
firstTransition[i[0], i[1]]+=1
secondTransition = defaultdict(int)
for i in ichanges:
secondTransition[i[1], i[2]]+=1
thirdTransition = defaultdict(int)
for i in ichanges:
thirdTransition[i[2], i[3]]+=1
cmap = cm.get_cmap('Accent_r', len(ichanges))
for k, v in firstTransition.iteritems():
plt.plot([1, 2], k, linewidth = v*w/N)
for k, v in secondTransition.iteritems():
plt.plot([2, 3], k, linewidth = v*w/N)
for k, v in thirdTransition.iteritems():
plt.plot([3, 4], k, linewidth = v*w/N)
plt.xlabel(u'Time')
plt.ylabel(u'Changes')
Explanation: Divination with yarrow stalks (蓍草卜卦)
"The number of the Great Expansion is fifty, and forty-nine of them are used. Divide them into two to represent the two powers; set one apart to represent the three powers; count them off by fours to represent the four seasons; return the remainder between the fingers to represent the intercalary month; in five years there are two intercalations, so there are two such returns before the count is laid down again. Heaven is one, earth is two; heaven is three, earth is four; heaven is five, earth is six; heaven is seven, earth is eight; heaven is nine, earth is ten. The numbers of heaven are five and the numbers of earth are five; matched in their places, each has its complement: the numbers of heaven come to twenty-five, the numbers of earth to thirty, and all the numbers of heaven and earth together are fifty-five. It is by these that the changes and transformations are accomplished and the gods and spirits are set in motion. The stalks for Qian come to two hundred and sixteen and those for Kun to one hundred and forty-four, three hundred and sixty in all, corresponding to the days of a year. The stalks for the two parts of the book come to eleven thousand five hundred and twenty, corresponding to the number of the ten thousand things. Therefore four operations complete a change, and eighteen changes complete a hexagram; the eight trigrams constitute the small completion. Extending them and applying them by analogy, everything that can be done in the world is covered. They make the Way manifest and make virtuous conduct divine; therefore one can respond and reply with them, and one can assist the spirits with them. The Master said: 'Whoever knows the way of change and transformation knows what the spirits do.'"
The Great Expansion uses fifty stalks, with one set aside unused. The rest are split to form heaven, earth and man and put through three changes: the remainder of the first change is 5 or 9, of the second 4 or 8, and of the third 4 or 8; the count that remains, divided by 4, is the result. That gives one line, and casting all six lines takes about an hour. The ancients' way of producing random numbers was far too time-consuming, so let's write a Python program to do it!
End of explanation
data = 50 - 1
Explanation: The Great Expansion numbers fifty; one stalk is set aside unused
End of explanation
sky, earth, firstChange, data = getChange(data)
print sky, '\n', earth, '\n',firstChange, '\n', data
Explanation: The first change
End of explanation
sky, earth, secondChange, data = getChange(data)
print sky, '\n', earth, '\n',secondChange, '\n', data
Explanation: The second change
End of explanation
sky, earth, thirdChange, data = getChange(data)
print sky, '\n', earth, '\n',thirdChange, '\n', data
Explanation: The third change
End of explanation
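# Hedged aside (not in the original notebook): tally how often each line value (6, 7, 8, 9)
# comes out of getYao(), to see how the three changes distribute the results under this
# implementation.
from collections import Counter
yao_tally = Counter(getYao(data=50 - 1)[0] for _ in range(10000))
print(yao_tally)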
getPredict()
getPredict()
getPredict()
Explanation: Obtain the six lines and the changing hexagram
End of explanation
fixPred, changePred = getPredict()
interpretPredict(fixPred, changePred )
Explanation: Obtain the hexagram name
End of explanation
#http://baike.fututa.com/zhouyi64gua/
import urllib2
from bs4 import BeautifulSoup
import os
# set work directory
os.chdir('/Users/chengjun/github/iching/')
dt = {'111111':'乾','011111':'夬','000000':'坤','010001':'屯','100010':'蒙','010111':'需','111010':'讼','000010':'师',
'010000':'比','110111':'小畜','111011':'履','000111':'泰','111000':'否','111101':'同人','101111':'大有','000100':'谦',
'001000':'豫','011001':'随','100110':'蛊','000011':'临','110000':'观','101001':'噬嗑','100101':'贲','100000':'剥',
'000001':'复','111001':'无妄','100111':'大畜','100001':'颐','011110':'大过','010010':'坎','101101':'离','011100':'咸',
'001110':'恒','111100':'遁','001111':'大壮','101000':'晋','000101':'明夷','110101':'家人','101011':'睽','010100':'蹇',
'001010':'解','100011':'损','110001':'益','111110':'姤','011000':'萃','000110':'升','011010':'困','010110':'井',
'011101':'革','101110':'鼎','001001':'震','100100':'艮','110100':'渐','001011':'归妹','001101':'丰','101100':'旅',
'110110':'巽','011011':'兑','110010':'涣','010011':'节','110011':'中孚','001100':'小过','010101':'既济','101010':'未济'}
dr = {}
for i, j in dt.iteritems():
dr[unicode(j, 'utf8')]= i
url = "http://baike.fututa.com/zhouyi64gua/"
content = urllib2.urlopen(url).read()  # fetch the HTML of the page
soup = BeautifulSoup(content)
articles = soup.find_all('div', {'class', 'gualist'})[0].find_all('a')
links = [i['href'] for i in articles]
links[:2]
dtext = {}
from time import sleep
num = 0
for j in links:
sleep(0.1)
num += 1
ghtml = urllib2.urlopen(j).read()  # fetch the HTML of the page
print j, num
gua = BeautifulSoup(ghtml, from_encoding = 'gb18030')
guaName = gua.title.text.split('_')[1].split(u'卦')[0]
guaId = dr[guaName]
guawen = gua.find_all('div', {'class', 'gua_wen'})
guaText = []
for i in guawen:
guaText.append(i.get_text() + '\n\n')
guaText = ''.join(guaText)
dtext[guaId] = guaText
dtextu = {}
for i, j in dtext.iteritems():
dtextu[i]= j.encode('utf-8')
dtext.values()[0]
import json
with open("/Users/chengjun/github/iching/package_data.dat",'w') as outfile:
json.dump(dtextu, outfile, ensure_ascii=False) #, encoding = 'utf-8')
dat = json.load(open('package_data.dat'), encoding='utf-8')
print dat.values()[1]
now, future = getPredict()
def ichingText(k):
import json
dat = json.load(open('iching/package_data.dat'))
print dat[k]
ichingText(future)
%matplotlib inline
plotTransitionRemainder(10000, w = 50)
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 10),facecolor='white')
plt.subplot(2, 2, 1)
plotTransitionRemainder(1000, w = 50)
plt.subplot(2, 2, 2)
plotTransitionRemainder(1000, w = 50)
plt.subplot(2, 2, 3)
plotTransitionRemainder(1000, w = 50)
plt.subplot(2, 2, 4)
plotTransitionRemainder(1000, w = 50)
dt = {'111111':u'乾','011111':u'夬','000000':u'坤','010001':u'屯','100010':u'蒙','010111':u'需','111010':u'讼','000010':'师',
'010000':u'比','110111':u'小畜','111011':u'履','000111':u'泰','111000':u'否','111101':u'同人','101111':u'大有','000100':u'谦',
'001000':u'豫','011001':u'随','100110':u'蛊','000011':u'临','110000':u'观','101001':u'噬嗑','100101':u'贲','100000':u'剥',
'000001':u'复','111001':u'无妄','100111':u'大畜','100001':u'颐','011110':u'大过','010010':u'坎','101101':u'离','011100':u'咸',
'001110':u'恒','111100':u'遁','001111':u'大壮','101000':u'晋','000101':u'明夷','110101':u'家人','101011':u'睽','010100':u'蹇',
'001010':u'解','100011':u'损','110001':u'益','111110':u'姤','011000':u'萃','000110':u'升','011010':u'困','010110':u'井',
'011101':u'革','101110':u'鼎','001001':u'震','100100':u'艮','110100':u'渐','001011':u'归妹','001101':u'丰','101100':u'旅',
'110110':u'巽','011011':u'兑','110010':u'涣','010011':u'节','110011':u'中孚','001100':u'小过','010101':u'既济','101010':u'未济'
}
for i in dt.values():
print i
dtu = {}
for i, j in dt.iteritems():
dtu[i] = j if isinstance(j, unicode) else unicode(j, 'utf-8')
def ichingDate(d):
import random
random.seed(d)
try:
print 'Your birthday & your prediction time:', str(d)
except:
print('Your birthday & your prediction time:', str(d))
Explanation: Add the hexagram texts
End of explanation |
14,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'pcmdi-test-1-0', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: PCMDI
Source ID: PCMDI-TEST-1-0
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
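For instance, the author metadata might be filled in like this (the name and e-mail below are purely illustrative placeholders, not real values):
# Hypothetical example only -- substitute the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")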
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
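A STRING property such as this one is filled with free text via the set_value call shown above; for example (placeholder wording only, assuming set_value applies to the property selected by the preceding set_id call, as in the template):
# Hypothetical example only
DOC.set_value("Coupled atmosphere-ocean-land-sea-ice test configuration; see the model documentation for details.")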
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
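For an ENUM property like this one, the value must be one of the choices listed in the cell above; a hypothetical selection might be:
# Hypothetical example only -- pick whichever listed choice applies to your model
DOC.set_value("OASIS3-MCT")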
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
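BOOLEAN properties take an unquoted True or False rather than a string; for example (illustrative only):
# Hypothetical example only
DOC.set_value(False)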
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
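The same pattern repeats for each forcing agent below: the value is one (or more) of the enumerated provision codes listed in the cell above, e.g. (hypothetical choice only):
# Hypothetical example only -- use the code that matches how this forcing is actually provided
DOC.set_value("C")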
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
14,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mining used-car sales
This is a running log of some work on used-car sales. I have no intention of using this information for financial purposes; rather, I'd like to ask the question "is there regional variation in used-car prices?". To do this I will use the BeautifulSoup package to look at www.pistonheads.com, my preferred way to search for used cars.
Step2: After some fiddling I have wrapped up the method to get a page of results (100 at a time) in a function
Step3: An example
Here we get 20 results from the url with "M" 269. On the pistonheads website this corresponds to a BMW 1 series. I chose this search in particular as there isn't huge variation in the specs: most have a similar engine size and there are few special runs. Let's have a look at the first few results
Step4: This looks good so far. From the source website we see there are 2,580 entries, so let's pull down a few and see what we can do with the data. The best way to do this is to iterate through the page counts and concatenate the resulting data frames
Step5: Okay so now we have a good number of adverts to look at.
Mileage VS Price
Step6: This is kind of messy, and I would be interested to know what those low-mileage, high-price outliers indicate
Outliers
We will use the pandas query method (which requires numexpr to be installed) to have a look at these outliers.
Step7: As we can see these are all the "1M" series, a much faster version. The BHP is 335 which is significantly greater than the average
Step8: Averaged Price VS Mileage
Let's clean the data by removing all NaN (this is typically ill-advised, but this is just for fun) and also get rid of those high-performance cars
Step9: And now we have a clean collection, so we will apply a rolling mean to the price and mileage
Step10: Fitting to the data
We may now imagine coming up with a model for the price. The obvious starting point is to assume that the rate of depreciation with mileage is proportional to the current price, therefore
Step11: This model seems fairly able to predict the average used-car sale price. The parameter $P_0$ measures the value at 0 miles; of course, all used cars should have at least some mileage to qualify as used, but a surprising number have very low mileage
Step12: Add latitude and longitude of the location
Step13: As a result we have predicted the initial price to be $20259 \pm 23$ which isn't all that far off the actual new-car prices $21180-21710$. | Python Code:
from BeautifulSoup import BeautifulSoup
import urllib
import pandas as pd
import seaborn
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as so
%matplotlib inline
import seaborn as sns
sns.set_style(rc={'font.family': ['sans-serif'], 'axes.labelsize': 25})
sns.set_context("notebook")
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['axes.labelsize'] = 18
Explanation: Mining used-car sales
This is a running log of some work on used-car sales. I have no intention of using this information for financial purposes; rather, I'd like to ask the question "is there regional variation in used-car prices?". To do this I will use the BeautifulSoup package to look at www.pistonheads.com, my preferred way to search for used cars.
End of explanation
def strip_results_from_ad(ad):
Strip out the information from a single advert and add it to the results dictionary
desc = ad.find("div", attrs={"class": "listing-headline"}).find("h3").text
loc = ad.findAll("p", attrs={"class": "location"})[0].text
price = int(ad.find("div", attrs={"class": "price"}).text.replace(u"£", "").replace(",", ""))
specs = ad.find("ul", attrs={"class": "specs"}).findAll("li")
if len(specs) == 4:
miles = int(specs[0].text.rstrip(" miles").replace(",", ""))
fuel = specs[1].text
bhp = int(specs[2].text.rstrip(" bhp"))
transmission = specs[3].text
else:
fuel = "NA"
bhp = np.nan
transmission = "NA"
try:
miles = int(specs[0].text.rstrip(" miles").replace(",", ""))
except:
# Except any error...!
miles = np.nan
return desc, loc, price, miles, fuel, bhp, transmission
def create_url(page=1, M=269, rpp=100):
base = ("http://www.pistonheads.com/classifieds?Category=used-cars"
"&M={M}&ResultsPerPage={rpp}&Page={page}")
return base.format(page=page, rpp=rpp, M=M)
def get_results(*args, **kwargs):
url = create_url(*args, **kwargs)
f = urllib.urlopen(url).read()
soup = BeautifulSoup(f)
ads = soup.findAll("div", attrs={"class": "ad-listing"})
results = {"desc":[], "loc":[], "price":[], "miles":[], "fuel":[], "bhp":[], "transmission":[]}
for ad in ads:
try:
desc, loc, price, miles, fuel, bhp, transmission = strip_results_from_ad(ad)
except:
break
results["desc"].append(desc)
results["loc"].append(loc)
results["price"].append(price)
results["miles"].append(miles)
results["fuel"].append(fuel)
results["bhp"].append(bhp)
results["transmission"].append(transmission)
return results
Explanation: After some fiddling I have wrapped up the method to get a page of results (100 at a time) in a function:
End of explanation
r = get_results(M=269, rpp=20)
df_small = pd.DataFrame(r)
df_small
Explanation: An example
Here we get 20 results from the url with "M" 269. On the pistonheads website this corresponds to a BMW 1 series. I chose this search in particular as there isn't huge variation in the specs: most have a similar engine size and there are few special runs. Let's have a look at the first few results
End of explanation
dfs = []
for page in xrange(1, 25):
r = get_results(M=269, rpp=100, page=page)
dfs.append(pd.DataFrame(r))
df = pd.concat(dfs)
len(df)
Explanation: This looks good so far. From the source website we see there are 2,580 entries, so let's pull down a few and see what we can do with the data. The best way to do this is to iterate through the page counts and concatenate the resulting data frames:
End of explanation
ax = df.sort("price").plot("miles", "price", kind="scatter")
Explanation: Okay so now we have a good number of adverts to look at.
Mileage VS Price
End of explanation
df.query("miles < 50000 and price > 30000")
Explanation: This is kind of messy, and I would be interested to know what those low-mileage, high-price outliers indicate
Outliers
We will use the pandas query method (which requires numexpr to be installed) to have a look at these outliers.
End of explanation
df.bhp.mean()
Explanation: As we can see these are all the "1M" series, a much faster version. The BHP is 335 which is significantly greater than the average:
End of explanation
df_clean = df[~np.isnan(df.price)]
df_clean = df_clean[~np.isnan(df_clean.miles)]
df_clean = df_clean[df_clean.price < 30000]
df_clean = df_clean.sort("miles")
Explanation: Averaged Price VS Mileage
Let's clean the data by removing all NaN (this is typically ill-advised, but this is just for fun) and also get rid of those high-performance cars
End of explanation
ax = plt.subplot(111)
window = 100
mean_miles = pd.rolling_mean(df_clean.miles, window, center=True)
mean_price = pd.rolling_mean(df_clean.price, window, center=True)
# Drop the nans created in the rolling_window
mean_miles = mean_miles[~np.isnan(mean_miles)]
mean_price = mean_price[~np.isnan(mean_price)]
ax.plot(df_clean.miles, df_clean.price, "o", alpha=0.3, markersize=5)
ax.plot(mean_miles, mean_price, lw=3, color=seaborn.xkcd_rgb["pale red"])
ax.set_xlabel("Mileage")
ax.set_ylabel(u"Price (£)")
plt.show()
Explanation: And now we have a clean collection, so we will apply a rolling mean to the price and mileage:
End of explanation
import scipy.optimize as so
ax = plt.subplot(111)
def P(m, P0, k):
return P0 * np.exp(-k * m)
popt, pcov = so.curve_fit(P, mean_miles, mean_price, p0=[20000, 1e-10])
ax.plot(df_clean.miles, df_clean.price, "o", alpha=0.3, markersize=5)
ax.plot(mean_miles, mean_price, lw=3, color=seaborn.xkcd_rgb["pale red"])
fit_miles = np.linspace(df_clean.miles.min(), df_clean.miles.max(), 1000)
ax.plot(fit_miles, P(fit_miles, *popt), zorder=10)
ax.annotate(xy=(0.6, 0.7), xycoords="figure fraction",
s=("$P_0={:5.0f} \pm {:5.0f}$\n$k={:1.2e} \pm {:1.2e}$".format(
popt[0], np.sqrt(pcov[0, 0]), popt[1], np.sqrt(pcov[1, 1]))),
fontsize=20)
ax.set_xlabel("Mileage")
ax.set_ylabel(u"Price (£)")
plt.show()
Explanation: Fitting to the data
We may now imagine coming up with a model for the price. The obvious starting point is to assume that the rate of depreciation with mileage is proportional to the current price, therefore:
$$ \frac{dP}{dm} = -k P $$
where $P$ is the price, $m$ is the mileage and $k$ is a constant of proportionality. Solving in the usual way yields:
$$ P(m) = P_{0} e^{-km} $$
Let's try fitting this:
End of explanation
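Before moving on, a quick numerical sanity check that $P_0 e^{-km}$ really does solve the depreciation equation (a sketch only; the parameter values below are illustrative, not the fitted ones):
import numpy as np
from scipy.integrate import odeint

P0_demo, k_demo = 20000.0, 1e-5   # illustrative values, not the fitted parameters
m_grid = np.linspace(0, 150000, 50)

# integrate dP/dm = -k P numerically and compare with the closed form
P_numeric = odeint(lambda P, m: -k_demo * P, P0_demo, m_grid).ravel()
P_analytic = P0_demo * np.exp(-k_demo * m_grid)
print(np.allclose(P_numeric, P_analytic, rtol=1e-4))  # expect True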
len(df.query("miles < 500"))
Explanation: This model seems fairly able to predict the average used-car sale price. The parameter $P_0$ measures the value at 0 miles; of course, all used cars should have at least some mileage to qualify as used, but a surprising number have very low mileage:
End of explanation
df['P0'] = df.price*np.exp(popt[1] * df.miles)
df.head()
grouped = df.groupby(['loc',])
P0_mean = grouped['P0'].mean().values
counts = grouped['P0'].count().values
locs = grouped.all().index
df_mean = pd.DataFrame(dict(locs = locs,
P0_mean = P0_mean,
counts = counts))
df_mean.head()
df_mean_clean = df_mean[df_mean.counts > 1]
df_mean_clean.head()
from geopy.geocoders import Nominatim
N = Nominatim()
location = [N.geocode(loc) for loc in df_mean_clean.locs]
df_mean_clean['location'] = location
df_mean_clean = df_mean_clean.dropna()
df_mean_clean['lat'] = [l.latitude for l in df_mean_clean.location.values]
df_mean_clean['lon'] = [l.longitude for l in df_mean_clean.location.values]
df_mean_clean.head()
Explanation: Add latitude and longitude of the location
End of explanation
fig = plt.figure()
from mpl_toolkits.basemap import Basemap
# Calculate some parameters which will be resused]
#lat_0 = df.latitude.mean()
#lon_0 = df.longitude.mean()
#llcrnrlon, urcrnrlon = PaddingFunction(df.longitude.min(), df.longitude.max(), frac=0.3)
#llcrnrlat, urcrnrlat = PaddingFunction(df.latitude.min(), df.latitude.max())
lat_0 = 0
lon_0 = 1
llcrnrlon, urcrnrlon = -7, 2
llcrnrlat, urcrnrlat = 49, 60
# Create a map, using the Gall–Peters projection,
m = Basemap(projection='gall',
resolution = 'l',
area_thresh = 10000.0,
lat_0=lat_0, lon_0=lon_0,
llcrnrlon=llcrnrlon,
urcrnrlon=urcrnrlon,
llcrnrlat=llcrnrlat,
urcrnrlat=urcrnrlat,
ax=fig.gca()
)
m.drawcounties()
m.drawcoastlines()
m.fillcontinents(color = '#996633')
m.drawmapboundary(fill_color='#0099FF')
lons = df_mean_clean.lon.values
lats = df_mean_clean.lat.values
z = df_mean_clean.P0_mean.values
x, y = m(lons*180./np.pi, lats*180./np.pi)
m.pcolormesh(x, y, z)
Explanation: As a result we have predicted the initial price to be $20259 \pm 23$ which isn't all that far off the actual new-car prices $21180-21710$.
End of explanation |
14,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LMEC
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
14,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[2-1] Import the modules for creating animations and switch to a mode that can display animations.
Step1: [2-2] Draw the motion of an object that falls in the y direction while moving in the x direction.
An animated GIF file, "animation03.gif", is also created at the same time.
Step2: [2-3] Draw the motion of an object thrown diagonally upward.
An animated GIF file, "animation04.gif", is also created at the same time.
Step3: [2-4] Reproduce the "monkey hunting" problem as an animation.
An animated GIF file, "animation05.gif", is also created at the same time.
Step4: [2-5] Reproduce a fireworks display as an animation.
動画のGIFファイル「animation06.gif」も同時に作成します。 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
%matplotlib nbagg
Explanation: [2-1] Import the modules for creating animations and switch to a mode that can display animations.
End of explanation
fig = plt.figure(figsize=(4,4))
x = 0
y, vy = 0, 0
images = []
for _ in range(25):
image = plt.scatter([x],[y])
images.append([image])
x += 1
y += vy
vy -= 1
ani = animation.ArtistAnimation(fig, images, interval=100)
ani.save('animation03.gif', writer='imagemagick', fps=25)
Explanation: [2-2] Draw the motion of an object that falls in the y direction while moving in the x direction.
An animated GIF file, "animation03.gif", is also created at the same time.
End of explanation
fig = plt.figure(figsize=(4,4))
x = 0
y, vy = 0, 10
images = []
for _ in range(22):
image = plt.scatter([x],[y])
images.append([image])
y += vy
vy -= 1
x += 1
ani = animation.ArtistAnimation(fig, images, interval=100)
ani.save('animation04.gif', writer='imagemagick', fps=25)
Explanation: [2-3] Draw the motion of an object thrown diagonally upward.
An animated GIF file, "animation04.gif", is also created at the same time.
End of explanation
fig = plt.figure(figsize=(4,4))
x1, y1 = 0, 0
vx1, vy1 = 20, 20
y2, vy2 = 500, 0
images = []
for _ in range(50):
image = plt.scatter([x1, 500],[y1, y2])
images.append([image])
x1 += vx1
y1 += vy1
y2 += vy2
vy1 -= 1
vy2 -= 1
ani = animation.ArtistAnimation(fig, images, interval=100)
ani.save('animation05.gif', writer='imagemagick', fps=25)
Explanation: [2-4] Reproduce the "monkey hunting" problem as an animation.
An animated GIF file, "animation05.gif", is also created at the same time.
End of explanation
fig = plt.figure(figsize=(5,5))
plt.xticks([])
plt.yticks([])
plt.xlim(-1500,1500)
plt.ylim(-2200,800)
num = 20
x, y, vx, vy = [], [], [], []
for i in range(num):
x.append(0)
y.append(0)
vx.append(20*np.cos(2*np.pi*i/num))
vy.append(20*np.sin(2*np.pi*i/num))
images = []
for _ in range(110):
xs, ys = [], []
for i in range(num):
xs.append(x[i])
ys.append(y[i])
x[i] += vx[i]
y[i] += vy[i]
vy[i] -= 1
image = plt.scatter(xs, ys)
images.append([image])
ani = animation.ArtistAnimation(fig, images, interval=20)
ani.save('animation06.gif', writer='imagemagick', fps=25)
Explanation: [2-5] Reproduce a fireworks display as an animation.
An animated GIF file, "animation06.gif", is also created at the same time.
End of explanation |
14,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Working-with-Python-Classes" data-toc-modified-id="Working-with-Python-Classes-1"><span class="toc-item-num">1 </span>Working with Python Classes</a></span><ul class="toc-item"><li><span><a href="#Public,-Private,-Protected" data-toc-modified-id="Public,-Private,-Protected-1.1"><span class="toc-item-num">1.1 </span>Public, Private, Protected</a></span></li><li><span><a href="#Class-Decorators" data-toc-modified-id="Class-Decorators-1.2"><span class="toc-item-num">1.2 </span>Class Decorators</a></span><ul class="toc-item"><li><span><a href="#@Property" data-toc-modified-id="@Property-1.2.1"><span class="toc-item-num">1.2.1 </span>@Property</a></span></li><li><span><a href="#@classmethod-and-@staticmethod" data-toc-modified-id="@[email protected]"><span class="toc-item-num">1.2.2 </span>@classmethod and @staticmethod</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Working with Python Classes
Encapsulation is seen as the bundling of data with the methods that operate on that data. It is often accomplished by providing two kinds of methods for attributes
Step2: When the Python compiler sees a private attribute, it actually transforms the actual name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attribute, write proper docstrings and assume that everyone is a consenting adult, i.e. won't do anything with the protected method unless they know what they are doing.
Class Decorators
@property The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them.
@classmethod To add additional constructor to the class.
@staticmethod To attach functions to classes so people won't misuse them in wrong places.
@Property
Let's assume one day we decide to make a class that could store the temperature in degrees Celsius. The temperature will be a non-public attribute, so our end-users won't have direct access to it.
The class will also implement a method to convert the temperature into degrees Fahrenheit. We also want to enforce a value constraint on the temperature, so that it cannot go below -273 degrees Celsius. One way of doing this is to define getter and setter interfaces to manipulate it.
Step3: Instead of that, now the property way. Where we define the @property and the @[attribute name].setter.
Step4: @classmethod and @staticmethod
@classmethods create alternative constructors for the class. An example of this behavior is that there are different ways to construct a dictionary.
Step5: The cls is critical, as it is an object that holds the class itself. This makes them work with inheritance.
Step6: The purpose of @staticmethod is to attach functions to classes. We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
%watermark -a 'Ethen' -d -t -v
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Working-with-Python-Classes" data-toc-modified-id="Working-with-Python-Classes-1"><span class="toc-item-num">1 </span>Working with Python Classes</a></span><ul class="toc-item"><li><span><a href="#Public,-Private,-Protected" data-toc-modified-id="Public,-Private,-Protected-1.1"><span class="toc-item-num">1.1 </span>Public, Private, Protected</a></span></li><li><span><a href="#Class-Decorators" data-toc-modified-id="Class-Decorators-1.2"><span class="toc-item-num">1.2 </span>Class Decorators</a></span><ul class="toc-item"><li><span><a href="#@Property" data-toc-modified-id="@Property-1.2.1"><span class="toc-item-num">1.2.1 </span>@Property</a></span></li><li><span><a href="#@classmethod-and-@staticmethod" data-toc-modified-id="@[email protected]"><span class="toc-item-num">1.2.2 </span>@classmethod and @staticmethod</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
class A:
def __init__(self):
self.__priv = "I am private"
self._prot = "I am protected"
self.pub = "I am public"
x = A()
print(x.pub)
# Whenever we assign or retrieve any object attribute
# Python searches it in the object's __dict__ dictionary
print(x.__dict__)
Explanation: Working with Python Classes
Encapsulation is seen as the bundling of data with the methods that operate on that data. It is often accomplished by providing two kinds of methods for attributes: The methods for retrieving or accessing the values of attributes are called getter methods. Getter methods do not change the values of attributes, they just return the values. The methods used for changing the values of attributes are called setter methods.
Public, Private, Protected
There are two ways to restrict the access to class attributes:
protected. First, we can prefix an attribute name with a leading underscore "_". This marks the attribute as protected. It tells users of the class not to use this attribute unless, somebody writes a subclass.
private. Second, we can prefix an attribute name with two leading underscores "__". The attribute is now inaccessible and invisible from outside. It's neither possible to read nor write to those attributes except inside of the class definition itself.
End of explanation
class Celsius:
def __init__(self, temperature = 0):
self.set_temperature(temperature)
def to_fahrenheit(self):
return (self.get_temperature() * 1.8) + 32
def get_temperature(self):
return self._temperature
def set_temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
self._temperature = value
# c = Celsius(-277) # this returns an error
c = Celsius(37)
c.get_temperature()
Explanation: When the Python compiler sees a private attribute, it actually transforms the actual name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attribute, write proper docstrings and assume that everyone is a consenting adult, i.e. won't do anything with the protected method unless they know what they are doing.
Class Decorators
@property The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them.
@classmethod To add additional constructor to the class.
@staticmethod To attach functions to classes so people won't misuse them in wrong places.
@Property
Let's assume one day we decide to make a class that could store the temperature in degrees Celsius. The temperature will be a non-public attribute, so our end-users won't have direct access to it.
The class will also implement a method to convert the temperature into degrees Fahrenheit. We also want to enforce a value constraint on the temperature, so that it cannot go below -273 degrees Celsius. One way of doing this is to define getter and setter interfaces to manipulate it.
End of explanation
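A small illustrative check (an addition, not part of the original notebook) of the name transformation described above, reusing the class A defined earlier:
x = A()
# the "private" attribute is reachable under its mangled name
print(x._A__priv)
# but not under the plain double-underscore name from outside the class
try:
    x.__priv
except AttributeError as err:
    print('AttributeError:', err)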
class Celsius:
def __init__(self, temperature = 0):
self._temperature = temperature
def to_fahrenheit(self):
return (self.temperature * 1.8) + 32
# have access to the value like it is an attribute instead of a method
@property
def temperature(self):
return self._temperature
# like accessing the attribute with an extra layer of error checking
@temperature.setter
def temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
print('Setting value')
self._temperature = value
c = Celsius(37)
# much easier to access then the getter, setter way
print(c.temperature)
# note that you can still access the private attribute
# and violate the temperature checking,
# but then it's the users fault not yours
c._temperature = -300
print(c._temperature)
# accessing the attribute will return the ValueError error
# c.temperature = -300
Explanation: Instead of that, now the property way. Where we define the @property and the @[attribute name].setter.
End of explanation
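As an aside (an addition, not part of the original notebook), a minimal sketch with a hypothetical Kelvin class showing that omitting the setter leaves the property read-only:
class Kelvin:
    def __init__(self, temperature=0):
        self._temperature = temperature

    @property
    def temperature(self):
        # no @temperature.setter is defined, so assignment from outside will fail
        return self._temperature

k = Kelvin(300)
print(k.temperature)
try:
    k.temperature = 0
except AttributeError as err:
    print('AttributeError:', err)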
print(dict.fromkeys(['raymond', 'rachel', 'mathew']))
import time
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# Primary
a = Date(2012, 12, 21)
print(a.__dict__)
# Alternate
b = Date.today()
print(b.__dict__)
Explanation: @classmethod and @staticmethod
@classmethods create alternative constructors for the class. An example of this behavior is that there are different ways to construct a dictionary.
End of explanation
class NewDate(Date):
pass
# Creates an instance of Date (cls=Date)
c = Date.today()
print(c.__dict__)
# Creates an instance of NewDate (cls=NewDate)
d = NewDate.today()
print(d.__dict__)
Explanation: The cls is critical, as it is an object that holds the class itself. This makes them work with inheritance.
End of explanation
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# the logic belongs with the date class
@staticmethod
def show_tomorrow_date():
t = time.localtime()
return t.tm_year, t.tm_mon, t.tm_mday + 1
Date.show_tomorrow_date()
Explanation: The purpose of @staticmethod is to attach functions to classes. We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context.
End of explanation |
14,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC.
Step1: Hourglass
Step2: Download ImageNet32/64 data
Downloading the datasets for evaluation requires some hacks because URLs from tensorflow_datasets are invalid. Two cells below download data for ImageNet32 and ImageNet64, respectively. Choose the one appropriate for the checkpoint you want to evaluate.
Step3: Load the ImageNet32 model
This colab can be used to evaluate both imagenet32 and imagenet64 models. We start with our ImageNet32 checkpoint.
Step4: Evaluate on the validation set
Step5: ImageNet32 evaluation
Step6: ImageNet64 evaluation | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC.
End of explanation
!pip install -q --upgrade jaxlib==0.1.71+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q --upgrade jax==0.2.21
!pip install -q git+https://github.com/google/trax.git
!pip install -q pickle5
!pip install -q gin
# Execute this for a proper TPU setup!
# Make sure the Colab Runtime is set to Accelerator: TPU.
import jax
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20200416'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
jax.devices()
Explanation: Hourglass: ImageNet32/64 evaluation
Install dependencies
End of explanation
# Download ImageNet32 data (the url in tfds is down)
!gdown https://drive.google.com/uc?id=1OV4lBnuIcbqeuoiK83jWtlnQ9Afl6Tsr
!tar -zxf /content/im32.tar.gz
# tfds hack for imagenet32
import json
json_path = '/content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0/dataset_info.json'
with open(json_path, mode='r') as f:
ds_info = json.load(f)
if 'moduleName' in ds_info:
del ds_info['moduleName']
with open(json_path, mode='w') as f:
json.dump(ds_info, f)
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/32x32
!cp -r /content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0 /root/tensorflow_datasets/downsampled_imagenet/32x32
# Download and set up ImageNet64 (validation only) data
!gdown https://drive.google.com/uc?id=1ZoI3ZKMUXfrIlqPfIBCcegoe0aJHchpo
!tar -zxf im64_valid.tar.gz
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
!cp im64_valid/* /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
# Download gin configs
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet32.gin
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet64.gin
Explanation: Download ImageNet32/64 data
Downloading the datasets for evaluation requires some hacks because URLs from tensorflow_datasets are invalid. Two cells below download data for ImageNet32 and ImageNet64, respectively. Choose the one appropriate for the checkpoint you want to evaluate.
End of explanation
import gin
gin.parse_config_file('hourglass_imagenet32.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet32/model_470000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
Explanation: Load the ImageNet32 model
This colab can be used to evaluate both imagenet32 and imagenet64 models. We start with our ImageNet32 checkpoint.
End of explanation
import gin
import trax
# Here is the hacky part to remove shuffling of the dataset
def get_eval_dataset():
dataset_name = gin.query_parameter('data_streams.dataset_name')
data_dir = trax.data.tf_inputs.download_and_prepare(dataset_name, None)
train_data, eval_data, keys = trax.data.tf_inputs._train_and_eval_dataset(
dataset_name, data_dir, eval_holdout_size=0)
bare_preprocess_fn = gin.query_parameter('data_streams.bare_preprocess_fn')
eval_data = bare_preprocess_fn.scoped_configurable_fn(eval_data, training=False)
return trax.fastmath.dataset_as_numpy(eval_data)
from trax import fastmath
from trax.fastmath import numpy as jnp
from tqdm import tqdm
def batched_inputs(data_gen, batch_size):
inp_stack, mask_stack = [], []
for input_example, mask in data_gen:
inp_stack.append(input_example)
mask_stack.append(mask)
if len(inp_stack) % batch_size == 0:
if len(set(len(example) for example in inp_stack)) > 1:
for x, m in zip(inp_stack, mask_stack):
yield x, m
else:
input_batch = jnp.stack(inp_stack)
mask_batch = jnp.stack(mask_stack)
yield input_batch, mask_batch
inp_stack, mask_stack = [], []
if len(inp_stack) > 0:
for inp, mask in zip(inp_stack, mask_stack):
yield inp, mask
def run_full_evaluation(accelerated_model_with_loss, examples_data_gen,
batch_size, pad_to_len=None):
# Important: we assume batch size per device = 1
assert batch_size % fastmath.local_device_count() == 0
assert fastmath.local_device_count() == 1 or \
batch_size == fastmath.local_device_count()
loss_sum, n_tokens = 0.0, 0
def pad_right(inp_tensor):
if pad_to_len:
return jnp.pad(inp_tensor,
[[0, 0], [0, max(0, pad_to_len - inp_tensor.shape[1])]])
else:
return inp_tensor
batch_gen = batched_inputs(examples_data_gen, batch_size)
def batch_leftover_example(input_example, example_mask):
def extend_shape_to_batch_size(tensor):
return jnp.repeat(tensor, repeats=batch_size, axis=0)
return map(extend_shape_to_batch_size,
(input_example[None, ...], example_mask[None, ...]))
for i, (inp, mask) in tqdm(enumerate(batch_gen)):
leftover_batch = False
if len(inp.shape) == 1:
inp, mask = batch_leftover_example(inp, mask)
leftover_batch = True
inp, mask = map(pad_right, [inp, mask])
example_losses = accelerated_model_with_loss((inp, inp, mask))
if leftover_batch:
example_losses = example_losses[:1]
mask = mask[:1]
example_lengths = mask.sum(axis=-1)
loss_sum += (example_lengths * example_losses).sum()
n_tokens += mask.sum()
if i % 200 == 0:
print(f'Batches: {i}, current loss: {loss_sum / float(n_tokens)}')
return loss_sum / float(n_tokens)
Explanation: Evaluate on the validation set
End of explanation
def data_gen(dataset):
for example in dataset:
example = example['image']
mask = jnp.ones_like(example)
yield example, mask
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
Explanation: ImageNet32 evaluation
End of explanation
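A note on the reported numbers: the quantity printed as perplexity above is the per-token cross-entropy of the model, and because it is measured in nats, dividing by ln 2 gives bits per dimension. A tiny standalone illustration of that conversion (the loss value here is made up):
import numpy as np
nats_per_token = 3.8                       # hypothetical average cross-entropy in nats
bits_per_dim = nats_per_token / np.log(2)  # the same quantity expressed in bits
print(bits_per_dim)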
gin.parse_config_file('hourglass_imagenet64.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet64/model_300000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
Explanation: ImageNet64 evaluation
End of explanation |
14,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code to improve
Step3: Improved version
Step4: What is extra here?
import ai
import utils
from random import randrange
def vyhodnot(pole):
# Funkce vezme hrací pole a vrátí výsledek
# na základě aktuálního stavu hry
if "xxx" in pole: #Vyhrál hráč s křížky
return "x"
elif "ooo" in pole: #Vyhrál hráč s kolečky.
return "o"
elif "-" not in pole: #Nikdo nevyhrál
return "!"
else: #Hra ještě neskončila.
return "-"
def tah_pocitace(pole):
"Počítač vybere pozici, na kterou hrát, a vrátí herní pole se zaznamenaným tahem počítače"
delka=len(pole)
while True:
pozice=randrange(1,delka-1)
if "-" in pole[pozice]:
if "o" in pole[pozice+1] or "o" in pole[pozice-1] or "x" in pole[pozice+1] or "x" in pole[pozice-1]: #počítač hraje strategicky
return pozice
Explanation: Kód k vylepšení
End of explanation
from random import randrange
def vyhodnot(pole):
Funkce vezme hrací pole a vrátí výsledek
na základě aktuálního stavu hry
if "xxx" in pole: #Vyhrál hráč s křížky
return "x"
elif "ooo" in pole: #Vyhrál hráč s kolečky.
return "o"
elif "-" not in pole: #Nikdo nevyhrál
return "!"
else: #Hra ještě neskončila.
return "-"
def tah_pocitace(pole):
Počítač vybere pozici, na kterou hrát,
a vrátí ideální pozici k tahu
delka = len(pole)
while True:
pozice = randrange(1, delka - 1)
if "-" in pole[pozice]:
if "o" in pole[pozice + 1] or "o" in pole[pozice - 1] or \
"x" in pole[pozice + 1] or "x" in pole[pozice - 1]: #počítač hraje strategicky
return pozice
Explanation: Vylepšená verze
End of explanation
pozice = int(randrange(len(pole)))
Explanation: Co je tady navíc?
End of explanation |
14,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NERC
Source ID: HADGEM3-GC31-HM
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
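For illustration only, a filled-in authors cell could look like the line below; the name and e-mail address are placeholders, not actual authors of this document:
# Hypothetical example - replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")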
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
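As an illustration of filling a free-text STRING property such as 1.2 above, one of the example names quoted in the description could be passed straight to set_value (the value below is a placeholder, not a statement about this configuration):
# Illustrative placeholder - substitute the ocean model actually used
DOC.set_value("NEMO 3.6")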
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
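For a multi-valued ENUM property (cardinality 1.N) such as 1.4 above, the notebook convention appears to be one DOC.set_value call per selected choice; a sketch using choices from the valid list, purely as an illustration:
# Illustrative only - the actual approximations depend on the model being documented
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")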
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
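Numeric properties take a bare Python number rather than a quoted string; a sketch for the FLOAT property 2.6 above, with a placeholder value:
# Placeholder figure in J/(kg K) - use the value actually configured in the model
DOC.set_value(4000.0)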
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
14,431 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a data frame with one (string) column and I'd like to split it into two (string) columns, with one column header as 'fips' and the other 'row' | Problem:
import pandas as pd
df = pd.DataFrame({'row': ['114 AAAAAA', '514 ENENEN',
'1926 HAHAHA', '0817 O-O,O-O',
'998244353 TTTTTT']})
def g(df):
return pd.DataFrame(df.row.str.split(' ',1).tolist(), columns = ['fips','row'])
df = g(df.copy()) |
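A quick check of the result (both new columns are strings, so '0817' keeps its leading zero); note that newer pandas releases want the split count as a keyword, i.e. .str.split(' ', n=1):
print(df['fips'].tolist())   # ['114', '514', '1926', '0817', '998244353']
print(df['row'].tolist())    # ['AAAAAA', 'ENENEN', 'HAHAHA', 'O-O,O-O', 'TTTTTT']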
14,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Install TensorFlow for C
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Linker
On Linux/macOS, if you extract the TensorFlow C library to a system directory,
such as /usr/local, configure the linker with ldconfig
Step3: If you extract the TensorFlow C library to a non-system directory, such as
~/mydir, then configure the linker environment variables
Step4: Compile
Compile the example program to create an executable, then run
Step5: Success | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
%%bash
FILENAME=libtensorflow-cpu-linux-x86_64-2.8.0.tar.gz
wget -q --no-check-certificate https://storage.googleapis.com/tensorflow/libtensorflow/${FILENAME}
sudo tar -C /usr/local -xzf ${FILENAME}
Explanation: Install TensorFlow for C
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/install/lang_c"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/install/lang_c.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/install/lang_c.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/install/lang_c.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorFlow provides a C API that can be used to build
bindings for other languages.
The API is defined in
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h" class="external"><code>c_api.h</code></a>
and designed for simplicity and uniformity rather than convenience.
Nightly libtensorflow C packages
libtensorflow packages are built nightly and uploaded to GCS for all supported
platforms. They are uploaded to the
libtensorflow-nightly GCS bucket
and are indexed by operating system and date built. For MacOS and Linux shared
objects, there is a
script
that renames the .so files versioned to the current date copied into the
directory with the artifacts.
Supported Platforms
TensorFlow for C is supported on the following systems:
Linux, 64-bit, x86
macOS, Version 10.12.6 (Sierra) or higher
Windows, 64-bit x86
Setup
Download and extract
<table>
<tr><th>TensorFlow C library</th><th>URL</th></tr>
<tr class="alt"><td colspan="2">Linux</td></tr>
<tr>
<td>Linux CPU only</td>
<td class="devsite-click-to-copy"><a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.8.0.tar.gz">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.8.0.tar.gz</a></td>
</tr>
<tr>
<td>Linux GPU support</td>
<td class="devsite-click-to-copy"><a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.8.0.tar.gz">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.8.0.tar.gz</a></td>
</tr>
<tr class="alt"><td colspan="2">macOS</td></tr>
<tr>
<td>macOS CPU only</td>
<td class="devsite-click-to-copy"><a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-darwin-x86_64-2.8.0.tar.gz">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-darwin-x86_64-2.8.0.tar.gz</a></td>
</tr>
<tr class="alt"><td colspan="2">Windows</td></tr>
<tr>
<td>Windows CPU only</td>
<td class="devsite-click-to-copy"><a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-2.8.0.zip">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-2.8.0.zip</a></td>
</tr>
<tr>
<td>Windows GPU only</td>
<td class="devsite-click-to-copy"><a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-2.8.0.zip">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-2.8.0.zip</a></td>
</tr>
</table>
Extract the downloaded archive, which contains the header files to include in
your C program and the shared libraries to link against.
On Linux and macOS, you may want to extract to /usr/local/lib:
End of explanation
%%bash
sudo ldconfig /usr/local/lib
Explanation: Linker
On Linux/macOS, if you extract the TensorFlow C library to a system directory,
such as /usr/local, configure the linker with ldconfig:
End of explanation
%%writefile hello_tf.c
#include <stdio.h>
#include <tensorflow/c/c_api.h>
int main() {
printf("Hello from TensorFlow C library version %s\n", TF_Version());
return 0;
}
Explanation: If you extract the TensorFlow C library to a non-system directory, such as
~/mydir, then configure the linker environment variables:
<div class="ds-selector-tabs">
<section>
<h3>Linux</h3>
<pre class="prettyprint lang-bsh">
export LIBRARY_PATH=$LIBRARY_PATH:~/mydir/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/mydir/lib
</pre>
</section>
<section>
<h3>macOS</h3>
<pre class="prettyprint lang-bsh">
export LIBRARY_PATH=$LIBRARY_PATH:~/mydir/lib
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:~/mydir/lib
</pre>
</section>
</div>
Build
Example program
With the TensorFlow C library installed, create an example program with the
following source code (hello_tf.c):
<!--/ds-selector-tabs-->
End of explanation
%%bash
gcc hello_tf.c -ltensorflow -o hello_tf
./hello_tf
Explanation: Compile
Compile the example program to create an executable, then run:
End of explanation
%%bash
gcc -I/usr/local/include -L/usr/local/lib hello_tf.c -ltensorflow -o hello_tf
./hello_tf
Explanation: Success: The TensorFlow C library is configured.
If the program doesn't build, make sure that gcc can access the TensorFlow C
library. If extracted to /usr/local, explicitly pass the library location to
the compiler:
End of explanation |
14,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Differential Analysis with both GEO and RNA-Seq
Import everything from the imports notebook. This reads in all of the expression data as well as the functions needed to analyse differential expression data.
Step1: Read in matched Gene expression data.
Step2: Run a simple screen for DX probes
Here we take the matched data and run a basic screen
fc = 1 means that we have no foldchange buffer for a gene to be considered over or underexpressed in a patient
If there are ties or missing data, I omit these from the test. This can cause underpowered tests which have extreme test statistics but weak p-values. For this reason I filter all gene/probes/markers with a sample size of less than 300 patients.
Step3: Pathway and Gene Annotation Analysis
Step4: Overexpressed pathways
Step5: Underexpressed pathways
Step6: I am folling up on Fatty Acid Metabolism as opposed to biological oxidations, because it has a larger effect size, although the smaller gene-set size gives it a less extreme p-value. | Python Code:
import NotebookImport
from Imports import *
import seaborn as sns
sns.set_context('paper',font_scale=1.5)
sns.set_style('white')
Explanation: Differential Analysis with both GEO and RNA-Seq
Import everything from the imports notebook. This reads in all of the expression data as well as the functions needed to analyse differential expression data.
End of explanation
matched_rna = pd.read_hdf('/data_ssd/RNASeq_2014_07_15.h5', 'matched_tn')
rna_microarray = pd.read_hdf('/data_ssd/GEO_microarray_dx.h5', 'data')
matched_rna = rna_microarray.join(matched_rna)
Explanation: Read in matched Gene expression data.
End of explanation
dx_rna = binomial_test_screen(matched_rna, fc=1.)
dx_rna = dx_rna[dx_rna.num_dx > 300]
dx_rna.frac.hist(bins=30)
dx_rna.ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.shape
dx_rna.p.rank().ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.sort('p').head(10)
paired_bp_tn_split(matched_rna.ix['ADH1B'], codes, data_type='mRNA')
Explanation: Run a simple screen for DX probes
Here we take the matched data and run a basic screen
fc = 1 means that we have no foldchange buffer for a gene to be considered over or underexpressed in a patient
If there are ties or missing data, I omit these from the test. This can cause underpowered tests which have extreme test statistics but weak p-values. For this reason I filter all gene/probes/markers with a sample size of less than 300 patients.
End of explanation
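The screen itself lives in the Imports notebook; conceptually it boils down to a two-sided binomial test of how often each gene is higher in the tumour than in its matched normal. A rough sketch of that idea (not the actual binomial_test_screen implementation, and the counts are made up):
from scipy import stats
# e.g. a gene overexpressed in 400 of 500 usable tumour/normal pairs, null p = 0.5
p_value = stats.binom_test(400, n=500, p=0.5)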
gs2 = gene_sets.ix[dx_rna.index].fillna(0)
rr = screen_feature(dx_rna.frac, rev_kruskal, gs2.T,
align=False)
fp = (1.*gene_sets.T * dx_rna.frac).T.dropna().replace(0, np.nan).mean().order()
fp.name = 'mean frac'
Explanation: Pathway and Gene Annotation Analysis
End of explanation
rr.ix[ti(fp > .5)].join(fp).sort('p').head()
Explanation: Overexpressed pathways
End of explanation
rr.ix[ti(fp < .5)].join(fp).sort('p').head()
Explanation: Underexpressed pathways
End of explanation
def fig_1f(ax):
v = pd.concat([dx_rna.frac,
dx_rna.frac.ix[ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)],
dx_rna.frac.ix[ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0)]])
v1 = pd.concat([pd.Series('All Genes', dx_rna.frac.index),
pd.Series('Cell Cycle\nMitotic',
ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)),
pd.Series('Fatty Acid\nMetabolism',
ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0))])
v1.name = ''
v.name = 'Fraction Overexpressed'
violin_plot_pandas(v1, v, ann=None, ax=ax)
prettify_ax(ax)
return ax
#Do not import
fig, ax = subplots(1,1, figsize=(5,3))
fig_1f(ax);
Explanation: I am following up on Fatty Acid Metabolism as opposed to biological oxidations, because it has a larger effect size, although the smaller gene-set size gives it a less extreme p-value.
End of explanation |
14,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
jPCA
Churchland, Mark M., et al. "Neural population dynamics during reaching." Nature 487.7405 (2012)
Step1: The data can be loaded with pandas
Step2: It's the responses of theta-modulated thalamic neurons to hippocampal sharp-waves ripples
Step3: There are 767 neurons here with 201 time bins. The order of the data matrix is
Step4: First step is the classical PCA to reduce the dimensionality of the dataset.
Similar to the original article, I reduced it to 6 dimensions
Step5: We can thus work on the 6 first components of the PCA
Step6: We can plot the 6 components
Step7: Now we can compute $\dot{X}$ using the function written below
Step8: The function derivative is called for each component
Step9: Next step is to build the H mapping using this function
Step10: $\tilde{X}$ is the block diagonal matrix
Step11: We can put $\dot{X}$ in columns
Step12: Multiply $\tilde{X}$ by $H$
Step13: and solve $(\tilde{X}.H).k = \dot{X}$
Step14: Do $m = H.k$ to get $M_{skew}$
Step15: Construct the two vectors for projection with $M_{skew}$
Step16: and get the jpc vectors as $X_r = X.u$
Step17: We can now look at the two jpc components
Step18: We can now project the data on rX to find the swr angle
Step19: We can now represent the sharp-waves phase for all neurons as | Python Code:
import numpy as np
from sklearn.decomposition import PCA
import pandas as pd
from pylab import *
Explanation: jPCA
Churchland, Mark M., et al. "Neural population dynamics during reaching." Nature 487.7405 (2012): 51.
End of explanation
data = pd.read_hdf("swr_modth.h5")
Explanation: The data can be loaded with pandas :
End of explanation
figure()
plot(data)
xlabel("Time lag (ms)")
ylabel("Modulation (z-scored)")
show()
Explanation: It's the responses of theta-modulated thalamic neurons to hippocampal sharp-wave ripples:
End of explanation
print(data.shape)
Explanation: There are 767 neurons here with 201 time bins. The order of the data matrix is :
End of explanation
n = 6
pca = PCA(n_components = n)
new_data = pca.fit_transform(data.values.T) # data needs to be inverted here depending of how you do the PCA
Explanation: First step is the classical PCA to reduce the dimensionality of the dataset.
Similar to the original article, I reduced it to 6 dimensions
End of explanation
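A quick check that is not in the original notebook but is worth doing: how much of the variance the six components retain.
print(pca.explained_variance_ratio_.sum())   # fraction of variance kept by the 6 PCs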
X = pca.components_.transpose()
times = data.index.values
Explanation: We can thus work on the 6 first components of the PCA
End of explanation
figure()
plot(times, X)
xlabel("Time lag (ms)")
ylabel("pc")
show()
Explanation: We can plot the 6 components :
End of explanation
def derivative(x, f):
'''
Compute the derivative of a time serie
Used for jPCA
'''
from scipy.stats import linregress
fish = np.zeros(len(f))
slopes_ = []
tmpf = np.hstack((f[0],f,f[-1])) # not circular
binsize = x[1]-x[0]
tmpx = np.hstack((np.array([x[0]-binsize]),x,np.array([x[-1]+binsize])))
# plot(tmpx, tmpf, 'o')
# plot(x, f, '+')
for i in range(len(f)):
slope, intercept, r_value, p_value, std_err = linregress(tmpx[i:i+3], tmpf[i:i+3])
slopes_.append(slope)
# plot(tmpx[i:i+3], tmpx[i:i+3]*slope+intercept, '-')
return np.array(slopes_)/binsize
Explanation: Now we can compute $\dot{X}$ using the function written below :
End of explanation
dX = np.zeros_like(X)
for i in range(n):
dX[:,i] = derivative(times, X[:,i])
Explanation: The function derivative is called for each component :
End of explanation
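For reference, a vectorized alternative to the regression-based derivative: in the interior the 3-point regression slope reduces to a central difference, so (assuming uniform time bins) np.gradient should give essentially the same result, differing only in how the two edge bins are handled.
dX_alt = np.gradient(X, times, axis=0)   # not used below, shown for comparison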
def buildHMap(n, ):
'''
build the H mapping for a given n
used for the jPCA
'''
from scipy.sparse import lil_matrix
    M = np.zeros((n,n), dtype = int)  # np.int was removed in recent numpy versions
M[np.triu_indices(n,1)] = np.arange(1,int(n*(n-1)/2)+1)
M = M - M.transpose()
m = np.vstack(M.reshape(n*n))
k = np.vstack(M[np.triu_indices(n,1)]).astype('int')
H = lil_matrix( (len(m), len(k)), dtype = np.float16)
H = np.zeros( (len(m), len(k) ))
# first column
for i in k.flatten():
# positive
H[np.where(m == i)[0][0],i-1] = 1.0
# negative
H[np.where(m == -i)[0][0],i-1] = -1.0
return H
H = buildHMap(n)
Explanation: Next step is to build the H mapping using this function :
End of explanation
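A quick sanity check (not in the original notebook): H maps the n(n-1)/2 free parameters of a skew-symmetric matrix onto its n*n entries, so for n = 6 its shape should be (36, 15).
print(H.shape)   # expected (36, 15) for n = 6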
Xtilde = np.zeros( (X.shape[0]*X.shape[1], X.shape[1]*X.shape[1]) )
for i, j in zip( (np.arange(0,n**2,n) ), np.arange(0, n*X.shape[0], X.shape[0]) ):
Xtilde[j:j+X.shape[0],i:i+X.shape[1]] = X
Explanation: $\tilde{X}$ is the block diagonal matrix:
End of explanation
dXv = np.vstack(dX.transpose().reshape(X.shape[0]*X.shape[1]))
Explanation: We can put $\dot{X}$ in columns :
End of explanation
XtH = np.dot(Xtilde, H)
Explanation: Multiply $\tilde{X}$ by $H$ :
End of explanation
k, residuals, rank, s = np.linalg.lstsq(XtH, dXv, rcond = None)
Explanation: and solve $(\tilde{X}.H).k = \dot{X}$
End of explanation
m = np.dot(H, k)
Mskew = m.reshape(n,n).transpose()
Explanation: Do $m = H.k$ to get $M_{skew}$:
End of explanation
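A quick sanity check (not in the original notebook): by construction of H, the fitted matrix should be exactly skew-symmetric.
print(np.allclose(Mskew, -Mskew.T))   # expected True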
evalues, evectors = np.linalg.eig(Mskew)
index = np.argsort(np.array([np.linalg.norm(i) for i in evalues]).reshape(int(n/2),2)[:,0])
evectors = evectors.transpose().reshape(int(n/2),2,n)
u = np.vstack([np.real(evectors[index[-1]][0] + evectors[index[-1]][1]),
np.imag(evectors[index[-1]][0] - evectors[index[-1]][1])]).transpose()
Explanation: Construct the two vectors for projection with $M_{skew}$:
End of explanation
rX = np.dot(X, u)
Explanation: and get the jpc vectors as $X_r = X.u$
End of explanation
figure(figsize=(15, 5))
subplot(121)
plot(times, rX)
xlabel("Time lag (ms)")
subplot(122)
plot(rX[:,0], rX[:,1])
show()
Explanation: We can now look at the two jpc components :
End of explanation
score = np.dot(data.values.T, rX)
phi = np.mod(np.arctan2(score[:,1], score[:,0]), 2*np.pi)
Explanation: We can now project the data on rX to find the swr angle :
End of explanation
figure(figsize = (10,10))
scatter(score[:,0], score[:,1], c = phi)
scatter(np.cos(phi)*np.max(score), np.sin(phi)*np.max(score), c = phi)
show()
Explanation: We can now represent the sharp-wave phase for all neurons as:
End of explanation |
14,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Validation of MCOE
This notebook runs sanity checks on the results of the calculations of marginal cost of electricity (MCOE) that we do based on EIA 923 and EIA 860. Currently this only includes per-generator fuel costs, which also necessitates the calculation of per-generator heat rates and capacity factors. These are the same tests which are run by the mcoe validation tests using PyTest. The notebook and visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest based data validations fail for some reason.
Step1: Perform the MCOE calculation
In order to validate the output from the MCOE calculation we first have to... do that calculation. We can do it at both monthly and annual resolution. Because we are testing the overall calculation, we don't want to impose the min/max heat rate and capacitiy factor constraints -- that would artificially clean up the outputs, which is what we're trying to evaluate.
Step2: Some of the tests only really work for the monthly case, so that's the default here. Uncomment if you need annual
Step3: Validation Against Fixed Bounds
Some of the MCOE outputs have a fixed range of reasonable values, like the generator heat rates or capacity factors. These varaibles can be tested for validity against external standards directly. In general we have two kinds of tests in this section
Step4: MCOE vs Self
Step5: Natural Gas Heat Rates (2015+)
Unfortunately EIA fuel / generator data only becomes usable for natural gas as of 2015.
Step6: Coal Heat Rates
Step7: Fuel Cost per MWh
Step8: Fuel Cost per MMBTU
Step9: Gas Capacity Factors
Step10: Coal Capacity Factors | Python Code:
%load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import numpy as np
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
Explanation: Validation of MCOE
This notebook runs sanity checks on the results of the calculations of marginal cost of electricity (MCOE) that we do based on EIA 923 and EIA 860. Currently this only includes per-generator fuel costs, which also necessitates the calculation of per-generator heat rates and capacity factors. These are the same tests which are run by the mcoe validation tests using PyTest. The notebook and visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest based data validations fail for some reason.
End of explanation
pudl_out_year = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="AS")
pudl_out_month = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="MS")
Explanation: Perform the MCOE calculation
In order to validate the output from the MCOE calculation we first have to... do that calculation. We can do it at both monthly and annual resolution. Because we are testing the overall calculation, we don't want to impose the min/max heat rate and capacity factor constraints -- that would artificially clean up the outputs, which is what we're trying to evaluate.
End of explanation
%%time
#mcoe_year = pudl_out_year.mcoe(
# update=True,
# min_heat_rate=None,
# min_fuel_cost_per_mwh=None,
# min_cap_fact=None,
# max_cap_fact=None
#)
%%time
mcoe_month = pudl_out_month.mcoe(
update=True,
min_heat_rate=None,
min_fuel_cost_per_mwh=None,
min_cap_fact=None,
max_cap_fact=None
)
Explanation: Some of the tests only really work for the monthly case, so that's the default here. Uncomment if you need annual
End of explanation
# mcoe = mcoe_year
mcoe = mcoe_month
Explanation: Validation Against Fixed Bounds
Some of the MCOE outputs have a fixed range of reasonable values, like the generator heat rates or capacity factors. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
* Tails: are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
* Middle: Is the central value of the distribution where it should be?
Fields that need checking:
heat_rate_mmbtu_mwh (gas, coal)
capacity_factor (gas, coal)
fuel_cost_per_mmbtu (gas, coal)
fuel_cost_per_mwh (gas, coal)
End of explanation
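The pudl.validate helpers used in the cells that follow formalise these tail/middle tests; a hand-rolled version of the same idea, using only the mcoe dataframe and plain pandas (column names as referenced elsewhere in this notebook), looks roughly like this:
# Rough sketch of a tails/middle check on coal heat rates
coal_hr = mcoe.query("fuel_type_code_pudl=='coal'")["heat_rate_mmbtu_mwh"].dropna()
print(coal_hr.quantile([0.05, 0.50, 0.95]))   # compare against expected physical bounds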
pudl.validate.plot_vs_self(mcoe, pudl.validate.mcoe_self)
pudl.validate.plot_vs_self(mcoe, pudl.validate.mcoe_self_fuel_cost_per_mmbtu)
pudl.validate.plot_vs_self(mcoe, pudl.validate.mcoe_self_fuel_cost_per_mwh)
Explanation: MCOE vs Self
End of explanation
pudl.validate.plot_vs_bounds(mcoe, pudl.validate.mcoe_gas_heat_rate)
Explanation: Natural Gas Heat Rates (2015+)
Unfortunately EIA fuel / generator data only becomes usable for natural gas as of 2015.
End of explanation
pudl.validate.plot_vs_bounds(mcoe, pudl.validate.mcoe_coal_heat_rate)
Explanation: Coal Heat Rates
End of explanation
pudl.validate.plot_vs_bounds(mcoe, pudl.validate.mcoe_fuel_cost_per_mwh)
Explanation: Fuel Cost per MWh
End of explanation
pudl.validate.plot_vs_bounds(mcoe, pudl.validate.mcoe_fuel_cost_per_mmbtu)
Explanation: Fuel Cost per MMBTU
End of explanation
mcoe_gas = mcoe.query("fuel_type_code_pudl=='gas'")
nonzero_cf = mcoe_gas.query("capacity_factor!=0.0")
idle_gas_capacity = 1.0 - (nonzero_cf.capacity_mw.sum() / mcoe_gas.capacity_mw.sum())
logger.info(f"Idle gas capacity: {idle_gas_capacity:.2%}")
pudl.validate.plot_vs_bounds(mcoe, pudl.validate.mcoe_gas_capacity_factor)
Explanation: Gas Capacity Factors
End of explanation
mcoe_coal = mcoe.query("fuel_type_code_pudl=='coal'")
nonzero_cf = mcoe_coal.query("capacity_factor!=0.0")
idle_coal_capacity = 1.0 - (nonzero_cf.capacity_mw.sum() / mcoe_coal.capacity_mw.sum())
logger.info(f"Idle coal capacity: {idle_coal_capacity:.2%}")
pudl.validate.plot_vs_bounds(mcoe[mcoe.capacity_factor!=0.0], pudl.validate.mcoe_coal_capacity_factor)
Explanation: Coal Capacity Factors
End of explanation |
14,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="font-size
Step2: Важно
- Не забывайте делать GradCheck, чтобы проверить численно что производные правильные, обычно с первого раза не выходит никогда, пример тут https
Step4: Optimizer is implemented for you.
Step5: Toy example
Use this example to debug your code, start with logistic regression and then test other layers. You do not need to change anything here. This code is provided for you to test the layers. Also it is easy to use this code in MNIST task.
Step6: Define a logistic regression for debugging.
Step7: Start with batch_size = 1000 to make sure every step lowers the loss, then try stochastic version.
Step8: Train
Basic training loop. Examine it.
Step9: Digit classification
We are using MNIST as our dataset. Lets start with cool visualization. The most beautiful demo is the second one, if you are not familiar with convolutions you can return to it in several lectures.
Step10: One-hot encode the labels first.
Step11: Compare ReLU, ELU activation functions.
You would better pick the best optimizer params for each of them, but it is overkill for now. Use an architecture of your choice for the comparison.
Step12: Finally, use all your knowledge to build a super cool model on this dataset, do not forget to split dataset into train and validation. Use dropout to prevent overfitting, play with learning rate decay. You can use data augmentation such as rotations, translations to boost your score. Use your knowledge and imagination to train a model.
Step13: Print here your accuracy. It should be around 90%.
Step14: Следствие
Step15: Some time ago NNs were a lot poorer and people were struggling to learn deep models. To train a classification net people were training autoencoder first (to train autoencoder people were pretraining single layers with RBM), then substituting the decoder part with classification layer (yeah, they were struggling with training autoencoders a lot, and complex techniques were used at that dark times). We are going to this now, fast and easy.
Step16: What do you think, does it make sense to build real-world classifiers this way ? Did it work better for you than a straightforward one? Looks like it was not the same ~8 years ago, what has changed beside computational power?
Run PCA with 30 components on the train set, plot original image, autoencoder and PCA reconstructions side by side for 10 samples from validation set.
Probably you need to use the following snippet to make aoutpencoder examples look comparible. | Python Code:
%matplotlib inline
from time import time, sleep
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
Explanation: <span style="font-size: 14pt">MIPT, Advanced ML, Spring 2018</span>
<h1 align="center">Organization Info</h1>
The deadline is 20 April 2018, 23:59, for all groups.
As your solution, send in the notebook with detailed comments.
Formatting the homework:
- Send the completed assignment to [email protected]
- Use a subject line of the form ML2018_fall_<group_number>_<surname>, e.g. ML2018_fall_495_ivanov
- Save the completed homework as <surname>_<group>_task<number>.ipnb, e.g. ivanov_401_task6.ipnb
Questions:
- Send questions to [email protected]
- Use a subject line of the form ML2018_fall Question <question summary>
PS1: Automatic filters are used, and we simply will not find your homework if you label it carelessly.
PS2: Missing the deadline reduces the maximum score for the assignment according to the formula given in the first seminar.
PS3: You may modify the code provided below if you consider it necessary.
Home work 1: Basic Artificial Neural Networks
Credit https://github.com/yandexdataschool/YSDA_deeplearning17, https://github.com/DmitryUlyanov
Why is all of this needed?! Why understand how neural networks work inside when there are already plenty of libraries?
- From time to time your networks will not train, the weights turn into NaNs, everything diverges and falls apart -- this can be fixed if you understand backprop.
- If you do not understand how the optimizers work, you will not be able to set the hyperparameters correctly :) and, again, nothing will train.
- https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
The goal of this homework is simple, yet an actual implementation may take some time :). We are going to write an Artificial Neural Network (almost) from scratch. The software design was heavily inspired by Torch, which is the most convenient neural network environment when the work involves defining new layers.
This homework requires sending multiple files, please do not forget to include all the files when sending them to the TA. The list of files:
- This notebook
- hw6_Modules.ipynb
If you want to read more about backprop, these links can be helpful:
- http://udacity.com/course/deep-learning--ud730
- http://cs231n.stanford.edu/2016/syllabus.html
- http://www.deeplearningbook.org
<h1 align="center">Check Questions</h1>
Question 1: How do neural networks differ from linear models, and how are they similar?
They are similar in that a neural network contains, among other things, a composition of several linear models; they differ in that there is a non-linear component, so that the end result is not a 100% linear model.
Question 2: What are the drawbacks of fully connected neural networks, and what is the motivation for using convolutional ones?
First, overfitting. Second, fully connected nets look at every pixel separately, while convolutional nets pool several neighbouring pixels into one, which captures local relationships: two pixels far apart say little about the image, whereas two neighbouring pixels can already say a lot.
Question 3: Which layers are used in modern neural networks? Describe how each layer works and your intuition for why it is needed.
The intuition almost everywhere is that this is black magic you simply have to live with.
- InputLayer -- just the input; it fixes the size and holds the image.
- Conv2DLayer -- convolutions: several filters of fixed size are applied, producing the filtered feature maps.
- MaxPool2DLayer -- every (contiguous) k x n sub-matrix is replaced by its maximum element, giving a smaller matrix of maxima.
- DropoutLayer -- during the forward pass it kills each neuron with a given probability. This is needed so the network does not overfit: the trick is that it stops the network from piling weight onto individual features, and as a result the network overfits less.
- DenseLayer -- the linear transform Wx + b, followed by a non-linearity, after which the whole thing repeats.
Question 4: Can a neural network solve a regression problem, and which component of the network from lecture 1 would need to be replaced for that?
Yes: throw away the non-linear pieces and you get a linear model; all that is left is to build it as a neural network.
Question 5: Why do conventional optimization methods work poorly with neural networks? Which ones work well, and why do they work well?
Because there are very many parameters, while conventional optimization methods multiply and invert all sorts of matrices.
Question 6: What is backprop for, and how is it better/worse than computing the gradients without it? Why is backprop computed efficiently on a GPU?
Backprop is a method for computing gradients in which we store some intermediate information in the neurons and then reconstruct the gradients from it, moving from right to left. This turns out to be more accurate and efficient, since the functions can be quite complicated and computing the derivatives by hand is unrealistic, so numerical methods have to be used.
Question 7: Why is cross-validation not used for neural networks, and what is used instead? Could it be used?
It would be unwise because, first, neural networks are usually trained on a huge amount of data and, second, in several stages, so the overhead would be enormous. And third, there is little point in it if we know how to use dropout.
Question 8: A small quiz that will help you understand convolutions: https://www.youtube.com/watch?v=DDRa5ASNdq4
Plagiarism policy. You may discuss the solution with your groupmates -- it is more interesting and more fun that way :)
Do not share code with each other; in that case you will not learn anything ("the mice cried and pricked themselves, but kept on eating the cactus").
Now formally. The line between cheating and a classmate's help is sometimes barely distinguishable. We sincerely hope that, whatever difficulties arise, you can turn to the seminar instructors and, with their hints, complete the assignment on your own. In confirmed cases of cheating (identical code, identical mistakes), the score for the assignment will be zeroed for all participants in the incident.
End of explanation
--------------------------------------
-- Tech note
--------------------------------------
Inspired by torch I would use
np.multiply, np.add, np.divide, np.subtract instead of *,+,/,-
for better memory handling
Suppose you allocated a variable
a = np.zeros(...)
So, instead of
a = b + c # will be reallocated, GC needed to free
I would go for:
np.add(b,c,out = a) # puts result in `a`
But it is completely up to you.
%run hw6_Modules.ipynb
Explanation: Important
- Do not forget to run GradCheck to verify numerically that the derivatives are correct; it almost never works on the first try. An example is here: https://goo.gl/pzvzfe
- Your code must not contain loops; all computations must be vectorized inside numpy.
Framework
Implement everything in Modules.ipynb. Read all the comments thoughtfully to ease the pain. Please try not to change the prototypes.
Do not forget that each module should return AND store output and gradInput.
The typical assumption is that module.backward is always executed after module.forward,
so output is stored, this would be useful for SoftMax.
End of explanation
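A minimal sketch of the numerical check that GradCheck performs -- central differences on a scalar-valued function f at point x (this is the idea only, not the exact helper from the course repo):
def numeric_grad(f, x, eps=1e-6):
    # estimate df/dx element-wise with central differences
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        old = x[i]
        x[i] = old + eps; f_plus = f(x)
        x[i] = old - eps; f_minus = f(x)
        x[i] = old
        grad[i] = (f_plus - f_minus) / (2. * eps)
        it.iternext()
    return grad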
def sgd_momentum(x, dx, config, state):
This is a very ugly implementation of sgd with momentum
just to show an example how to store old grad in state.
config:
- momentum
- learning_rate
state:
- old_grad
# x and dx have complex structure, old dx will be stored in a simpler one
state.setdefault('old_grad', {})
i = 0
for cur_layer_x, cur_layer_dx in zip(x,dx):
for cur_x, cur_dx in zip(cur_layer_x,cur_layer_dx):
cur_old_grad = state['old_grad'].setdefault(i, np.zeros_like(cur_dx))
np.add(config['momentum'] * cur_old_grad, config['learning_rate'] * cur_dx, out = cur_old_grad)
cur_x -= cur_old_grad
i += 1
Explanation: Optimizer is implemented for you.
End of explanation
# Generate some data
N = 500
X1 = np.random.randn(N,2) + np.array([2,2])
X2 = np.random.randn(N,2) + np.array([-2,-2])
Y = np.concatenate([np.ones(N),np.zeros(N)])[:,None]
Y = np.hstack([Y, 1-Y])
X = np.vstack([X1,X2])
plt.scatter(X[:,0],X[:,1], c = Y[:,0], edgecolors= 'none')
Explanation: Toy example
Use this example to debug your code, start with logistic regression and then test other layers. You do not need to change anything here. This code is provided for you to test the layers. Also it is easy to use this code in MNIST task.
End of explanation
# net = Sequential()
# net.add(Linear(2, 2))
# net.add(SoftMax())
criterion = ClassNLLCriterion()
# print(net)
# Test something like that then
net = Sequential()
net.add(Linear(2, 4))
net.add(ReLU())
net.add(Linear(4, 2))
net.add(SoftMax())
Explanation: Define a logistic regression for debugging.
End of explanation
# Optimizer params
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
# Looping params
n_epoch = 20
batch_size = 128
# batch generator
def get_batches(dataset, batch_size):
X, Y = dataset
n_samples = X.shape[0]
# Shuffle at the start of epoch
indices = np.arange(n_samples)
np.random.shuffle(indices)
for start in range(0, n_samples, batch_size):
end = min(start + batch_size, n_samples)
batch_idx = indices[start:end]
yield X[batch_idx], Y[batch_idx]
Explanation: Start with batch_size = 1000 to make sure every step lowers the loss, then try stochastic version.
End of explanation
loss_history = []
for i in range(n_epoch):
for x_batch, y_batch in get_batches((X, Y), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
loss_history.append(loss)
# Visualize
display.clear_output(wait=True)
plt.figure(figsize=(8, 6))
plt.title("Training loss")
plt.xlabel("#iteration")
plt.ylabel("loss")
plt.plot(loss_history, 'b')
plt.show()
print('Current loss: %f' % loss)
Explanation: Train
Basic training loop. Examine it.
End of explanation
import os
from sklearn.datasets import fetch_mldata
# Fetch MNIST dataset and create a local copy.
if os.path.exists('mnist.npz'):
with np.load('mnist.npz', 'r') as data:
X = data['X']
y = data['y']
else:
mnist = fetch_mldata("mnist-original")
X, y = mnist.data / 255.0, mnist.target
np.savez('mnist.npz', X=X, y=y)
Explanation: Digit classification
We are using MNIST as our dataset. Let's start with a cool visualization. The most beautiful demo is the second one; if you are not familiar with convolutions you can return to it in a few lectures.
End of explanation
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
Y = encoder.fit_transform(y.reshape(-1, 1))
Explanation: One-hot encode the labels first.
End of explanation
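A quick shape check (assuming the full 70,000-image MNIST pull above): one column per digit class.
print(Y.shape)   # expected (70000, 10)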
from sklearn.model_selection import train_test_split
train_sample, test_sample, train_sample_answers, test_sample_answers = train_test_split(X, Y, test_size=0.2, random_state=42)
from sklearn.metrics import accuracy_score
plt.figure(figsize=(8, 6))
plt.title("Training loss")
plt.xlabel("#iteration")
plt.ylabel("loss")
for Activation in [ReLU, LeakyReLU]:
net = Sequential()
net.add(Linear(X.shape[1], 42))
net.add(Activation())
net.add(Linear(42, Y.shape[1]))
net.add(SoftMax())
loss_history = []
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
for i in range(n_epoch):
for x_batch, y_batch in get_batches((train_sample, train_sample_answers), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
loss_history.append(loss)
test_sample_answers_true = test_sample_answers.argmax(axis=1)
test_sample_answers_predicted = net.forward(test_sample).argmax(axis=1)
plt.plot(loss_history, label=Activation())
print('Accuracy using {} = {}'.format(Activation(), accuracy_score(test_sample_answers_true, test_sample_answers_predicted)))
plt.legend()
plt.show()
Explanation: Compare ReLU, ELU activation functions.
You would do better to pick the best optimizer params for each of them, but that is overkill for now. Use an architecture of your choice for the comparison.
End of explanation
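For reference, the ELU nonlinearity mentioned above (the accompanying code compares ReLU with LeakyReLU rather than ELU); this is the standard definition with alpha = 1, in case you want to add an ELU module to hw6_Modules.ipynb:
def elu_forward(x, alpha=1.0):
    # ELU: identity for x > 0, alpha*(exp(x)-1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))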
net = Sequential()
net.add(Linear(X.shape[1], 42))
net.add(Dropout())
net.add(LeakyReLU())
net.add(Linear(42, Y.shape[1]))
net.add(SoftMax())
optimizer_config = {'learning_rate' : 1e-1, 'momentum': 0.9}
optimizer_state = {}
for i in range(n_epoch):
for x_batch, y_batch in get_batches((train_sample, train_sample_answers), batch_size):
net.zeroGradParameters()
# Forward
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)
# Backward
dp = criterion.backward(predictions, y_batch)
net.backward(x_batch, dp)
# Update weights
sgd_momentum(net.getParameters(),
net.getGradParameters(),
optimizer_config,
optimizer_state)
Explanation: Finally, use all your knowledge to build a super cool model on this dataset, do not forget to split dataset into train and validation. Use dropout to prevent overfitting, play with learning rate decay. You can use data augmentation such as rotations, translations to boost your score. Use your knowledge and imagination to train a model.
End of explanation
test_sample_answers_true = test_sample_answers.argmax(axis=1)
test_sample_answers_predicted = net.forward(test_sample).argmax(axis=1)
print('Accuracy = {}'.format(accuracy_score(test_sample_answers_true, test_sample_answers_predicted)))
Explanation: Print here your accuracy. It should be around 90%.
End of explanation
# Your code goes here. ################################################
Explanation: Takeaway: if the network is this simple, it is worth checking whether dropout is even needed -- here it turns out to be superfluous.
Bonus Part: Autoencoder
This part is OPTIONAL, you may not do it. It will not be scored, but it is easy and interesting.
Now we are going to build a cool model, called an autoencoder. The aim is simple: encode the data into a lower-dimensional representation. Why? Well, if we can decode this representation back to the original data with "small" reconstruction loss, then we can store only the compressed representation, saving memory. But the most important thing is -- we can reuse the trained autoencoder for classification.
<img src="img/autoencoder.png">
Picture from this site.
Now implement an autoencoder:
Build it such that the dimensionality inside the autoencoder changes like this:
$$784 \text{ (data)} -> 512 -> 256 -> 128 -> 30 -> 128 -> 256 -> 512 -> 784$$
Use MSECriterion to score the reconstruction.
You may train it for 9 epochs with batch size = 256, initial lr = 0.1, dropping by a factor of 2 every 3 epochs. The reconstruction loss should be about 6.0 and the visual quality already decent.
Do not spend time on changing architecture, they are more or less the same.
End of explanation
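One possible sketch of the 784-512-256-128-30-128-256-512-784 stack, built from the modules in hw6_Modules.ipynb (the layer sizes come from the text above; the choice of nonlinearity and keeping the final reconstruction layer linear are assumptions, not requirements):
autoenc = Sequential()
sizes = [784, 512, 256, 128, 30, 128, 256, 512, 784]
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    autoenc.add(Linear(n_in, n_out))
    if n_out != 784:              # leave the final reconstruction layer linear
        autoenc.add(ReLU())
mse = MSECriterion()              # reconstruction loss, as suggested in the text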
# Extract inner representation for train and validation,
# you should get (n_samples, 30) matrices
# Your code goes here. ################################################
# Now build a logistic regression or small classification net
cnet = Sequential()
cnet.add(Linear(30, 2))
cnet.add(SoftMax())
# Learn the weights
# Your code goes here. ################################################
# Now chop off decoder part
# (you may need to implement `remove` method for Sequential container)
# Your code goes here. ################################################
# And add learned layers ontop.
autoenc.add(cnet[0])
autoenc.add(cnet[1])
# Now optimize whole model
# Your code goes here. ################################################
Explanation: Some time ago NNs were a lot poorer and people struggled to learn deep models. To train a classification net, people would train an autoencoder first (and to train the autoencoder they would pretrain single layers with RBMs), then substitute the decoder part with a classification layer (yes, training autoencoders was a real struggle, and complex techniques were used in those dark times). We are going to do this now, fast and easy.
End of explanation
# np.clip(prediction,0,1)
#
# Your code goes here. ################################################
Explanation: What do you think, does it make sense to build real-world classifiers this way ? Did it work better for you than a straightforward one? Looks like it was not the same ~8 years ago, what has changed beside computational power?
Run PCA with 30 components on the train set, plot original image, autoencoder and PCA reconstructions side by side for 10 samples from validation set.
You probably need to use the following snippet to make the autoencoder examples look comparable.
End of explanation |
14,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="imgs/header.png">
Visualization techniques for scalar fields in VTK + Python
Goals
Inspect VTK Objects via the qtconsole for Jupyter (using the magic %qtconsole)
Including a new filter, mapper, and actor to visualize the complete/partial mesh
Computing new data arrays using the vtkCalculator()
Performing data transformations
Step1: Challenge 2
Step2: Challenge 3
Step3: Challenge 4
Step4: An alternative to define colormaps
Step5: Renderer, render window, and interactor | Python Code:
import vtk
#help(vtk.vtkRectilinearGridReader())
rectGridReader = vtk.vtkRectilinearGridReader()
rectGridReader.SetFileName("data/jet4_0.500.vtk")
# do not forget to call "Update()" at the end of the reader
rectGridReader.Update()
rectGridOutline = vtk.vtkRectilinearGridOutlineFilter()
rectGridOutline.SetInputData(rectGridReader.GetOutput())
# New vtkRectilinearGridGeometryFilter() goes here:
#
#
#
#
rectGridOutlineMapper = vtk.vtkPolyDataMapper()
rectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())
rectGridGeomMapper = vtk.vtkPolyDataMapper()
#
outlineActor = vtk.vtkActor()
outlineActor.SetMapper(rectGridOutlineMapper)
outlineActor.GetProperty().SetColor(0, 0, 0)
gridGeomActor = vtk.vtkActor()
gridGeomActor.SetMapper(rectGridGeomMapper)
# Find out how to visualize this as a wireframe
# Play with the options you get for setting up actor properties (color, opacity, etc.)
Explanation: <img src="imgs/header.png">
Visualization techniques for scalar fields in VTK + Python
Goals
Inspect VTK Objects via the qtconsole for Jupyter (using the magic %qtconsole)
Including a new filter, mapper, and actor to visualize the complete/partial mesh
Computing new data arrays using the vtkCalculator()
Performing data transformations: from rectilinear grid to unstructured grid and image data
Data filtering
Visualizing scalar fields using points, surfaces, isosurfaces, and volume rendering
Basics of transfer functions
Challenge 1: Adding a new Filter+Mapper+Actor to visualize the grid
Try to find out how to visualize the mesh structure (grid). Take a look at RectilinearGrid.py example from the VTK wiki.
End of explanation
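One possible way to tackle Challenge 1 (a sketch, not the only solution): feed the reader's output into a vtkRectilinearGridGeometryFilter, connect it to the rectGridGeomMapper created above, and render the result as a wireframe. The extent values below are an assumption -- adjust them to the dimensions of your grid.
rectGridGeom = vtk.vtkRectilinearGridGeometryFilter()
rectGridGeom.SetInputData(rectGridReader.GetOutput())
rectGridGeom.SetExtent(0, 128, 0, 128, 0, 0)     # extract one plane of the grid (assumed index ranges)
rectGridGeomMapper.SetInputConnection(rectGridGeom.GetOutputPort())
gridGeomActor.GetProperty().SetRepresentationToWireframe()
gridGeomActor.GetProperty().SetColor(0.3, 0.3, 0.3)
gridGeomActor.GetProperty().SetOpacity(0.5)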
magnitudeCalcFilter = vtk.vtkArrayCalculator()
magnitudeCalcFilter.SetInputConnection(rectGridReader.GetOutputPort())
magnitudeCalcFilter.AddVectorArrayName('vectors')
# Set up here the array that is going to be used for the computation ('vectors')
magnitudeCalcFilter.SetResultArrayName('magnitude')
magnitudeCalcFilter.SetFunction("mag(vectors)")
# Set up here the function that calculates the magnitude of a vector
magnitudeCalcFilter.Update()
#Inspect the output of the calculator using the IPython console to verify the result
Explanation: Challenge 2: Using the vtkCalculator to compute the vector magnitude
As you should have noticed, our data set has only one point data array, named vectors. We now need to use this array to calculate the magnitude of the vectors at each point of the grid. We will do this by using the vtk.vtkArrayCalculator().
End of explanation
#Extract the data from the result of the vtkCalculator
points = vtk.vtkPoints()
grid = magnitudeCalcFilter.GetOutput()
grid.GetPoints(points)
scalars = grid.GetPointData().GetArray('magnitude')
#Create an unstructured grid that will contain the points and scalars data
ugrid = vtk.vtkUnstructuredGrid()
ugrid.SetPoints(points)
ugrid.GetPointData().SetScalars(scalars)
#Populate the cells in the unstructured grid using the output of the vtkCalculator
for i in range (0, grid.GetNumberOfCells()):
cell = grid.GetCell(i)
ugrid.InsertNextCell(cell.GetCellType(), cell.GetPointIds())
#There are too many points, let's filter the points
subset = vtk.vtkMaskPoints()
subset.SetOnRatio(50)
subset.RandomModeOn()
subset.SetInputData(ugrid)
#Make a vtkPolyData with a vertex on each point.
pointsGlyph = vtk.vtkVertexGlyphFilter()
pointsGlyph.SetInputConnection(subset.GetOutputPort())
#pointsGlyph.SetInputData(ugrid)
pointsGlyph.Update()
pointsMapper = vtk.vtkPolyDataMapper()
pointsMapper.SetInputConnection(pointsGlyph.GetOutputPort())
pointsMapper.SetScalarModeToUsePointData()
pointsActor = vtk.vtkActor()
pointsActor.SetMapper(pointsMapper)
Explanation: Challenge 3: Visualize the data set as colored points based on the "magnitude" value
Take a look at the subset object. What happens when you change the SetOnRatio parameter? Try switching between the different RandomModeType options. What does the SetScalarModeToUsePointData() function do?
End of explanation
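A small optional addition (sketch): make the color mapping explicit by giving the mapper the scalar range of the magnitude array, so the colors span the actual data values instead of the default range.
magnitudeRange = ugrid.GetPointData().GetScalars().GetRange()
pointsMapper.SetScalarRange(magnitudeRange[0], magnitudeRange[1])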
scalarRange = ugrid.GetPointData().GetScalars().GetRange()
print(scalarRange)
isoFilter = vtk.vtkContourFilter()
isoFilter.SetInputData(ugrid)
isoFilter.GenerateValues(10, scalarRange)
isoMapper = vtk.vtkPolyDataMapper()
isoMapper.SetInputConnection(isoFilter.GetOutputPort())
isoActor = vtk.vtkActor()
isoActor.SetMapper(isoMapper)
isoActor.GetProperty().SetOpacity(0.5)
Explanation: Challenge 4: Visualize the data set as isosurfaces based on the "magnitude" value
Go to the documentation of vtkContourFilter and explain what GenerateValues() does. Inspect the scalarRange of the magnitude array. What happens when you change these values in the function?
End of explanation
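If you prefer to pick isovalues by hand instead of letting GenerateValues() space them evenly over a range, a sketch like the following selects a single isosurface at the midpoint of the magnitude range:
isoFilter.SetNumberOfContours(1)
isoFilter.SetValue(0, 0.5 * (scalarRange[0] + scalarRange[1]))   # index 0, chosen isovalue
isoFilter.Update()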
subset = vtk.vtkMaskPoints()
subset.SetOnRatio(10)
subset.RandomModeOn()
subset.SetInputConnection(rectGridReader.GetOutputPort())
#vtk.vtkColorTransferFunction()
#vtk.vtkLookupTable()
lut = vtk.vtkLookupTable()
lut.SetNumberOfColors(256)
lut.SetHueRange(0.667, 0.0)
lut.SetVectorModeToMagnitude()
lut.Build()
hh = vtk.vtkHedgeHog()
hh.SetInputConnection(subset.GetOutputPort())
hh.SetScaleFactor(0.001)
hhm = vtk.vtkPolyDataMapper()
hhm.SetInputConnection(hh.GetOutputPort())
hhm.SetLookupTable(lut)
hhm.SetScalarVisibility(True)
hhm.SetScalarModeToUsePointFieldData()
hhm.SelectColorArray('vectors')
hhm.SetScalarRange((rectGridReader.GetOutput().GetPointData().GetVectors().GetRange(-1)))
hha = vtk.vtkActor()
hha.SetMapper(hhm)
Explanation: An alternative to define colormaps
End of explanation
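The cell above uses a vtkLookupTable; the vtkColorTransferFunction mentioned in its comments is another option. A sketch follows -- the blue-to-red endpoints are an arbitrary choice:
vectorRange = rectGridReader.GetOutput().GetPointData().GetVectors().GetRange(-1)  # magnitude range
ctf = vtk.vtkColorTransferFunction()
ctf.SetVectorModeToMagnitude()
ctf.AddRGBPoint(vectorRange[0], 0.0, 0.0, 1.0)   # low magnitudes -> blue
ctf.AddRGBPoint(vectorRange[1], 1.0, 0.0, 0.0)   # high magnitudes -> red
hhm.SetLookupTable(ctf)                          # replaces the vtkLookupTable set above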
#Option 1: Default vtk render window
renderer = vtk.vtkRenderer()
renderer.SetBackground(0.5, 0.5, 0.5)
renderer.AddActor(outlineActor)
renderer.ResetCamera()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(500, 500)
renderWindow.Render()
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renderWindow)
iren.Start()
Explanation: Renderer, render window, and interactor
End of explanation |
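The renderer above only shows the outline actor. To inspect the other pipelines built in this notebook, add their actors as well before calling renderWindow.Render() -- include whichever ones you want to see:
renderer.AddActor(gridGeomActor)   # grid wireframe (Challenge 1)
renderer.AddActor(pointsActor)     # colored point glyphs (Challenge 3)
renderer.AddActor(isoActor)        # isosurfaces (Challenge 4)
renderer.AddActor(hha)             # hedgehog glyphs colored by the lookup table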
14,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization of Non-Differentiable Functions Using Differential Evolution
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: For illustration, we optimize Griewank's function in 2 dimensions. The function is given by
\begin{equation}
f(\boldsymbol{x}) = \frac{x_1^2+x_2^2}{4000} - \cos(x_1)\cos\left(\frac{x_2}{\sqrt{2}}\right) + 1
\end{equation}
Note that Griewank's function is actually differentiable; however, the method works with any function, which need not necessarily be differentiable.
Step2: Plot the function as a heat map.
Step3: Helper function to generate a random initial population of $N_P$ elements in $D$ dimensions. The elements of the population are randomly distributed in the interval $[x_{\min},x_{\max}]$ (in every dimension).
Step4: Carry out differential evolution using a scheme similar to (not identical to, but slightly modified from) the scheme DE1 described in [1]
[1] R. Storn and K. Price, "Differential Evolution - A simple and efficient adaptive scheme for global optimization
over continuous spaces", Technical Report TR-95-012, March 1995
Step5: Generate animation. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Optimization of Non-Differentiable Functions Using Differential Evolution
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* Use of differential evolution to optimize Griewank's function
End of explanation
# Griewank's function
def fun(x,y):
value = (x**2 + y**2)/4000.0 - np.cos(x)*np.cos(y/np.sqrt(2))+1
return value
# vector-version of the function
vfun = np.vectorize(fun)
Explanation: For illustration, we optimize Griewank's function in 2 dimensions. The function is given by
\begin{equation}
f(\boldsymbol{x}) = \frac{x_1^2+x_2^2}{4000} - \cos(x_1)\cos\left(\frac{x_2}{\sqrt{2}}\right) + 1
\end{equation}
Note that Griewank's function is actually differentiable; however, the method works with any function, which need not necessarily be differentiable.
End of explanation
# plot map of Griewanks function
x = np.arange(-20.0, 20.0, 0.1)
y = np.arange(-20.0, 20.0, 0.1)
X, Y = np.meshgrid(x, y)
fZ = vfun(X,Y)
plt.figure(1,figsize=(10,9))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.axis('scaled')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
Explanation: Plot the function as a heat map.
End of explanation
def initial_population(D, NP, xmin, xmax):
v = np.random.rand(NP, D)*(xmax - xmin) + xmin
return v
Explanation: Helper function to generate a random initial population of $N_P$ elements in $D$ dimensions. The elements of the population are randomly distributed in the interval $[x_{\min},x_{\max}]$ (in every dimension).
End of explanation
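A quick usage example of the helper:
example_population = initial_population(D=2, NP=30, xmin=-20, xmax=20)
print(example_population.shape)   # (30, 2): 30 candidate points in 2 dimensions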
#dimension
D = 2
# population
NP = 15*D
# twiddling parameter
F = 0.8
# cross-over probability
CR = 0.3
# maximum 1000 iterations
max_iter = 1000
# generate initial population
population = initial_population(D, NP, -20, 20)[:]
# compute initial cost
cost = vfun(population[:,0], population[:,1])
best_index = np.argmin(cost)
best_cost = cost[best_index]
iteration = 0
# keep track of population
save_population = []
while iteration < max_iter:
# loop over every element from the population
for k in range(NP):
# get 4 random elements
rp = np.random.permutation(NP)[0:4]
# remove ourselves from the list
rp = [j for j in rp if j != k]
# generate new candidate vector
v = population[rp[0],:] + F*( population[rp[1],:] - population[rp[2],:] )
# take vector from population
u = np.array(population[k,:])
# cross-over each coordinate with probability CR with entry from candidate vector v
idx = np.random.rand(D) < CR
# cross-over
u[idx] = v[idx]
new_cost = fun(u[0], u[1])
if new_cost < cost[k]:
# better cost? keep!
cost[k] = new_cost
population[k,:] = u
if new_cost < best_cost:
best_cost = new_cost
best_index = k
save_population.append(np.array(population[:]))
iteration += 1
if iteration % 100 == 0:
print('After iteration %d, best cost %1.4f (obtained for (%1.2f,%1.2f))' % (iteration, best_cost, population[best_index,0], population[best_index,1]))
Explanation: Carry out differential evolution using a scheme similar to (not identical to, but slightly modified from) the scheme DE1 described in [1]:
[1] R. Storn and K. Price, "Differential Evolution - A simple and efficient adaptive scheme for global optimization
over continuous spaces", Technical Report TR-95-012, March 1995
End of explanation
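As an optional sanity check (not part of the scheme above), SciPy ships its own implementation of differential evolution; it should also drive Griewank's function close to its global minimum of 0 at the origin.
from scipy.optimize import differential_evolution

result = differential_evolution(lambda v: fun(v[0], v[1]), bounds=[(-20, 20), (-20, 20)])
print('SciPy DE: f(%.3f, %.3f) = %.4f' % (result.x[0], result.x[1], result.fun))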
plt.figure(1,figsize=(9,9))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
index = 180
cost = vfun(save_population[index][:,0], save_population[index][:,1])
best_index = np.argmin(cost)
plt.scatter(save_population[index][:,0], save_population[index][:,1], c='w')
plt.scatter(save_population[index][best_index,0], save_population[index][best_index,1], c='r')
plt.xlim((-20,20))
plt.ylim((-20,20))
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.savefig('DE_Griewangk.pdf',bbox_inches='tight')
%matplotlib notebook
# Generate animation
from matplotlib import animation, rc
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
font = {'size' : 18}
plt.rc('font', **font)
fig, ax = plt.subplots(1, figsize=(10,10))
ax.set_xlim(( -20, 20))
ax.set_ylim(( -20, 20))
ax.axis('scaled')
written = False
def animate(i):
ax.clear()
ax.contourf(X,Y,fZ,levels=20)
cost = vfun(save_population[i][:,0], save_population[i][:,1])
best_index = np.argmin(cost)
ax.scatter(save_population[i][:,0], save_population[i][:,1], c='w')
ax.scatter(save_population[i][best_index,0], save_population[i][best_index,1], c='r')
ax.set_xlabel(r'$x_1$',fontsize=18)
ax.set_ylabel(r'$x_2$',fontsize=18)
ax.set_xlim(( -20, 20))
ax.set_ylim(( -20, 20))
anim = animation.FuncAnimation(fig, animate, frames=300, interval=80, blit=False)
fig.show()
anim.save('differential_evolution_Griewank.gif', writer=PillowWriter(fps=7))
Explanation: Generate animation.
End of explanation |
14,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstrate impact of whitening on source estimates
This example demonstrates the relationship between the noise covariance
estimate and the MNE / dSPM source amplitudes. It computes source estimates for
the SPM faces data and compares proper regularization with insufficient
regularization based on the methods described in [1]. The example demonstrates
that improper regularization can lead to overestimation of source amplitudes.
This example makes use of the previous, non-optimized code path that was used
before implementing the suggestions presented in [1]. Please do not copy the
patterns presented here for your own analysis; this example is purely
illustrative.
.. note
Step1: Get data
Step2: Estimate covariances
Step4: Show the resulting source estimates | Python Code:
# Author: Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import os
import os.path as op
import numpy as np
from scipy.misc import imread
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import spm_face
from mne.minimum_norm import apply_inverse, make_inverse_operator
from mne.cov import compute_covariance
print(__doc__)
Explanation: Demonstrate impact of whitening on source estimates
This example demonstrates the relationship between the noise covariance
estimate and the MNE / dSPM source amplitudes. It computes source estimates for
the SPM faces data and compares proper regularization with insufficient
regularization based on the methods described in [1]. The example demonstrates
that improper regularization can lead to overestimation of source amplitudes.
This example makes use of the previous, non-optimized code path that was used
before implementing the suggestions presented in [1]. Please do not copy the
patterns presented here for your own analysis; this example is purely
illustrative.
.. note:: This example does quite a bit of processing, so even on a
fast machine it can take a couple of minutes to complete.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
End of explanation
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1) # Take first run
# To save time and memory for this demo, we'll just use the first
# 2.5 minutes (all we need to get 30 total events) and heavily
# resample 480->60 Hz (usually you wouldn't do either of these!)
raw = raw.crop(0, 150.).load_data().resample(60, npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, None, method='iir', n_jobs=1)
events = mne.find_events(raw, stim_channel='UPPT001')
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.5
baseline = None # no baseline as high-pass is applied
reject = dict(mag=3e-12)
# Make source space
trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
src = mne.setup_source_space('spm', fname=None, spacing='oct6',
subjects_dir=subjects_dir, add_dist=False)
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(raw.info, trans, src, bem)
forward = mne.convert_forward_solution(forward, surf_ori=True)
del src
# inverse parameters
conditions = 'faces', 'scrambled'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
clim = dict(kind='value', lims=[0, 2.5, 5])
Explanation: Get data
End of explanation
samples_epochs = 5, 15,
method = 'empirical', 'shrunk'
colors = 'steelblue', 'red'
evokeds = list()
stcs = list()
methods_ordered = list()
for n_train in samples_epochs:
# estimate covs based on a subset of samples
# make sure we have the same number of conditions.
events_ = np.concatenate([events[events[:, 2] == id_][:n_train]
for id_ in [event_ids[k] for k in conditions]])
epochs_train = mne.Epochs(raw, events_, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
epochs_train.equalize_event_counts(event_ids, copy=False)
assert len(epochs_train) == 2 * n_train
noise_covs = compute_covariance(
epochs_train, method=method, tmin=None, tmax=0, # baseline only
return_estimators=True) # returns list
# prepare contrast
evokeds = [epochs_train[k].average() for k in conditions]
del epochs_train, events_
# do contrast
# We skip empirical rank estimation that we introduced in response to
# the findings in reference [1] to use the naive code path that
# triggered the behavior described in [1]. The expected true rank is
# 274 for this dataset. Please do not do this with your data but
# rely on the default rank estimator that helps regularizing the
# covariance.
stcs.append(list())
methods_ordered.append(list())
for cov in noise_covs:
inverse_operator = make_inverse_operator(evokeds[0].info, forward,
cov, loose=0.2, depth=0.8,
rank=274)
stc_a, stc_b = (apply_inverse(e, inverse_operator, lambda2, "dSPM",
pick_ori=None) for e in evokeds)
stc = stc_a - stc_b
methods_ordered[-1].append(cov['method'])
stcs[-1].append(stc)
del inverse_operator, evokeds, cov, noise_covs, stc, stc_a, stc_b
del raw, forward # save some memory
Explanation: Estimate covariances
End of explanation
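An optional check, sketched only -- it assumes you keep references to the epochs and to the covariance list instead of deleting them in the loop above. Whitening an evoked response with each covariance estimate is a quick visual test of the regularization: well-whitened baseline data should have roughly unit variance across channels.
evoked_faces = epochs_train['faces'].average()   # assumes epochs_train was not deleted yet
evoked_faces.plot_white(noise_covs[0])           # assumes the list of covariance estimates is still around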
fig, (axes1, axes2) = plt.subplots(2, 3, figsize=(9.5, 6))
def brain_to_mpl(brain):
convert image to be usable with matplotlib
tmp_path = op.abspath(op.join(op.curdir, 'my_tmp'))
brain.save_imageset(tmp_path, views=['ven'])
im = imread(tmp_path + '_ven.png')
os.remove(tmp_path + '_ven.png')
return im
for ni, (n_train, axes) in enumerate(zip(samples_epochs, (axes1, axes2))):
# compute stc based on worst and best
ax_dynamics = axes[1]
for stc, ax, method, kind, color in zip(stcs[ni],
axes[::2],
methods_ordered[ni],
['best', 'worst'],
colors):
brain = stc.plot(subjects_dir=subjects_dir, hemi='both', clim=clim)
brain.set_time(175)
im = brain_to_mpl(brain)
brain.close()
del brain
ax.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.imshow(im)
ax.set_title('{0} ({1} epochs)'.format(kind, n_train * 2))
# plot spatial mean
stc_mean = stc.data.mean(0)
ax_dynamics.plot(stc.times * 1e3, stc_mean,
label='{0} ({1})'.format(method, kind),
color=color)
# plot spatial std
stc_var = stc.data.std(0)
ax_dynamics.fill_between(stc.times * 1e3, stc_mean - stc_var,
stc_mean + stc_var, alpha=0.2, color=color)
# signal dynamics worst and best
ax_dynamics.set_title('{0} epochs'.format(n_train * 2))
ax_dynamics.set_xlabel('Time (ms)')
ax_dynamics.set_ylabel('Source Activation (dSPM)')
ax_dynamics.set_xlim(tmin * 1e3, tmax * 1e3)
ax_dynamics.set_ylim(-3, 3)
ax_dynamics.legend(loc='upper left', fontsize=10)
fig.subplots_adjust(hspace=0.4, left=0.03, right=0.98, wspace=0.07)
fig.canvas.draw()
fig.show()
Explanation: Show the resulting source estimates
End of explanation |
14,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization
Step6: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code
Step9: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code
Step10: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
Step11: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
Step12: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note
Step13: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
Step14: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
Step15: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
Step16: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
Step17: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
Step18: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
Step19: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
Step20: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
Step21: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
Step22: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
Step23: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
Step24: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
Step25: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
Step26: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure
Step27: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
Step29: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note
Step31: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points
Step32: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
Step33: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note | Python Code:
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Batch Normalization – Lesson
What is it?
What are its benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
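Before diving into the TensorFlow class below, here is a minimal NumPy sketch of the core computation batch normalization performs on one mini-batch (illustrative only -- the network below delegates all of this, plus the population statistics used at inference time, to tf.layers.batch_normalization):
def batch_norm_forward(x, gamma, beta, epsilon=1e-5):
    # x: (batch_size, num_features) linear outputs for one mini-batch
    mu = x.mean(axis=0)                         # per-feature batch mean
    var = x.var(axis=0)                         # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + epsilon)   # normalize to zero mean, unit variance
    return gamma * x_hat + beta                 # learned scale (gamma) and shift (beta)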
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
Trains a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default True)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calucated while training the model.
pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
def plot_training_accuracies(*args, **kwargs):
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
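The points above, condensed into a standalone sketch of the pattern (TF 1.x API, as used throughout this notebook); the layer sizes and learning rate are arbitrary choices made for illustration:
is_training = tf.placeholder(tf.bool, name="sketch_is_training")
inputs = tf.placeholder(tf.float32, [None, 784])
weights = tf.Variable(tf.truncated_normal([784, 100], stddev=0.05))
linear_output = tf.matmul(inputs, weights)                   # note: no bias term with batch normalization
layer_output = tf.nn.relu(tf.layers.batch_normalization(linear_output, training=is_training))
# ... build the rest of the network and a cross_entropy loss here, then wrap the train step:
# with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
#     train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)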
train_and_test(False, 0.01, tf.nn.relu)
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
train_and_test(False, 0.01, tf.nn.sigmoid)
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.relu)
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.relu)
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
train_and_test(False, 1, tf.nn.sigmoid)
Explanation: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
train_and_test(False, 2, tf.nn.relu)
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
train_and_test(False, 2, tf.nn.sigmoid)
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
train_and_test(True, 0.01, tf.nn.relu)
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
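As a rough sketch of what that means in code (the shape below simply matches the first hidden layer used elsewhere in this notebook; the exact helper that builds these weights isn't shown at this point):
python
import numpy as np
reasonable_weights = np.random.normal(size=(784, 100), scale=0.05).astype(np.float32)  # small values near zero
bad_weights = np.random.normal(size=(784, 100), scale=5.0).astype(np.float32)          # standard deviation of 5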
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
train_and_test(True, 0.01, tf.nn.sigmoid)
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
train_and_test(True, 1, tf.nn.relu)
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
train_and_test(True, 1, tf.nn.sigmoid)
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
train_and_test(True, 2, tf.nn.relu)
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
train_and_test(True, 2, tf.nn.sigmoid)
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
train_and_test(True, 1, tf.nn.relu)
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
train_and_test(True, 2, tf.nn.relu)
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
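To tie the four equations together, here is a minimal NumPy sketch of the whole calculation for one layer's linear output (gamma and beta are held fixed at their initial values here; in a real network they are learned):
python
import numpy as np
linear_output = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # toy batch: 3 examples, 2 nodes
epsilon = 0.001
gamma, beta = np.ones(2), np.zeros(2)
mu_B = linear_output.mean(axis=0)                      # batch mean, one value per node
sigma2_B = ((linear_output - mu_B) ** 2).mean(axis=0)  # batch variance
x_hat = (linear_output - mu_B) / np.sqrt(sigma2_B + epsilon)
y = gamma * x_hat + beta                               # batch-normalized output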
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
def batch_norm_test(test_training_accuracy):
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
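If you do want to go that far, one possible drop-in replacement for the tf.nn.moments call is sketched below; it is our own sketch rather than code from the notebook, but it follows the mean and variance equations directly:
python
batch_mean = tf.reduce_mean(linear_output, axis=0)
batch_variance = tf.reduce_mean(tf.square(linear_output - batch_mean), axis=0)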
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
batch_norm_test(True)
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
batch_norm_test(False)
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
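You can check this with a few lines of NumPy: a "batch" of one value is its own mean, so normalization always produces zero before gamma and beta are applied.
python
import numpy as np
single_input = np.array([2.7])                               # any single value
mean, variance, epsilon = single_input.mean(), single_input.var(), 0.001
print((single_input - mean) / np.sqrt(variance + epsilon))   # [0.] - every one-input batch normalizes to zero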
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculated during training.
End of explanation |
14,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Errors and Crashes
Probably the most important chapter in this section is about how to handle error and crashes. Because at the beginning you will run into a few.
For example
Step1: Investigating the crash
Hidden, in the log file you can find the relevant information
Step2: Example Crash 2
Step3: This will give you the error
Step4: In a similar way, you will also get an error message if the input type is correct but you have a type in the name
Step5: Example Crash 3
Step6: This can be solved by using a MapNode
Step7: Now, make sure that you specify files that actually exist, otherwise you can the same problem as in crash example 1, but this time labeled as TraitError
Step8: By the way, not that those crashes don't create a crashfile, because they didn't happen during runtime, but still during workflow building.
Example Crash 4
Step9: But what does this UnicodeEncodeError mean?
UnicodeEncodeError
Step10: All seems to be fine, right? It even detects that the functional image has a temporal dimension. So what's wrong with MATLAB? To find out, let's run the script directly in matlab ourselves...
Step11: Now, here's at least a hint. At the end of the output, we get the following lines
Step12: As you can see, in this case you'll get the error
Step13: Example Crash 5
Step14: Or if you try to use SPM, but forgot to tell Nipype where to find it. If you forgot to tell the system where to find MATLAB (or MCR), than you will get same kind of error as above. But if you forgot to specify which SPM you want to use, you'll get the following RuntimeError
Step15: This gives you the error
Step16: Using input fields that don't exist
Let's see what happens if we try to specify a parameter that doesn't exist as an input field
Step17: This results in the TraitError
Step18: Now, let's create a new node and connect it to the already occupied input field in_file of the smooth node | Python Code:
from nipype import SelectFiles, Node, Workflow
from os.path import abspath as opap
from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth
# Create SelectFiles node
templates={'func': '{subject_id}/func/{subject_id}_task-flanker_run-1_bold.nii.gz'}
sf = Node(SelectFiles(templates),
name='selectfiles')
sf.inputs.base_directory = opap('/data/ds102')
sf.inputs.subject_id = 'sub-06'
# Create Motion Correction Node
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name='mcflirt')
# Create Smoothing node
smooth = Node(IsotropicSmooth(fwhm=4),
name='smooth')
# Create a preprocessing workflow
wf = Workflow(name="preprocWF")
wf.base_dir = 'working_dir'
# Connect the three nodes to each other
wf.connect([(sf, mcflirt, [("func", "in_file")]),
(mcflirt, smooth, [("out_file", "in_file")])])
# Let's run the workflow
wf.run()
Explanation: Errors and Crashes
Probably the most important chapter in this section is about how to handle errors and crashes, because at the beginning you will run into a few.
For example:
You specified file names or paths that don't exist.
You try to give an interface a string as input, where a float value is expected or you try to specify a parameter that doesn't exist. Be sure to use the right input type and input name.
You wanted to give a list of inputs [func1.nii, func2.nii, func3.nii] to a node that only expects one input file. MapNode is your solution.
You wanted to run SPM's motion correction on compressed NIfTI files, i.e. *.nii.gz? SPM cannot handle that. Nipype's Gunzip interface can help.
You haven't set up all necessary environment variables. Nipype for example doesn't find your MATLAB or SPM version.
You forget to specify a mandatory input field.
You try to connect a node to an input field that another node is already connected to.
Important note about crashfiles: crashfiles are only created when you run a workflow, not while building it. If you have, say, a typo in a file or folder path that you assign directly to a node input, the error is raised while the workflow is being built, and because it didn't happen at runtime, no crashfile is created.
Example Crash 1: File doesn't exist
When creating a new workflow, very often the initial errors are IOError, meaning Nipype cannot find the right files. For example, let's try to run a workflow on sub-06, that in our dataset doesn't exist.
Creating the crash
End of explanation
!nipype_display_crash /home/jovyan/work/notebooks/crash-*selectfiles-*.pklz
Explanation: Investigating the crash
Hidden, in the log file you can find the relevant information:
IOError: No files were found matching func template: /data/ds102/sub-06/func/sub-06_task-flanker_run-1_bold.nii.gz
Interface SelectFiles failed to run.
170301-13:04:17,458 workflow INFO:
***********************************
170301-13:04:17,460 workflow ERROR:
could not run node: preprocWF.selectfiles
170301-13:04:17,461 workflow INFO:
crashfile: /home/jovyan/work/notebooks/crash-20170301-130417-mnotter-selectfiles-45206d1b-73d9-4e03-a91e-437335577b8d.pklz
170301-13:04:17,462 workflow INFO:
This part tells you that it's an IOError and that it looked for the file /data/ds102/sub-06/func/sub-06_task-flanker_run-1_bold.nii.gz.
After the line ***********************************, you can additionally see that it's the node preprocWF.selectfiles that crashed, and that you can find a crashfile for this crash under /home/jovyan/work/notebooks/.
Reading the crashfile
To get the full picture of the error, we can read the content of the crashfile with the bash command nipype_display_crash. We will get the same information as above, but additionally, we can also see directly the input values of the Node that crashed.
End of explanation
from nipype.interfaces.fsl import IsotropicSmooth
smooth = IsotropicSmooth(fwhm='4')
Explanation: Example Crash 2: Wrong Input Type or Typo in the parameter
Very simple: if an interface expects a float as input but you give it a string, it will crash:
End of explanation
IsotropicSmooth.help()
Explanation: This will give you the error: TraitError: The 'fwhm' trait of an IsotropicSmoothInput instance must be a float, but a value of '4' <type 'str'> was specified.
To make sure that you are using the right input types, just check the help section of a given interface. There you can see fwhm: (a float).
End of explanation
from nipype.interfaces.fsl import IsotropicSmooth
smooth = IsotropicSmooth(output_type='NIFTIiii')
Explanation: In a similar way, you will also get an error message if the input type is correct but you have a typo in the name:
TraitError: The 'output_type' trait of an IsotropicSmoothInput instance must be u'NIFTI_PAIR' or u'NIFTI_PAIR_GZ' or u'NIFTI_GZ' or u'NIFTI', but a value of 'NIFTIiii' <type 'str'> was specified.
End of explanation
from nipype.algorithms.misc import Gunzip
from nipype.pipeline.engine import Node
files = ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz',
'/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz']
gunzip = Node(Gunzip(), name='gunzip',)
gunzip.inputs.in_file = files
Explanation: Example Crash 3: Giving an array as input where a single file is expected
As you can see in the MapNode example, if you try to feed an array as an input into a field that only expects a single file, you will get a TraitError.
End of explanation
from nipype.pipeline.engine import MapNode
gunzip = MapNode(Gunzip(), name='gunzip', iterfield=['in_file'])
gunzip.inputs.in_file = files
Explanation: This can be solved by using a MapNode:
End of explanation
files = ['/data/ds102/sub-06/func/sub-06_task-flanker_run-1_bold.nii.gz',
'/data/ds102/sub-06/func/sub-06_task-flanker_run-2_bold.nii.gz']
gunzip.inputs.in_file = files
Explanation: Now, make sure that you specify files that actually exist, otherwise you get the same problem as in crash example 1, but this time labeled as TraitError:
TraitError: Each element of the 'in_file' trait of a DynamicTraitedSpec instance must be an existing file name, but a value of '/data/ds102/sub-06/func/sub-06_task-flanker_run-1_bold.nii.gz' <type 'str'> was specified.
End of explanation
from nipype.interfaces.spm import Realign
realign = Realign(in_files='/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz')
realign.run()
Explanation: By the way, note that those crashes don't create a crashfile, because they happen during workflow building rather than at runtime.
Example Crash 4: SPM doesn't like *.nii.gz files
SPM12 cannot handle compressed NIfTI files (*.nii.gz). If you try to run the node nonetheless, it can give you different kinds of problems:
SPM Problem 1 with *.nii.gz files
SPM12 has a problem with handling *.nii.gz files. To SPM, a compressed functional image has no temporal dimension and therefore seems to be just a 3D file. So if we try to run the Realign interface on a compressed file, we will get a weird UnicodeEncodeError.
End of explanation
!cat /home/jovyan/work/notebooks/pyscript_realign.m
Explanation: But what does this UnicodeEncodeError mean?
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf7' in position 7984: ordinal not in range(128)
Well, to find out, we need to dig a bit deeper and check the corresponding MATLAB script, because every SPM interface creates an executable MATLAB script, either in the current location or in the folder of the node. So what's written in this script?
End of explanation
!/opt/spm12/run_spm12.sh /opt/mcr/v91/ batch pyscript_realign.m
Explanation: All seems to be fine, right? It even detects that the functional image has a temporal dimension. So what's wrong with MATLAB? To find out, let's run the script directly in matlab ourselves...
End of explanation
from nipype.interfaces.spm import Smooth
smooth = Smooth(in_files='/data/ds102/sub-01/anat/sub-01_T1w.nii.gz')
smooth.run()
Explanation: Now, here's at least a hint. At the end of the output, we get the following lines:
Item 'Session', field 'val': Number of matching files (0) less than required (1).
MATLAB code threw an exception:
No executable modules, but still unresolved dependencies or incomplete module inputs.
It's not too clear from the output, but MATLAB tries to tell you that it cannot read the compressed NIfTI files. Therefore, it doesn't find a single NIfTI file (0 matching files, required 1).
Solve this issue by unzipping the compressed NIfTI file before giving it as an input to an SPM node. This can either be done with Nipype's Gunzip interface or, even better, if the input is coming from an FSL interface, by using its output_type input field, which you can set to 'NIFTI'.
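A minimal sketch of both options (the node names and the file path below are just illustrative):
python
from nipype import Node, Workflow
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.fsl import MCFLIRT
from nipype.interfaces.spm import Smooth
# Option 1: unzip the compressed file before it reaches the SPM node
gunzip = Node(Gunzip(in_file='/data/ds102/sub-01/anat/sub-01_T1w.nii.gz'), name='gunzip')
smooth = Node(Smooth(), name='smooth')
wf = Workflow(name='unzipWF')
wf.connect(gunzip, 'out_file', smooth, 'in_files')
# Option 2: ask the upstream FSL interface for uncompressed output
mcflirt = Node(MCFLIRT(output_type='NIFTI'), name='mcflirt')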
SPM problem 2 with *.nii.gz files
Even worse than the problem before, it might even be possible that SPM doesn't tell you at all what the problem is:
End of explanation
!/opt/spm12/run_spm12.sh /opt/mcr/v91/ batch pyscript_smooth.m
Explanation: As you can see, in this case you'll get the error:
FileNotFoundError: File/Directory '[u'/data/workflow/smooth/ssub-01_T1w.nii.gz']' not found for Smooth output 'smoothed_files'.
Interface Smooth failed to run.
It's easy to overlook the additional s in front of the file name. The problem is, the error tells you that it cannot find the output file of smooth, but doesn't tell you what the problem in MATLAB was.
And even if you run the MATLAB script yourself, you will get no hints. In this case, good luck...
...
------------------------------------------------------------------------
Running job #1
------------------------------------------------------------------------
Running 'Smooth'
Done 'Smooth'
Done
End of explanation
from nipype.interfaces.freesurfer import MRIConvert
convert = MRIConvert(in_file='/data/ds102/sub-01/anat/sub-01_T1w.nii.gz',
out_type='nii')
Explanation: Example Crash 5: Nipype cannot find the right software
Especially at the beginning, just after installation, you sometimes forget to specify some environment variables. If you try to use an interface where the environment variables of the software are not specified, you'll get errors such as:
IOError: command 'mri_convert' could not be found on host mnotter
Interface MRIConvert failed to run.
End of explanation
from nipype.interfaces.spm import Realign
realign = Realign(register_to_mean=True)
realign.run()
Explanation: Or if you try to use SPM but forgot to tell Nipype where to find it. If you forgot to tell the system where to find MATLAB (or MCR), then you will get the same kind of error as above. But if you forgot to specify which SPM you want to use, you'll get the following RuntimeError:
Standard error:
MATLAB code threw an exception:
SPM not in matlab path
You can solve this issue by specifying the path to your SPM version:
python
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/usr/local/MATLAB/R2017a/toolbox/spm12')
Example Crash 6: You forget mandatory inputs or use input fields that don't exist
One of the simpler errors are the ones connected to input and output fields.
Forgetting mandatory input fields
Let's see what happens if you forget a [Mandatory] input field.
End of explanation
realign.help()
Explanation: This gives you the error:
ValueError: Realign requires a value for input 'in_files'. For a list of required inputs, see Realign.help()
As described by the error text, if we use the help() function, we can actually see which inputs are mandatory and which are optional.
End of explanation
from nipype.interfaces.afni import Despike
despike = Despike(in_file='../../ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz',
output_type='NIFTI')
despike.run()
Explanation: Using input fields that don't exist
Let's see what happens if we try to specify a parameter that doesn't exist as an input field:
End of explanation
from nipype import SelectFiles, Node, Workflow
from os.path import abspath as opap
from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth
# Create SelectFiles node
templates={'func': '{subject_id}/func/{subject_id}_task-flanker_run-1_bold.nii.gz'}
sf = Node(SelectFiles(templates),
name='selectfiles')
sf.inputs.base_directory = opap('/data/ds102')
sf.inputs.subject_id = 'sub-01'
# Create Motion Correction Node
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name='mcflirt')
# Create Smoothing node
smooth = Node(IsotropicSmooth(fwhm=4),
name='smooth')
# Create a preprocessing workflow
wf = Workflow(name="preprocWF")
wf.base_dir = 'working_dir'
# Connect the three nodes to each other
wf.connect([(sf, mcflirt, [("func", "in_file")]),
(mcflirt, smooth, [("out_file", "in_file")])])
Explanation: This results in the TraitError:
TraitError: Cannot set the undefined 'output_type' attribute of a 'DespikeInputSpec' object.
So what went wrong? If you use the help() function, you will see that the correct input field is called outputtype and not output_type.
Example Crash 7: Trying to connect a node to an input field that is already occupied
Sometimes when you build a new workflow, you might forget that an output field was already connected and you try to connect a new node to the already occupied field.
First, let's create a simple workflow:
End of explanation
# Create a new node
mcflirt_NEW = Node(MCFLIRT(mean_vol=True),
name='mcflirt_NEW')
# Connect it to an already connected input field
wf.connect([(mcflirt_NEW, smooth, [("out_file", "in_file")])])
Explanation: Now, let's create a new node and connect it to the already occupied input field in_file of the smooth node:
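Nipype will refuse this second connection, because in_file of smooth already has a source. If you really do want to swap in the new node, one way out - sketched here under the assumption that Workflow.disconnect accepts the same list format as connect - is to remove the old edge first:
python
wf.disconnect([(mcflirt, smooth, [("out_file", "in_file")])])
wf.connect([(mcflirt_NEW, smooth, [("out_file", "in_file")])])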
End of explanation |
14,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: 1. Produce some samples
Recalling our Probability Essentials tutorial, let's jump right into the case of 2 correlated Gaussian variables.
First we need some samples to work with. In this bivariate Gaussian case, scipy has a function that could generate samples for us, but instead let's test your understanding of what we've already covered.
First, let's specify the parameters of the distribution, as before
Step2: Now, make an $N\times2$ table with each row being an $(x,y)$ pair, and $N$ being some large number of samples. Instead of using some fancy function to directly obtain samples of $(x,y)$, do this
Step3: Let's have a look. Qualitatively compare the plot below with the heatmap of $p(x,y)$ you made in the previous tutorial, for the case of correlated variables.
Step4: 2. Marginalization
Next, we'll look at estimates of the 1D marginal distributions from our samples. First, copy your implementations of the analytic solutions for these distributions from the previous notebook, so we have a known answer to compare to.
Step5: If you read ahead to the Monte Carlo Sampling notes, you've seen that the way we estimate a PDF from samples is by making a histogram (which simply records the density of the samples). Furthermore, the way we marginalize over a variable is incredibly simple - we just ignore it. So, estimating the marginal distribution of $x$ (or $y$) is as simple as making a histogram of the first (or second) column of samples.
To get a normalized histogram, we use the density=True option below - this simply divides the number of samples in each histogram bin by the bin width and the total number of samples. Notice that we don't have one of the sanity checks here that we did previously, namely the ability to explicitly check that our expression for the marginal PDF was normalized.
Below we plot the density of samples generated above, using hist, and compare with your analytic solution from the previous notebook.
Step6: 3. Conditioning
Again, pull in your analytic solutions from the previous notebook.
Step7: Conditioning is a little less straightforward than marginalization. In principle, if we want to condition on $x=x_0$, we would want to make a histogram of $y$ values for samples that have $x=x_0$. But we'd have to be incredibly lucky for any of our samples of $p(x,y)$ to satisfy $x=x_0$ exactly!
One natural (and necessarily approximate) solution is to work with samples that are close to $x=x_0$, within some window. To that end, store in j_fixed_x a list of indices into samples[
Step8: Now let's see how histograms of the samples you selected compare with the analytic solution in each case. Feel free to fiddle with the value of $\epsilon$ (and also the bins option to hist, below). How does it look?
Step9: One obvious issue with this approach is that we end up with potentially many fewer samples in our estimate of the conditional distribution than we started with. Let's see what fraction of the samples are actually used in each of the histograms above
Step10: There's not much one can do about this "waste" of samples, other than to take the time to generate samples directly from the conditional distribution, if we care that much.
However, we might get slightly better (or smoother) results by changing the nature of the window used to select samples. For example, instead of completely throwing away samples that are farther than $\epsilon$ from a value we want to condition on, we could use all the samples, weighted in such a way that the samples far from $x_0$ or $y_0$ contribute much less than those that are nearby. To that end, compute Gaussian weights for each sample based on their distance from $y_0$ (or $x_0$), with a standard deviation that you again get to pick.
Step11: Again, fiddle with the standard deviation and/or display binning to see what you can do. (Note the use of the weights option.)
Step12: Chances are that neither of these options looks great, so if we really cared about having the conditional PDF mapped well we would either want more samples, or we would need to sample from the conditional PDF directly instead of dealing with the conditioning this way.
On the other hand, if we just wanted to estimate something simple about the conditional PDF, say its mean, this might be good enough. Using the Gaussian-weighting method, the estimated mean of $x|y=3.8$ is
Step13: ... compared with the exact mean of $-0.5$.
4. Importance weighting
Let's go a little farther and think more generally about the marginal distribution of $y$ from the product of $p(x,y)$ and some other PDF, $q(x)$. Imagine that we have samples of $p(x,y)$ that were expensive to get, while $q(x)$ is straightforward to evaluate for any $x$. Then, instead of investing a lot of time in generating new samples from $p(x,y)\,q(x)$, we might want to do something like the weighting procedure above, which is called importance weighting.
We can think of conditioning as importance weighting with a PDF that says that $x$ must be really close to $x_0$, for example. But now, let's consider a different case. To keep it simple, let's say that $q(x) = \mathrm{Normal}(x|4,1)$. The weights for each sample are
Step14: Looking at the marginal distributions, you can see how the marginal distribution of $x$ is, naturally, pulled to larger $x$, but also the PDF of $y$ is pulled to lower $y$ due to the negative correlation of $p(x,y)$.
Step15: Did this work perfectly? We haven't bothered to work out the analytic solution for this product of PDFs, but we do know that it's a Gaussian, and so the histograms above should also have Gaussian shapes. Which they almost do... but the tails don't quite look equally heavy. This is because the original set of samples doesn't cover very well the entire region where the product $p(x,y) \, q(x)$ is large. Note that the weighting is still exactly right here - it's just that the final PDF estimate gets noisier as the number of samples falls (as we go to extreme values of $x$ and $y$), and eventually there are none left to re-weight! If $q(x)$ has been skinnier, or if its mean had been closer to the mean of $p(x)$, we would have been fine.
With this in mind, contrive a $q_2(x)$ such that the procedure above fails... badly.
$q_2(x) = $ | Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import numpy as np
import scipy.stats as st
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Tutorial: Working with Samples
In practice, we almost always work with samples from probability distributions rather than analytic or on-a-grid evaluations. Here we'll see how to do all the fun probability manipulations you've previously done (analytically and with grids) in the Probability Essentials tutorial, this time with Monte Carlo samples.
Specifically, you will learn to:
generate random samples from a probability distribution
estimate marginal and conditional probabilities from Monte Carlo samples
identify when importance weighting might fail due to the samples' coverage
Warning: This notebook comes a little out of order. We will be doing things that are covered conceptually in the Monte Carlo Sampling notes, so you may want to jump ahead and read their short Motivation section. Or you could just take our word for a lot of things. This isn't ideal, but we think it's important to get you working with samples relatively early, even if the reasons why aren't fully apparent until later.
End of explanation
cor = {'mx':1.0, 'my':2.3, 'sx':1.0, 'sy':0.5, 'r':-0.5} # parameter values
# Note that we do not need to explicitly define a function for the density this time!
Explanation: 1. Produce some samples
Recalling our Probability Essentials tutorial, let's jump right into the case of 2 correlated Gaussian variables.
First we need some samples to work with. In this bivariate Gaussian case, scipy has a function that could generate samples for us, but instead let's test your understanding of what we've already covered.
First, let's specify the parameters of the distribution, as before:
End of explanation
N = 100000 # number of samples
samples = np.empty((N,2))
TBC()
# samples[:,0] = np.random.normal( ...
# samples[:,1] = np.random.normal( ...
Explanation: Now, make an $N\times2$ table with each row being an $(x,y)$ pair, and $N$ being some large number of samples. Instead of using some fancy function to directly obtain samples of $(x,y)$, do this:
1. Fill the first column of the table ($x$) with samples from the marginal distribution, $p(x)$.
2. Fill the second column ($y$) with samples from the conditional distribution, $p(y|x)$.
You should have expressions for these two distributions already from the previous notebook.
End of explanation
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.plot(samples[:,0], samples[:,1], 'b.');
plt.xlabel('x');
plt.ylabel('y');
Explanation: Let's have a look. Qualitatively compare the plot below with the heatmap of $p(x,y)$ you made in the previous tutorial, for the case of correlated variables.
End of explanation
def p_x(x, mx, my, sx, sy, r):
return TBC()
def p_y(y, mx, my, sx, sy, r):
return TBC()
TBC_above()
# these are only used for plotting the above functions this time
xvalues = np.arange(-4.0, 6.0, 0.1)
yvalues = np.arange(-0.2, 4.8, 0.1)
Explanation: 2. Marginalization
Next, we'll look at estimates of the 1D marginal distributions from our samples. First, copy your implementations of the analytic solutions for these distributions from the previous notebook, so we have a known answer to compare to.
End of explanation
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1,2);
ax[0].hist(samples[:,0], bins=25, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[0].plot(xvalues, p_x(xvalues, **cor), 'r-', label='analytic');
ax[0].set_xlabel('x');
ax[0].set_ylabel('p(x)');
ax[0].legend();
ax[1].hist(samples[:,1], bins=25, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[1].plot(yvalues, p_y(yvalues, **cor), 'r-', label='analytic');
ax[1].set_xlabel('y');
ax[1].set_ylabel('p(y)');
Explanation: If you read ahead to the Monte Carlo Sampling notes, you've seen that the way we estimate a PDF from samples is by making a histogram (which simply records the density of the samples). Furthermore, the way we marginalize over a variable is incredibly simple - we just ignore it. So, estimating the marginal distribution of $x$ (or $y$) is as simple as making a histogram of the first (or second) column of samples.
To get a normalized histogram, we use the density=True option below - this simply divides the number of samples in each histogram bin by the bin width and the total number of samples. Notice that we don't have one of the sanity checks here that we did previously, namely the ability to explicitly check that our expression for the marginal PDF was normalized.
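If you miss that check, you can still do it numerically: multiplying each bin's density by its width and summing should give 1 (up to floating-point error). For example:
python
counts, edges = np.histogram(samples[:,0], bins=25, density=True)
print(np.sum(counts * np.diff(edges)))   # should be very close to 1.0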
Below we plot the density of samples generated above, using hist, and compare with your analytic solution from the previous notebook.
End of explanation
def p_x_given_y(x, y, mx, my, sx, sy, r):
return TBC()
def p_y_given_x(y, x, mx, my, sx, sy, r):
return TBC()
TBC_above()
# continuing to follow previous notebook
fixed_x = -1.0
fixed_y = 3.8
Explanation: 3. Conditioning
Again, pull in your analytic solutions from the previous notebook.
End of explanation
TBC()
# j_fixed_x = np.where( ...
# j_fixed_y = np.where( ...
Explanation: Conditioning is a little less straightforward than marginalization. In principle, if we want to condition on $x=x_0$, we would want to make a histogram of $y$ values for samples that have $x=x_0$. But we'd have to be incredibly lucky for any of our samples of $p(x,y)$ to satisfy $x=x_0$ exactly!
One natural (and necessarily approximate) solution is to work with samples that are close to $x=x_0$, within some window. To that end, store in j_fixed_x a list of indices into samples[:,0] (i.e. row numbers) where $|y-y_0|<\epsilon$, where $y_0$ is fixed_y, above, and $\epsilon$ is a threshold of your choice. Do the equivalent for $x$ and fixed_x also.
End of explanation
# fiddled with "bins" here to get something like the same resolution as elsewhere
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1,2);
ax[0].hist(samples[j_fixed_y,0], bins=10, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[0].plot(xvalues, p_x_given_y(xvalues, fixed_y, **cor), 'r-', label='analytic');
ax[0].set_xlabel('x');
ax[0].set_ylabel('p(x|y=' + str(fixed_y) + ')');
ax[0].legend();
ax[1].hist(samples[j_fixed_x,1], bins=20, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[1].plot(yvalues, p_y_given_x(yvalues, fixed_x, **cor), 'r-', label='analytic');
ax[1].set_xlabel('y');
ax[1].set_ylabel('p(y|x=' + str(fixed_x) + ')');
Explanation: Now let's see how histograms of the samples you selected compare with the analytic solution in each case. Feel free to fiddle with the value of $\epsilon$ (and also the bins option to hist, below). How does it look?
End of explanation
print(len(j_fixed_x) / samples.shape[0])
print(len(j_fixed_y) / samples.shape[0])
Explanation: One obvious issue with this approach is that we end up with potentially many fewer samples in our estimate of the conditional distribution than we started with. Let's see what fraction of the samples are actually used in each of the histograms above:
End of explanation
TBC()
# weights_fixed_x = st.norm.pdf( ...
# weights_fixed_y = st.norm.pdf( ...
Explanation: There's not much one can do about this "waste" of samples, other than to take the time to generate samples directly from the conditional distribution, if we care that much.
However, we might get slightly better (or smoother) results by changing the nature of the window used to select samples. For example, instead of completely throwing away samples that are farther than $\epsilon$ from a value we want to condition on, we could use all the samples, weighted in such a way that the samples far from $x_0$ or $y_0$ contribute much less than those that are nearby. To that end, compute Gaussian weights for each sample based on their distance from $y_0$ (or $x_0$), with a standard deviation that you again get to pick.
End of explanation
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1,2);
ax[0].hist(samples[:,0], weights=weights_fixed_y, bins=25, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[0].plot(xvalues, p_x_given_y(xvalues, fixed_y, **cor), 'r-', label='analytic');
ax[0].set_xlabel('x');
ax[0].set_ylabel('p(x|y=' + str(fixed_y) + ')');
ax[0].legend();
ax[1].hist(samples[:,1], weights=weights_fixed_x, bins=25, density=True, histtype='step', color='b', linewidth=2, label='samples');
ax[1].plot(yvalues, p_y_given_x(yvalues, fixed_x, **cor), 'r-', label='analytic');
ax[1].set_xlabel('y');
ax[1].set_ylabel('p(y|x=' + str(fixed_x) + ')');
Explanation: Again, fiddle with the standard deviation and/or display binning to see what you can do. (Note the use of the weights option.)
End of explanation
np.sum(samples[:,0]*weights_fixed_y) / np.sum(weights_fixed_y)
Explanation: Chances are that neither of these options looks great, so if we really cared about having the conditional PDF mapped well we would either want more samples, or we would need to sample from the conditional PDF directly instead of dealing with the conditioning this way.
On the other hand, if we just wanted to estimate something simple about the conditional PDF, say its mean, this might be good enough. Using the Gaussian-weighting method, the estimated mean of $x|y=3.8$ is:
End of explanation
weights_q = st.norm.pdf(samples[:,0], loc=4.0, scale=1.0)
Explanation: ... compared with the exact mean of $-0.5$.
4. Importance weighting
Let's go a little farther and think more generally about the marginal distribution of $y$ from the product of $p(x,y)$ and some other PDF, $q(x)$. Imagine that we have samples of $p(x,y)$ that were expensive to get, while $q(x)$ is straightforward to evaluate for any $x$. Then, instead of investing a lot of time in generating new samples from $p(x,y)\,q(x)$, we might want to do something like the weighting procedure above, which is called importance weighting.
We can think of conditioning as importance weighting with a PDF that says that $x$ must be really close to $x_0$, for example. But now, let's consider a different case. To keep it simple, let's say that $q(x) = \mathrm{Normal}(x|4,1)$. The weights for each sample are:
End of explanation
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1,2);
ax[0].hist(samples[:,0], weights=weights_q, bins=25, density=True, histtype='step', color='b', linewidth=2, label='weighted samples');
ax[0].plot(xvalues, p_x(xvalues, **cor), 'r-', label='just p(x,y)');
ax[0].set_xlabel('x');
ax[0].set_ylabel('marginal prob of x');
ax[0].legend();
ax[1].hist(samples[:,1], weights=weights_q, bins=25, density=True, histtype='step', color='b', linewidth=2, label='weighted samples');
ax[1].plot(yvalues, p_y(yvalues, **cor), 'r-', label='just p(x,y)');
ax[1].set_xlabel('y');
ax[1].set_ylabel('marginal prob of y');
Explanation: Looking at the marginal distributions, you can see how the marginal distribution of $x$ is, naturally, pulled to larger $x$, but also the PDF of $y$ is pulled to lower $y$ due to the negative correlation of $p(x,y)$.
End of explanation
TBC() # weights_q2 = st.norm.pdf(samples[:,0], ...
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1,2);
ax[0].hist(samples[:,0], weights=weights_q2, bins=25, density=True, histtype='step', color='b', linewidth=2, label='weighted samples');
ax[0].plot(xvalues, p_x(xvalues, **cor), 'r-', label='just p(x,y)');
ax[0].set_xlabel('x');
ax[0].set_ylabel('marginal prob of x');
ax[0].legend();
ax[1].hist(samples[:,1], weights=weights_q2, bins=25, density=True, histtype='step', color='b', linewidth=2, label='weighted samples');
ax[1].plot(yvalues, p_y(yvalues, **cor), 'r-', label='just p(x,y)');
ax[1].set_xlabel('y');
ax[1].set_ylabel('marginal prob of y');
Explanation: Did this work perfectly? We haven't bothered to work out the analytic solution for this product of PDFs, but we do know that it's a Gaussian, and so the histograms above should also have Gaussian shapes. Which they almost do... but the tails don't quite look equally heavy. This is because the original set of samples doesn't cover very well the entire region where the product $p(x,y) \, q(x)$ is large. Note that the weighting is still exactly right here - it's just that the final PDF estimate gets noisier as the number of samples falls (as we go to extreme values of $x$ and $y$), and eventually there are none left to re-weight! If $q(x)$ has been skinnier, or if its mean had been closer to the mean of $p(x)$, we would have been fine.
With this in mind, contrive a $q_2(x)$ such that the procedure above fails... badly.
$q_2(x) = $
End of explanation |
14,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Statement
There are influenza viruses that are collected from the "environment", or have an "unknown" host. How do we infer which hosts it came from? Well, that sounds like a Classification problem.
Step1: Train/Test Split
We're almost ready for training a machine learning model to classify the unknown hosts based on their sequence.
Here's the proper procedure.
Split the labelled data into a training and testing set. (~70 train/30 test to 80 train/20 test)
Train and evaluate a model on the training set.
Make predictions on the testing set, evaluate the model on testing set accuracy.
This procedure is known as cross-validation, and is a powerful, yet cheap & easy method for evaluating how good a particular supervised learning model works.
Step2: How do we evaluate how good the classification task performed?
For binary classification, the Receiver-Operator Characteristic curve is a great way to evaluate a classification task.
For multi-label classification, which is the case we have here, accuracy score is a great starting place.
Step3: What about those sequences for which the hosts were unknown?
We can run the predict(unknown_Xs) to predict what their hosts were likely to be, given their sequence.
Step4: What this gives us is the class label with the highest probability of being the correct one.
While we will not do this here, at this point, it would be a good idea to double-check your work with a sanity check. Are the sequences that are predicted to be Human truly of a close sequence similarity to actual Human sequences? You may want to do a Multiple Sequence Alignment, or you might want to simply compute the Levenshtein or Hamming distance between the two sequences, as a sanity check.
How do we interpret what the classifier learned?
Depending on the classifier used, you can peer inside the model to get a feel for what the classifier learned about the features that best predict the class label.
The RandomForestClassifier provides a feature_importances_ attribute that we can access and plot. | Python Code:
# Load the sequences into memory
sequences = [s for s in SeqIO.parse('data/20160127_HA_prediction.fasta', 'fasta') if len(s.seq) == 566] # we are cheating and not bothering with an alignment.
len(sequences)
# Load the sequence IDs into memory
seqids = [s.id for s in SeqIO.parse('data/20160127_HA_prediction.fasta', 'fasta') if len(s.seq) == 566]
len(seqids)
# Cast the sequences as a MultipleSeqAlignment object, and then turn that into a pandas DataFrame.
# Note: this cell takes a while.
seq_aln = MultipleSeqAlignment(sequences)
seq_df = pd.DataFrame(np.array(seq_aln))
seq_df.head()
# Transform the df into isoelectric point features.
seq_feats = seq_df.replace(isoelectric_points.keys(), isoelectric_points.values())
seq_feats.index = seqids
seq_feats.head()
# Quick check to make sure that we have no strings:
for c in seq_feats.columns:
letters = set(seq_feats[c])
for item in letters:
assert not isinstance(item, str)
# Let us now load our labels.
labels = pd.read_csv('data/20160127_HA_prediction.csv', parse_dates=['Collection Date'])
labels['Host Species'] = labels['Host Species'].str.replace('IRD:', '').str.replace('/Avian', '')
labels['Sequence Accession'] = labels['Sequence Accession'].str.replace('*', '')
labels.set_index('Sequence Accession', inplace=True)
labels.head()
# Let's join in the labels so that we have everything in one big massive table.
data_matrix = seq_feats.join(labels['Host Species'], how='inner')
data_matrix.head()
# Quickly inspect the different labels under "host species"
# set(data_matrix['Host Species'])
# We will want to predict the labels for: "Avian", "Bird", "Environment", "Unknown", "null"
unknown_labels = ['Avian', 'Bird', 'Environment', 'Unknown', 'null']
known_labels = set(data_matrix['Host Species']) - set(unknown_labels)
# Let's further split the data into the "unknowns" and the "knowns"
unknowns = data_matrix[data_matrix['Host Species'].isin(unknown_labels)]
knowns = data_matrix[data_matrix['Host Species'].isin(known_labels)]
# Finally, we want to convert the known host species into a matrix of 1s and 0s, so that we can use them as inputs
# to the training algorithm.
lb = LabelBinarizer()
lb.fit([s for s in known_labels])
lb.transform(knowns['Host Species']) # note: this has not done anything to the original data.
Explanation: Problem Statement
There are influenza viruses that are collected from the "environment", or have an "unknown" host. How do we infer which hosts it came from? Well, that sounds like a Classification problem.
End of explanation
# Split the data into a training and testing set.
X_cols = [i for i in range(0,566)]
X = knowns[X_cols]
Y = lb.transform(knowns['Host Species'])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
# Train a Random Forest Classifier.
# Note: This cell takes a while; any questions?
# Initialize the classifier object.
clf = RandomForestClassifier()
# Train (i.e. "fit") the classifier to the training Xs and Ys
clf.fit(X_train, Y_train)
# Make predictions on the test X
preds = clf.predict(X_test)
preds
lb.inverse_transform(preds)
Explanation: Train/Test Split
We're almost ready for training a machine learning model to classify the unknown hosts based on their sequence.
Here's the proper procedure.
Split the labelled data into a training and testing set. (~70 train/30 test to 80 train/20 test)
Train and evaluate a model on the training set.
Make predictions on the testing set, evaluate the model on testing set accuracy.
This procedure is known as cross-validation, and is a powerful, yet cheap & easy method for evaluating how good a particular supervised learning model works.
End of explanation
# Let's first take a look at the accuracy score: the fraction that were classified correctly.
accuracy_score(lb.inverse_transform(Y_test), lb.inverse_transform(preds))
Explanation: How do we evaluate how good the classification task performed?
For binary classification, the Receiver-Operator Characteristic curve is a great way to evaluate a classification task.
For multi-label classification, which is the case we have here, accuracy score is a great starting place.
End of explanation
unknown_preds = clf.predict(unknowns[X_cols]) # make predictions; note: these are still dummy-encoded.
unknown_preds = lb.inverse_transform(unknown_preds) # convert dummy-encodings back to string labels.
unknown_preds
Explanation: What about those sequences for which the hosts were unknown?
We can run the predict(unknown_Xs) to predict what their hosts were likely to be, given their sequence.
End of explanation
plt.plot(clf.feature_importances_)
Explanation: What this gives us is the class label with the highest probability of being the correct one.
While we will not do this here, at this point, it would be a good idea to double-check your work with a sanity check. Are the sequences that are predicted to be Human truly of a close sequence similarity to actual Human sequences? You may want to do a Multiple Sequence Alignment, or you might want to simply compute the Levenshtein or Hamming distance between the two sequences, as a sanity check.
How do we interpret what the classifier learned?
Depending on the classifier used, you can peer inside the model to get a feel for what the classifier learned about the features that best predict the class label.
The RandomForestClassifier provides a feature_importances_ attribute that we can access and plot.
End of explanation |
14,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
911 Calls Capstone Project
For this capstone project we will be analyzing some 911 call data from Kaggle. The data contains the following fields
Step1: Import visualization libraries and set %matplotlib inline.
Step2: Read in the csv file as a dataframe called df
Step3: Check the info() of the df
Step4: Check the head of df
Step5: Basic Questions
What are the top 5 zipcodes for 911 calls?
Step6: What are the top 5 townships (twp) for 911 calls?
Step7: Take a look at the 'title' column, how many unique title codes are there?
Step8: Creating new features
In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
For example, if the title column value is EMS
Step9: What is the most common Reason for a 911 call based off of this new column?
Step10: Now use seaborn to create a countplot of 911 calls by Reason.
Step11: Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
Step12: You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.
Step13: You can now grab specific attributes from a Datetime object by calling them. For example
Step14: Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week
Step15: Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.
Step16: Now do the same for Month
Step17: Did you notice something strange about the Plot?
You should have noticed it was missing some Months, let's see if we can maybe fill in this information by plotting the information in another way, possibly a simple line plot that fills in the missing months, in order to do this, we'll need to do some work with pandas...
Now create a gropuby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.
Step18: Now create a simple plot off of the dataframe indicating the count of calls per month.
Step19: Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.
Step20: Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.
Step21: Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
Step22: Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
Step23: Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!
Step24: Now create a HeatMap using this new DataFrame. | Python Code:
import numpy as np
import pandas as pd
Explanation: 911 Calls Capstone Project
For this capstone project we will be analyzing some 911 call data from Kaggle. The data contains the following fields:
lat : String variable, Latitude
lng: String variable, Longitude
desc: String variable, Description of the Emergency Call
zip: String variable, Zipcode
title: String variable, Title
timeStamp: String variable, YYYY-MM-DD HH:MM:SS
twp: String variable, Township
addr: String variable, Address
e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
Data and Setup
Import numpy and pandas
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Import visualization libraries and set %matplotlib inline.
End of explanation
df = pd.read_csv('911.csv')
Explanation: Read in the csv file as a dataframe called df
End of explanation
df.info()
Explanation: Check the info() of the df
End of explanation
df.head()
Explanation: Check the head of df
End of explanation
df['zip'].value_counts().head(5)
Explanation: Basic Questions
What are the top 5 zipcodes for 911 calls?
End of explanation
df['twp'].value_counts().head(5)
Explanation: What are the top 5 townships (twp) for 911 calls?
End of explanation
df['title'].nunique()
Explanation: Take a look at the 'title' column, how many unique title codes are there?
End of explanation
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
Explanation: Creating new features
In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS.
End of explanation
df['Reason'].value_counts()
Explanation: What is the most common Reason for a 911 call based off of this new column?
End of explanation
sns.countplot(x='Reason',data=df,palette='viridis')
Explanation: Now use seaborn to create a countplot of 911 calls by Reason.
End of explanation
type(df['timeStamp'].iloc[0])
Explanation: Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
End of explanation
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
Explanation: You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.
End of explanation
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
Explanation: You can now grab specific attributes from a Datetime object by calling them. For example:
time = df['timeStamp'].iloc[0]
time.hour
You can use Jupyter's tab method to explore the various attributes you can call. Now that the timestamp column are actually DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column, reference the solutions if you get stuck on this step.
End of explanation
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day of Week'] = df['Day of Week'].map(dmap)
Explanation: Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week:
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
End of explanation
sns.countplot(x='Day of Week', data=df,hue='Reason', palette='viridis')
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
Explanation: Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.
End of explanation
sns.countplot(x='Month', data=df,hue='Reason', palette='viridis')
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
Explanation: Now do the same for Month:
End of explanation
byMonth = df.groupby('Month').count()
byMonth.head()
Explanation: Did you notice something strange about the Plot?
You should have noticed it was missing some Months, let's see if we can maybe fill in this information by plotting the information in another way, possibly a simple line plot that fills in the missing months, in order to do this, we'll need to do some work with pandas...
Now create a gropuby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.
End of explanation
byMonth['lat'].plot()
Explanation: Now create a simple plot off of the dataframe indicating the count of calls per month.
End of explanation
sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())
Explanation: Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.
End of explanation
df['Date'] = df['timeStamp'].apply(lambda timestamp: timestamp.date())
Explanation: Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.
End of explanation
df.groupby('Date')['lat'].count().plot()
plt.tight_layout()
Explanation: Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
End of explanation
df[df['Reason']=='Traffic'].groupby('Date')['lat'].count().plot(title='Traffic')
plt.tight_layout()
df[df['Reason']=='Fire'].groupby('Date')['lat'].count().plot(title='Fire')
plt.tight_layout()
df[df['Reason']=='EMS'].groupby('Date')['lat'].count().plot(title='EMS')
plt.tight_layout()
Explanation: Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
End of explanation
dayHour = df.groupby(by=['Day of Week', 'Hour']).count()['Reason'].unstack()
Explanation: Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!
End of explanation
sns.heatmap(dayHour)
Explanation: Now create a HeatMap using this new DataFrame.
End of explanation |
14,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mover contenido de un usuario existente a otro nuevo
Step1: Cree una conexión con el portal.
Step2: Establecer variables para el usuario actual que se está realizando la transición y para que se cree el nuevo ID de usuario
Step3: Valide que el ID de usuario original es válido y accesible.
Step4: Crear un nuevo ID de usuario
Step5: Una vez que se ha creado correctamente el nuevo usuario, reasigne la propiedad del grupo y la pertenencia a grupos del usuario antiguo al nuevo usuario.
Step6: Una vez que se ha cambiado correctamente la propiedad / pertenencia del grupo, reasigne todo el contenido del usuario original al nuevo usuario. Esto ocurre en 2 pases. En primer lugar, reasigne todo en la carpeta raíz de 'Mis contenidos'. A continuación, haga un bucle en cada carpeta, cree la misma carpeta en la nueva cuenta de usuario y reasigne los elementos de cada carpeta al nuevo usuario en la carpeta correcta. | Python Code:
from arcgis.gis import *
Explanation: Mover contenido de un usuario existente a otro nuevo
End of explanation
gis = GIS("https://ags-enterprise4.aeroterra.com/arcgis/", "PythonApi", "test123456", verify_cert=False)
Explanation: Cree una conexión con el portal.
End of explanation
orig_userid = "afernandez"
new_userid = "pmayo"
Explanation: Establecer variables para el usuario actual que se está realizando la transición y para que se cree el nuevo ID de usuario
End of explanation
olduser = gis.users.get(orig_userid)
olduser
Explanation: Valide que el ID de usuario original es válido y accesible.
End of explanation
newuser = gis.users.create(new_userid, "pm123456", "Pablo", "Mayo", \
new_userid, description=olduser.description, \
role=olduser.role, provider='arcgis', level=2)
newuser = gis.users.get(new_userid)
newuser
Explanation: Crear un nuevo ID de usuario
End of explanation
usergroups = olduser['groups']
for group in usergroups:
grp = gis.groups.get(group['id'])
if (grp.owner == orig_userid):
grp.reassign_to(new_userid)
else:
grp.add_users(new_userid)
grp.remove_users(orig_userid)
Explanation: Una vez que se ha creado correctamente el nuevo usuario, reasigne la propiedad del grupo y la pertenencia a grupos del usuario antiguo al nuevo usuario.
End of explanation
usercontent = olduser.items()
folders = olduser.folders
for item in usercontent:
try:
item.reassign_to(new_userid)
except:
print(item)
for folder in folders:
gis.content.create_folder(folder['title'], new_userid)
folderitems = olduser.items(folder=folder['title'])
for item in folderitems:
item.reassign_to(new_userid, target_folder=folder['title'])
Explanation: Una vez que se ha cambiado correctamente la propiedad / pertenencia del grupo, reasigne todo el contenido del usuario original al nuevo usuario. Esto ocurre en 2 pases. En primer lugar, reasigne todo en la carpeta raíz de 'Mis contenidos'. A continuación, haga un bucle en cada carpeta, cree la misma carpeta en la nueva cuenta de usuario y reasigne los elementos de cada carpeta al nuevo usuario en la carpeta correcta.
End of explanation |
14,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Summer School - Split '17
5. Generating images of digits with Generative Adversarial Networks
Step1: Goals
Step5: What are we going to do with the data?
We have $70000$ images of hand-written digits generated from some distribution $X \sim P_{real}$
We have $70000$ labels $y_i \in {0,..., 9}$ indicating which digit is written on the image $x_i$
Problem
Step8: 5.4 The basic network for the discriminator
Step9: Intermezzo
Step10: 5.6 Check the implementation of the classes
Step11: Drawing samples from the latent space
Step12: 5.5 Define the model loss -- Vanilla GAN
The objective for the vanilla version of the GAN was defined as follows | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os, util
Explanation: Data Science Summer School - Split '17
5. Generating images of digits with Generative Adversarial Networks
End of explanation
data_folder = 'data'; dataset = 'mnist' # the folder in which the dataset is going to be stored
download_folder = util.download_mnist(data_folder, dataset)
images, labels = util.load_mnist(download_folder)
print("Folder:", download_folder)
print("Image shape:", images.shape) # greyscale, so the last dimension (color channel) = 1
print("Label shape:", labels.shape) # one-hot encoded
show_n_images = 25
sample_images, mode = util.get_sample_images(images, n=show_n_images)
mnist_sample = util.images_square_grid(sample_images, mode)
plt.imshow(mnist_sample, cmap='gray')
sample = images[3]*50 #
sample = sample.reshape((28, 28))
print(np.array2string(sample.astype(int), max_line_width=100, separator=',', precision=0))
plt.imshow(sample, cmap='gray')
Explanation: Goals:
Implement the model from "Generative Adversarial Networks" by Goodfellow et al. (1284 citations since 2014.)
Understand how the model learns to generate realistic images
In ~two hours.
5.1 Downloading the datasets and previewing data
End of explanation
class Generator:
The generator network
the generator network takes as input a vector z of dimension input_dim, and transforms it
to a vector of size output_dim. The network has one hidden layer of size hidden_dim.
We will define the following methods:
__init__: initializes all variables by using tf.get_variable(...)
and stores them to the class, as well a list in self.theta
forward: defines the forward pass of the network - how do the variables
interact with respect to the inputs
def __init__(self, input_dim, hidden_dim, output_dim):
Constructor for the generator network. In the constructor, we will
just initialize all the variables in the network.
Args:
input_dim: The dimension of the input data vector (z).
hidden_dim: The dimension of the hidden layer of the neural network (h)
output_dim: The dimension of the output layer (equivalent to the size of the image)
with tf.variable_scope("generator"):
pass
def forward(self, z):
The forward pass of the network -- here we will define the logic of how we combine
the variables through multiplication and activation functions in order to get the
output.
pass
Explanation: What are we going to do with the data?
We have $70000$ images of hand-written digits generated from some distribution $X \sim P_{real}$
We have $70000$ labels $y_i \in {0,..., 9}$ indicating which digit is written on the image $x_i$
Problem: Imagine that the number of images we have is not enough - a common issue in computer vision and machine learning.
We can pay experts to create new images
Expensive
Slow
Realiable
We can generate new images ourselves
Cheap
Fast
Unreliable?
Problem: Not every image that we generate is going to be perfect (or even close to perfect). Therefore, we need some method to determine which images are realistic.
We can pay experts to determine which images are good enough
Expensive
Slow
Reliable
We can train a model to determine which images are good enough
Cheap
Fast
Unreliable?
Formalization
$X \sim P_{real}$ : existing images of shape $s$
$Z \sim P_z$ : a $k$-dimensional random vector
$G(z; \theta_G): Z \to \hat{X}$ : the generator, a function that transforms the random vector $z$ into an image of shape $s$
$D(x, \theta_D): X \to (Real, Fake)$ : the discriminator a function that given an image of shape $s$ decides if the image is real or fake
Details
The existing images $X$ in our setup are images from the mnist dataset. We will arbitrarily decide that vectors $z$ will be sampled from a uniform distribution, and $G$ and $D$ will both be 'deep' neural networks.
For simplicity, and since we are using the mnist dataset, both $G$ and $D$ will be multi-layer perceptrons (and not deep convolutional networks) with one hidden layer. The generated images $G(z) \sim P_{fake}$ as well as real images $x \sim P_{real}$ will be passed on to the discriminator, which will classify them into $(Real, Fake)$.
<center>
<img src="data/img/gan_general_layout.png">
<strong>Figure 1. </strong> General adversarial network architecture
</center>
Discriminator
The goal of the discriminator is to successfully recognize which image is sampled from the true distribution, and which image is sampled from the generator.
<center>
<img src="data/img/discriminator.png">
<strong>Figure 2.</strong> Discriminator network sketch
</center>
Generator
The goal of the generator is that the discriminator missclassifies the images that the generator generated as if they were generated by the true distribution.
<center>
<img src="data/img/generator.png">
<strong>Figure 3.</strong> Generator network sketch
</center>
5.2 Data transformation
Since we are going to use a fully connected network (we are not going to use local convolutional filters), we are going to flatten the input images for simplicity. Also, the pixel values are scaled to the interval $[0,1]$ (this was already done beforehand).
We will also use a pre-made Dataset class to iterate over the dataset in batches. The class is defined in util.py, and only consists of a constructor and a method next_batch.
Question: Having seen the architecture of the network, why are we the pixels scaled to $[0,1]$ and not, for example, $[-1, 1]$, or left at $[0, 255]$?
Answer:
5.3 The generator network
End of explanation
class Discriminator:
The discriminator network
the discriminator network takes as input a vector x of dimension input_dim, and transforms it
to a vector of size output_dim. The network has one hidden layer of size hidden_dim.
You will define the following methods:
__init__: initializes all variables by using tf.get_variable(...)
and stores them to the class, as well a list in self.theta
forward: defines the forward pass of the network - how do the variables
interact with respect to the inputs
def __init__(self, input_dim, hidden_dim, output_dim):
with tf.variable_scope("discriminator"):
pass
def forward(self, x):
The forward pass of the network -- here we will define the logic of how we combine
the variables through multiplication and activation functions in order to get the
output.
Along with the probabilities, also return the unnormalized probabilities
(the values in the output layer before being passed through the sigmoid function)
pass
Explanation: 5.4 The basic network for the discriminator
End of explanation
image_dim = # The dimension of the input image vector to the discrminator
discriminator_hidden_dim = # The dimension of the hidden layer of the discriminator
discriminator_output_dim = # The dimension of the output layer of the discriminator
random_sample_dim = # The dimension of the random noise vector z
generator_hidden_dim = # The dimension of the hidden layer of the generator
generator_output_dim = # The dimension of the output layer of the generator
Explanation: Intermezzo: Xavier initialization of weights
Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 249-256).
Implemented in tensorflow, as part of the standard library: https://www.tensorflow.org/api_docs/python/tf/contrib/layers/xavier_initializer
1. Idea:
If the weights in a network are initialized to too small values, then the signal shrinks as it passes through each layer until it’s too tiny to be useful.
If the weights in a network are initialized to too large, then the signal grows as it passes through each layer until it’s too massive to be useful.
2. Goal:
We need initial weight values that are just right for the signal not to explode or vanish during the forward pass
3. Math
Trivial
4. Solution
$v = \frac{2}{n_{in} + n_{out}}$
In the case of a Gaussian distribution, we set the variance to $v$.
In the case of a uniform distribution, we set the interval to $\pm v$ (the default distr. in tensorflow is the uniform).
<sub>http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization</sub>
5.5 Define the model parameters
We will take a brief break to set the values for the parameters of the model. Since we know the dataset we are working with, as well as the shape of the generator and discriminator networks, your task is to fill in the values of the following variables.
End of explanation
d = Discriminator(image_dim, discriminator_hidden_dim, discriminator_output_dim)
for param in d.theta:
print (param)
g = Generator(random_sample_dim, generator_hidden_dim, generator_output_dim)
for param in g.theta:
print (param)
Explanation: 5.6 Check the implementation of the classes
End of explanation
def sample_Z(m, n):
pass
plt.imshow(sample_Z(16, 100), cmap='gray')
Explanation: Drawing samples from the latent space
End of explanation
X = tf.placeholder(tf.float32, name="input", shape=[None, image_dim])
Z = tf.placeholder(tf.float32, name="latent_sample", shape=[None, random_sample_dim])
G_sample, D_loss, G_loss = gan_model_loss(X, Z, d, g)
with tf.variable_scope('optim'):
D_solver = tf.train.AdamOptimizer(name='discriminator').minimize(D_loss, var_list=d.theta)
G_solver = tf.train.AdamOptimizer(name='generator').minimize(G_loss, var_list=g.theta)
saver = tf.train.Saver()
# Some runtime parameters predefined for you
minibatch_size = 128 # The size of the minibatch
num_epoch = 500 # For how many epochs do we run the training
plot_every_epochs = 5 # After this many epochs we will save & display samples of generated images
print_every_batches = 1000 # After this many minibatches we will print the losses
restore = True
checkpoint = 'fc_2layer_e100_2.170.ckpt'
model = 'gan'
model_save_folder = os.path.join('data', 'chkp', model)
print ("Model checkpoints will be saved to:", model_save_folder)
image_save_folder = os.path.join('data', 'model_output', model)
print ("Image samples will be saved to:", image_save_folder)
minibatch_counter = 0
epoch_counter = 0
d_losses = []
g_losses = []
with tf.device("/gpu:0"), tf.Session() as sess:
sess.run(tf.global_variables_initializer())
if restore:
saver.restore(sess, os.path.join(model_save_folder, checkpoint))
print("Restored model:", checkpoint, "from:", model_save_folder)
while epoch_counter < num_epoch:
new_epoch, X_mb = mnist.next_batch(minibatch_size)
_, D_loss_curr = sess.run([D_solver, D_loss],
feed_dict={
X: X_mb,
Z: sample_Z(minibatch_size, random_sample_dim)
})
_, G_loss_curr = sess.run([G_solver, G_loss],
feed_dict={
Z: sample_Z(minibatch_size, random_sample_dim)
})
# Plotting and saving images and the model
if new_epoch and epoch_counter % plot_every_epochs == 0:
samples = sess.run(G_sample, feed_dict={Z: sample_Z(16, random_sample_dim)})
fig = util.plot(samples)
figname = '{}.png'.format(str(minibatch_counter).zfill(3))
plt.savefig(os.path.join(image_save_folder, figname), bbox_inches='tight')
plt.show()
plt.close(fig)
im = util.plot_single(samples[0], epoch_counter)
plt.savefig(os.path.join(image_save_folder, 'single_' + figname), bbox_inches='tight')
plt.show()
chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
saver.save(sess, os.path.join(model_save_folder, chkpname))
# Printing runtime statistics
if minibatch_counter % print_every_batches == 0:
print('Epoch: {}/{}'.format(epoch_counter, num_epoch))
print('Iter: {}/{}'.format(mnist.position_in_epoch, mnist.n))
print('Discriminator loss: {:.4}'. format(D_loss_curr))
print('Generator loss: {:.4}'.format(G_loss_curr))
print()
# Bookkeeping
minibatch_counter += 1
if new_epoch:
epoch_counter += 1
d_losses.append(D_loss_curr)
g_losses.append(G_loss_curr)
# Save the final model
chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
saver.save(sess, os.path.join(model_save_folder, chkpname))
disc_line, = plt.plot(range(len(d_losses[:10000])), d_losses[:10000], c='b', label="Discriminator loss")
gen_line, = plt.plot(range(len(d_losses[:10000])), g_losses[:10000], c='r', label="Generator loss")
plt.legend([disc_line, gen_line], ["Discriminator loss", "Generator loss"])
Explanation: 5.5 Define the model loss -- Vanilla GAN
The objective for the vanilla version of the GAN was defined as follows:
<center>
$\min_G \max_D V(D, G) = \mathbb{E}{x \sim p{real}} [log(D(x))] + \mathbb{E}{z \sim p{z}} [log(1 -D(G(z)))]$
</center>
The function contains a minimax formulation, and cannot be directly optimized. However, if we freeze $D$, we can derive the loss for $G$ and vice versa.
Discriminator loss:
<center>
$p_{fake} = G(p_z)$
</center>
<center>
$D_{loss} = \mathbb{E}{x \sim p{real}} [log(D(x))] + \mathbb{E}{\hat{x} \sim p{fake}} [log(1 -D(\hat{x}))]$
</center>
We estimate the expectation over each minibatch and arrive to the following formulation:
<center>
$D_{loss} = \frac{1}{m}\sum_{i=0}^{m} log(D(x_i)) + \frac{1}{m}\sum_{i=0}^{m} log(1 -D(\hat{x_i}))$
</center>
Generator loss:
<center>
$G_{loss} = - \mathbb{E}{z \sim p{z}} [log(1 -D(G(z)))]$
</center>
<center>
$G_{loss} = \frac{1}{m}\sum_{i=0}^{m} [log(D(G(z)))]$
</center>
Model loss, translated from math
The discriminator wants to:
- maximize the (log) probability of a real image being classified as real,
- minimize the (log) probability of a fake image being classified as real.
The generator wants to:
- maximize the (log) probability of a fake image being classified as real.
Model loss, translated to practical machine learning
The output of the discriminator is a scalar, $p$, which we interpret as the probability that an input image is real ($1-p$ is the probability that the image is fake).
The discriminator takes as input:
a minibatch of images from our training set with a vector of ones for class labels: $D_{loss_real}$.
a minibatch of images from the generator with a vector of zeros for class labels: $D_{loss_fake}$.
a minibatch of images from the generator with a vector of ones for class labels: $G_{loss}$.
The generator takes as input:
a minibatch of vectors sampled from the latent space and transforms them to a minibatch of generated images
Intermezzo: sigmoid cross entropy with logits
We defined the loss of the model as the log of the probability, but we are not using a $log$ function or the model probablities anywhere?
Enter sigmoid cross entropy with logits: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
<center>
<img src="data/img/logitce.png">
From the tensorflow documentation
</center>
Putting it all together
End of explanation |
14,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Default Animations
By passing animate=True to b.show(), b.savefig(), or the final call to b.plot() along with save=filename or show=True will create an animation instead of a static plot.
Alternatively, you can call afig.animate() on the returned afig object returned by b.plot().
Step2: Note that like the rest of the examples below, this is simply the animated version of the exact same call to plot
Providing Times
To override the default times explained above, pass a list or array to the times keyword. For synthetic models, highlight mode will be enabled by default and the provided time does not need to be one that is computed - the value will be interpolated if it is not. However, for plotting meshes, the exact time must be stored in the synthetic meshes or they will not be drawn.
This is especially usefully in cases where you may not want to repeat the first and last frame for a looping gif, or where you want a smoother animation by interpolation. In this example we'll plot all but the last time so that the loop doesn't have a repeated frame.
In this example, times[
Step3: Plotting Options
By default, time highlighting is turned on. See the plotting tutorial for details on 'highlight' and 'uncover' options.
Any additional arguments (colors, linestyle, etc) are passed to the plot call for EACH frame and for EVERY plotting call.
Step4:
Step5: Disabling Fixed Limits
By default, as can be seen above in the mesh animation, the limits of the axes are automatically set so that they are fixed throughout the animation.
Sometimes this may not be desired. By setting xlim='frame' (and/or ylim='frame'), the axes limits are determined automatically per-frame instead of fixed throughout the animation.
For more information and other options see the autofig tutorial on limits
Step6: 3D axes
Plotting to 3D axes are supported. In addition to the options for static plots, animations also support passing a list for the range of elevation/azimuth (in degrees) throughout the animation. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
times = np.linspace(0,1,51)
b.add_dataset('lc', compute_times=times, dataset='lc01')
b.add_dataset('orb', compute_times=times, dataset='orb01')
b.add_dataset('mesh', compute_times=times, dataset='mesh01', columns=['teffs'])
b.run_compute(irrad_method='none')
Explanation: Advanced: Animations
NOTE: this tutorial may take a while to load in a browser as there are many embedded animations and also takes significant time to run and create all animations.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
afig, mplanim = b.plot(y={'orb': 'ws'},
animate=True, save='animations_1.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Default Animations
By passing animate=True to b.show(), b.savefig(), or the final call to b.plot() along with save=filename or show=True will create an animation instead of a static plot.
Alternatively, you can call afig.animate() on the returned afig object returned by b.plot().
End of explanation
afig, mplanim = b.plot(y={'orb': 'ws'},
times=times[:-1:2], animate=True, save='animations_2.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Note that like the rest of the examples below, this is simply the animated version of the exact same call to plot
Providing Times
To override the default times explained above, pass a list or array to the times keyword. For synthetic models, highlight mode will be enabled by default and the provided time does not need to be one that is computed - the value will be interpolated if it is not. However, for plotting meshes, the exact time must be stored in the synthetic meshes or they will not be drawn.
This is especially usefully in cases where you may not want to repeat the first and last frame for a looping gif, or where you want a smoother animation by interpolation. In this example we'll plot all but the last time so that the loop doesn't have a repeated frame.
In this example, times[:-1:2] means skip the last time and only use every-other time.
This option is not available from run_compute - a frame will be drawn for each computed time.
End of explanation
afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True,\
c='r', linestyle=':',\
highlight_marker='s', highlight_color='g',
animate=True, save='animations_3.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Plotting Options
By default, time highlighting is turned on. See the plotting tutorial for details on 'highlight' and 'uncover' options.
Any additional arguments (colors, linestyle, etc) are passed to the plot call for EACH frame and for EVERY plotting call.
End of explanation
afig, mplanim = b['mesh01@model'].plot(times=times[:-1], fc='teffs', ec='None',
animate=True, save='animations_4.gif', save_kwargs={'writer': 'imagemagick'})
Explanation:
End of explanation
afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True, xlim='frame',
animate=True, save='animations_5.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: Disabling Fixed Limits
By default, as can be seen above in the mesh animation, the limits of the axes are automatically set so that they are fixed throughout the animation.
Sometimes this may not be desired. By setting xlim='frame' (and/or ylim='frame'), the axes limits are determined automatically per-frame instead of fixed throughout the animation.
For more information and other options see the autofig tutorial on limits
End of explanation
afig, mplanim = b['orb01@model'].plot(times=times[:-1], projection='3d', azim=[0, 360], elev=[-20,20],
animate=True, save='animations_6.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: 3D axes
Plotting to 3D axes are supported. In addition to the options for static plots, animations also support passing a list for the range of elevation/azimuth (in degrees) throughout the animation.
End of explanation |
14,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Phylogenetics Tutorial
Run a phylogenetics project, all in one place!
This tutorial will cover
Step1: 1. Inititialize a phylogenetics project
This creates a folder in your current working directory (here called 'project1') where your project data will be automatically saved.
This also initializes the project object (you can name it whatever you want).
The project object is a phylopandas dataframe that you can view within the notebook at any point.
As you proceed, it will house all of your sequence, alignment, tree, and ancestor data.
Step2: 2. Read in your starting sequence(s)
You'll need at least one sequence to start with. In this example, we're reading in a single sequence - a human protein called MD2.
Step3: 3. Use BLAST to search for orthologs similar to your seed sequence(s)
The default search returns 100 hits with an e-value cutoff of 0.01 and default BLAST gap penalties.
These parameters can be modified as you wish (to view options, run project.compute_blast? cell below or check out Biopython's NCBI BLAST module)
Step4: 4. Build a phylogenetic tree using PhyML
5. Reconstruct ancestral proteins using PAML
####### start working here ######
- add docs to alignment, clustering, tree building, ASR, gblocks, df_editor, etc.
- rd 1 tutorial - blast, align, tree, ancestors
- rd 2 - add in QC | Python Code:
# import packages
import phylogenetics as phy
import phylogenetics.tools as tools
import phylopandas as ph
import pandas as pd
from phylovega import TreeChart
Explanation: Phylogenetics Tutorial
Run a phylogenetics project, all in one place!
This tutorial will cover:
Project initialization and data input/output
BLASTing and aligning sequences
Building a phylogenetic tree
Reconstructing ancestral proteins (ASR)
Quality control and evaluation for each of the steps above
End of explanation
# intitialize project object and create project folder
project = phy.PhylogeneticsProject(project_dir='tutorial', overwrite=True)
Explanation: 1. Inititialize a phylogenetics project
This creates a folder in your current working directory (here called 'project1') where your project data will be automatically saved.
This also initializes the project object (you can name it whatever you want).
The project object is a phylopandas dataframe that you can view within the notebook at any point.
As you proceed, it will house all of your sequence, alignment, tree, and ancestor data.
End of explanation
# read in seed sequence(s) to project object
project.read_data("md2_seed_sequence.txt", schema="fasta")
Explanation: 2. Read in your starting sequence(s)
You'll need at least one sequence to start with. In this example, we're reading in a single sequence - a human protein called MD2.
End of explanation
# run BLAST search with default settings, returning 100 hits
project.compute_blast(hitlist_size=100)
Explanation: 3. Use BLAST to search for orthologs similar to your seed sequence(s)
The default search returns 100 hits with an e-value cutoff of 0.01 and default BLAST gap penalties.
These parameters can be modified as you wish (to view options, run project.compute_blast? cell below or check out Biopython's NCBI BLAST module)
End of explanation
project.compute_clusters()
project.compute_alignment()
project.compute_gblocks()
project.compute_tree()
project.compute_reconstruction()
# Visualize tree and ancestors using phylovega
from phylovega import TreeChart
# Construct Vega Specification
chart = TreeChart.from_phylopandas(
project.data,
height_scale=300,
# Node attributes
node_size=300,
node_color="#ccc",
# Leaf attributes
leaf_labels="id",
# Edge attributes
edge_width=2,
edge_color="#000",
)
chart
Explanation: 4. Build a phylogenetic tree using PhyML
5. Reconstruct ancestral proteins using PAML
####### start working here ######
- add docs to alignment, clustering, tree building, ASR, gblocks, df_editor, etc.
- rd 1 tutorial - blast, align, tree, ancestors
- rd 2 - add in QC:
1) look at alignment (outsource to aliview)
- remove bad seqs (df_editor)
3) look at tree (outsource to figtree? use viewer?)
- remove long branches (df_editor)
4) ancestors - look at PP, compare before and after QC
End of explanation |
14,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script shows how to use the existing code in opengrid
to create (a) a timeseries plot and (b) a load curve of gas, water or elektricity usage.
Todo
Step1: Script settings
Step2: Fill in here (chosen type [0-2]) what type of data you'd like to plot
Step3: Available data is loaded in one big dataframe, the columns are the sensors of chosen type.
Also, it is rescaled to more "managable" units (to be verified!)
Step4: Tests with the tmpo-based approach | Python Code:
import os
import sys
import inspect
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator
import datetime as dt
import pytz
import pandas as pd
import pdb
import tmpo
from opengrid import config
from opengrid.library import houseprint
c = config.Config()
try:
if os.path.exists(c.get('tmpo', 'data')):
path_to_tmpo_data = c.get('tmpo', 'data')
except:
path_to_tmpo_data = None
%matplotlib inline
plt.rcParams['figure.figsize']=14,8
Explanation: This script shows how to use the existing code in opengrid
to create (a) a timeseries plot and (b) a load curve of gas, water or elektricity usage.
Todo: cleanup this notebook. See Demo_Units_and_Conversions.ipynb
Change numeric "chosen_type" to a textual choice, with lookupvalue of UtilityType in Utilitytypes.
End of explanation
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
hp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data)
Explanation: Script settings
End of explanation
chosen_type = 2
# 0 =water, 1 = gas, 2 = electricity
UtilityTypes = ['water', 'gas','electricity'] # {'water','gas','electricity'}
utility = UtilityTypes[chosen_type] # here 'electricity'
#default values:
FL_units = ['l/day', 'm^3/day ~ 10 kWh/day','Ws/day'] #TODO, to be checked!!
Base_Units = ['l/min', 'kW','kW']
Base_Corr = [1/24.0/60.0, 1/100.0/24.0/3.600 , 3.600/1000.0/24 ] #TODO,check validity of conversions!! # water => (l/day) to (l/hr), gas: (l/day) to (kW), elektr Ws/d to kW
tInt_Units = ['l', 'kWh','kWh'] #units after integration
tInt_Corr = [1/60, 3600/60, 3600/60] #TODO, to be checked!! # water => (l/hr) to (l_cumul/min), gas: kW to (kWh/min)
# units for this utility type
bUnit = Base_Units[chosen_type]
bCorr = Base_Corr[chosen_type]
fl_unit = FL_units[chosen_type]
tiUnit = tInt_Units[chosen_type]
tiCorr = tInt_Corr[chosen_type]
Explanation: Fill in here (chosen type [0-2]) what type of data you'd like to plot:
End of explanation
#load data, only for last 4 weeks
start = pd.Timestamp('now') - pd.Timedelta(days=1)
print('Loading', utility ,'-data and converting from ',fl_unit ,' to ',bUnit,':')
df = hp.get_data(sensortype=utility, resample='min')
df = df.diff() #data is cumulative, we need to take the derivative
df = df[df>0] #filter out negative values
# conversion dependent on type of utility (to be checked!!)
df = df*bCorr
df.info()
# plot timeseries and load duration for each retained sensor
for sensor in df.columns:
FL = hp.find_sensor(sensor).device.key
plt.figure()
ax1=plt.subplot(121)
plt.plot_date(df.index, df[sensor], '-', label="{}".format(FL))
plt.ylabel("{}-usage [{}]".format(utility,bUnit) )
plt.legend()
ax2=plt.subplot(122)
plt.plot(np.sort(df[sensor])[::-1], label=sensor)
plt.ylabel("{}-load curve [{}]".format(utility,bUnit) )
plt.legend()
#Date/Time library
from arrow import Arrow
#Prepare NVD3.js dependencies
from IPython import display as d
import nvd3
nvd3.ipynb.initialize_javascript(use_remote=True)
#Filter sensors and period
sensorlist = ['b28509eb97137e723995838c393d49df', '2923b75daf93e539e37ce5177c0008c5', 'a926bc966f178fc5d507a569a5bfc3d7']
df_water= df[sensorlist][Arrow(2015, 4, 1).datetime:Arrow(2015, 4, 2).datetime].dropna()
#Prepare chart name and timescale in epoch
chart_name = "{}-usage [{}]".format(utility,bUnit)
df_water["epoch"] = [(Arrow.fromdatetime(o) - Arrow(1970, 1, 1)).total_seconds()*1000 for o in df_water.index]
#Create NVD3 chart
water_chart = nvd3.lineChart(x_is_date=True,name=chart_name,height=450,width=800)
for sensor in sensorlist: # df.columns:
series_name = name="{}".format(hp.find_sensor(sensor).device.key)
water_chart.add_serie(name=series_name, x=list(df_water["epoch"]), y=list(df_water[sensor]))
water_chart
Explanation: Available data is loaded in one big dataframe, the columns are the sensors of chosen type.
Also, it is rescaled to more "managable" units (to be verified!)
End of explanation
start = pd.Timestamp('20150201')
end = pd.Timestamp('20150301')
dfcum = hp.get_data(sensortype='electricity', head= start, tail = end)
dfcum.shape
dfcum.columns
dfcum.tail()
dfi = dfcum.resample(rule='900s', how='max')
dfi = dfi.interpolate(method='time')
dfi=dfi.diff()*3600/900
dfi.plot()
#dfi.ix['20150701'].plot()
# This works, but is a bad idea if you have multiple sensors for a FLM: you obtain identical column names.
# df.rename(columns = hp.get_flukso_from_sensor, inplace=True)
# Getting a single sensor
dfi['1a1dac9c2ac155f95c58bf1d4f4b7d01'].plot()
Explanation: Tests with the tmpo-based approach
End of explanation |
14,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generated Data Extrapolation
In this example you will be generating some example data and extrapolate this
using the basic potential extrapolator.
You can start by importing the necessary module components.
Step1: You also need the ability to convert astropyunits and use MayaVi for
visualisation.
Step2: You are going to try and define a 3D cuboid grid of 20x22x20 with ranges in
arcseconds, these parameters can be stored in the following lists and astropy
quantities.
Step3: The generated data will consist of a 2D space with 2 Gaussian spots, one
positive and one negative, on a background of 0.0.
solarbextrapolation.example_data_generator provides many ways to achieve this,
including letting it randomly generate the position, magnitude and size of
each spot.
In this case you will manually define the parameters of each spot as a list,
using percentage units so that the spots will be inside the given ranges of
any generated data
Step4: You generate the data using generate_example_data(...) and create a map with
this using dummyDataToMap(...).
Step5: You can check the resulting generated data by using peek().
Step6: You now simply want to extrapolate using this boundary data, this is achieved
by first creating a potential extrapolator object and then by running the
extrapolate on this to return a Map3D object with the resulting vector field.
Step7: Note that you used enable_numba=True to speed up the computation on systems
with Anaconda numba installed.
You can now get a quick and easy visualisation using the
solarbextrapolation.example_data_generator.visualise tools | Python Code:
# Module imports
from solarbextrapolation.map3dclasses import Map3D
#from solarbextrapolation.potential_field_extrapolator import PotentialExtrapolator
from solarbextrapolation.extrapolators import PotentialExtrapolator
from solarbextrapolation.example_data_generator import generate_example_data, dummyDataToMap
from solarbextrapolation.visualisation_functions import visualise
Explanation: Generated Data Extrapolation
In this example you will be generating some example data and extrapolate this
using the basic potential extrapolator.
You can start by importing the necessary module components.
End of explanation
# General imports
import astropy.units as u
from mayavi import mlab
import numpy as np
Explanation: You also need the ability to convert astropyunits and use MayaVi for
visualisation.
End of explanation
# Input parameters:
arr_grid_shape = [ 20, 22, 20 ] # [ y-size, x-size ]
xrange = u.Quantity([ -10.0, 10.0 ] * u.arcsec)
yrange = u.Quantity([ -11.0, 11.0 ] * u.arcsec)
zrange = u.Quantity([ 0, 20.0 ] * u.arcsec)
Explanation: You are going to try and define a 3D cuboid grid of 20x22x20 with ranges in
arcseconds, these parameters can be stored in the following lists and astropy
quantities.
End of explanation
# Manual Pole Details
#arrA# = [ position, size, maximum strength ]
arrA0 = [ u.Quantity([ 25, 25 ] * u.percent), 10.0 * u.percent, 0.2 * u.T ]
arrA1 = [ u.Quantity([ 75, 75 ] * u.percent), 10.0 * u.percent, -0.2 * u.T ]
Explanation: The generated data will consist of a 2D space with 2 Gaussian spots, one
positive and one negative, on a background of 0.0.
solarbextrapolation.example_data_generator provides many ways to achieve this,
including letting it randomly generate the position, magnitude and size of
each spot.
In this case you will manually define the parameters of each spot as a list,
using percentage units so that the spots will be inside the given ranges of
any generated data:
End of explanation
# Generate the data and make into a map
arr_data = generate_example_data(arr_grid_shape[0:2], xrange, yrange, arrA0, arrA1)
map_boundary = dummyDataToMap(arr_data, xrange, yrange)
Explanation: You generate the data using generate_example_data(...) and create a map with
this using dummyDataToMap(...).
End of explanation
map_boundary.peek()
Explanation: You can check the resulting generated data by using peek().
End of explanation
# Use potential extrapolator to generate field
aPotExt = PotentialExtrapolator(map_boundary, zshape=arr_grid_shape[2], zrange=zrange)
aMap3D = aPotExt.extrapolate(enable_numba=True)
# The extrapolation run time is stored in the meta
floSeconds = np.round(aMap3D.meta['extrapolator_duration'],3)
print('\nextrapolation duration: ' + str(floSeconds) + ' s\n')
Explanation: You now simply want to extrapolate using this boundary data, this is achieved
by first creating a potential extrapolator object and then by running the
extrapolate on this to return a Map3D object with the resulting vector field.
End of explanation
# Visualise the 3D vector field
fig = visualise(aMap3D,
boundary=map_boundary,
volume_units=[1.0*u.arcsec, 1.0*u.arcsec, 1.0*u.Mm],
show_boundary_axes=False,
boundary_units=[1.0*u.arcsec, 1.0*u.arcsec],
show_volume_axes=True,
debug=False)
mlab.show()
Explanation: Note that you used enable_numba=True to speed up the computation on systems
with Anaconda numba installed.
You can now get a quick and easy visualisation using the
solarbextrapolation.visualisation_functions.visualise tools:
End of explanation |
14,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Langevin Integrator Check
The toy_dynamics subpackage provides an integrator called LangevinBAOABIntegrator, which is based on a paper by Leimkuhler and Matthews. This notebook uses the toy_dynamics package to check that the integrator gives the correct position and velocity distribution for a harmonic oscillator.
Note that this particular test does not make use of the trajectory storage tools. It is mainly to show how to use the toy_dynamics subpackage, and has little connection to the main package. The trajectory generated here is extremely long, so in this case we choose not to store it. For an example using toy_dynamics with the storage tools, see ???(to be added later)???.
Imports
Step1: Set up the simulation
The potential energy surface is $V(x,y) = \frac{A[0]}{2}m[0] \omega[0]^2 (x-x_0[0])^2 + \frac{A[1]}{2}m[1] \omega[1]^2 (y-x_0[1])^2$
Step2: Set the initial conditions for the system, and initialize the sample storage.
Step3: Run the simulation
This might take a while...
Step4: Run analysis calculation
Build the 1D histograms we'll use
Step5: Build the 2D histograms
Step6: Run the analysis of the kinetic energy
Step7: Plot our results
Imports for the plots we'll use, as well as some parameter adjustment.
Step8: Now we plot the distributions of the positions and velocities. These should match the exact Gaussians they're paired with.
Step9: In the above, you should see that the exact answer (black line) matches up reasonably well with the calculated results (red line). You might notice that the left graph, for position, doesn't match quite as well as the right graph, for velocities. This is as expected
Step10: Next we plot the 2D histograms for each degree of freedom. These should be reasonably circular.
Step11: The two plots above should look reasonably similar to each other, although the axes will depend on your choice of $m$ and $\omega$.
The final plot is of the kinetic energy information | Python Code:
import openpathsampling.engines.toy as toys
import openpathsampling as paths
import numpy as np
Explanation: Langevin Integrator Check
The toy_dynamics subpackage provides an integrator called LangevinBAOABIntegrator, which is based on a paper by Leimkuhler and Matthews. This notebook uses the toy_dynamics package to check that the integrator gives the correct position and velocity distribution for a harmonic oscillator.
Note that this particular test does not make use of the trajectory storage tools. It is mainly to show how to use the toy_dynamics subpackage, and has little connection to the main package. The trajectory generated here is extremely long, so in this case we choose not to store it. For an example using toy_dynamics with the storage tools, see ???(to be added later)???.
Imports
End of explanation
my_pes = toys.HarmonicOscillator(A=[1.0, 1.0], omega=[2.0, 1.0], x0=[0.0, 0.0])
topology=toys.Topology(n_spatial=2, masses=[1.0,2.0], pes=my_pes)
my_integ = toys.LangevinBAOABIntegrator(dt=0.02, temperature=0.5, gamma=1.0)
sim = toys.Engine(options={'integ' : my_integ, 'n_steps_per_frame' : 10}, topology=topology)
template = toys.Snapshot(coordinates=np.array([[0.0, 0.0]]),
velocities=np.array([[0.1, 0.0]]),
engine=sim)
nframes = 250000
Explanation: Set up the simulation
The potential energy surface is $V(x,y) = \frac{A[0]}{2}m[0] \omega[0]^2 (x-x_0[0])^2 + \frac{A[1]}{2}m[1] \omega[1]^2 (y-x_0[1])^2$
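For reference (with $A = 1$ as set above), the equilibrium marginals this test should reproduce are Gaussian in both position and velocity, $P(x_i) \propto \exp(-\tfrac{\beta}{2} m_i \omega_i^2 (x_i - x_0[i])^2)$ and $P(v_i) \propto \exp(-\tfrac{\beta}{2} m_i v_i^2)$; the plotting cells further down build exactly these exponents from sim.integ.beta, sim.mass and sim.pes.omega.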
End of explanation
sim.current_snapshot = template
x1 = []
x2 = []
v1 = []
v2 = []
Explanation: Set the initial conditions for the system, and initialize the sample storage.
End of explanation
for i in range(nframes):
# generate the next frame (which is sim.n_steps_per_frame timesteps)
snap = sim.generate_next_frame()
# sample the information desired to check distributions
pos = snap.coordinates[0]
vel = snap.velocities[0]
x1.append(pos[0])
x2.append(pos[1])
v1.append(vel[0])
v2.append(vel[1])
Explanation: Run the simulation
This might take a while...
End of explanation
nbins = 50
rrange = (-2.5, 2.5)
rrangex1 = ((min(x1)), (max(x1)))
rrangev1 = ((min(v1)), (max(v1)))
rrangex2 = (min(x2), max(x2))
rrangev2 = (min(v2), max(v2))
dens = True
(x1hist, binsx1) = np.histogram(x1, bins=nbins, range=rrange, density=dens)
(x2hist, binsx2) = np.histogram(x2, bins=nbins, range=rrange, density=dens)
(v1hist, binsv1) = np.histogram(v1, bins=nbins, range=rrange, density=dens)
(v2hist, binsv2) = np.histogram(v2, bins=nbins, range=rrange, density=dens)
Explanation: Run analysis calculation
Build the 1D histograms we'll use:
End of explanation
(hist1, xb1, yb1) = np.histogram2d(x1, v1, [nbins//2, nbins//2], [rrangex1, rrangev1])
(hist2, xb2, yb2) = np.histogram2d(x2, v2, [nbins//2, nbins//2], [rrangex2, rrangev2])
Explanation: Build the 2D histograms:
End of explanation
instantaneous_ke = []
cumulative_ke_1 = []
cumulative_ke_2 = []
tot_ke_1 = 0.0
tot_ke_2 = 0.0
for v in zip(v1, v2):
local_ke_1 = 0.5*sim.mass[0]*v[0]*v[0]
local_ke_2 = 0.5*sim.mass[1]*v[1]*v[1]
instantaneous_ke.append(local_ke_1+local_ke_2)
tot_ke_1 += local_ke_1
tot_ke_2 += local_ke_2
cumulative_ke_1.append(tot_ke_1 / (len(cumulative_ke_1)+1))
cumulative_ke_2.append(tot_ke_2 / (len(cumulative_ke_2)+1))
Explanation: Run the analysis of the kinetic energy:
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
from matplotlib.legend_handler import HandlerLine2D
import numpy as np
pylab.rcParams['figure.figsize'] = 12, 4
matplotlib.rcParams.update({'font.size' : 18})
Explanation: Plot our results
Imports for the plots we'll use, as well as some parameter adjustment.
End of explanation
# Boltzmann info as a in exp(-ax^2)
boltzmann_vel1 = 0.5*sim.integ.beta*sim.mass[0]
boltzmann_pos1 = 0.5*sim.integ.beta*sim.mass[0]*sim.pes.omega[0]**2
plotbinsx1 = [0.5*(binsx1[i]+binsx1[i+1]) for i in range(len(binsx1)-1)]
plotbinsx2 = [0.5*(binsx2[i]+binsx2[i+1]) for i in range(len(binsx2)-1)]
plotbinsv1 = [0.5*(binsv1[i]+binsv1[i+1]) for i in range(len(binsv1)-1)]
plotbinsv2 = [0.5*(binsv2[i]+binsv2[i+1]) for i in range(len(binsv2)-1)]
lx1 = np.linspace(min(plotbinsx1), max(plotbinsx1), 5*len(plotbinsx1))
lx2 = np.linspace(min(plotbinsx2), max(plotbinsx2), 5*len(plotbinsx2))
lv1 = np.linspace(min(plotbinsv1), max(plotbinsv1), 5*len(plotbinsv1))
lv2 = np.linspace(min(plotbinsv2), max(plotbinsv2), 5*len(plotbinsv2))
f, (ax1, av1) = plt.subplots(1,2, sharey=True)
px1 = ax1.plot(lx1, np.sqrt(boltzmann_pos1/np.pi)*np.exp(-boltzmann_pos1*lx1**2), 'k-', plotbinsx1, x1hist, 'r-')
px1 = ax1.set_xlabel('$x$')
pv1 = av1.plot(lv1, np.sqrt(boltzmann_vel1/np.pi)*np.exp(-boltzmann_vel1*lv1**2), 'k-', plotbinsv1, v1hist, 'r-')
pv1 = av1.set_xlabel('$v_x$')
Explanation: Now we plot the distributions of the positions and velocities. These should match the exact Gaussians they're paired with.
End of explanation
boltzmann_vel2 = 0.5*sim.integ.beta*sim.mass[1]
boltzmann_pos2 = 0.5*sim.integ.beta*sim.mass[1]*sim.pes.omega[1]**2
f, (ax2, av2) = plt.subplots(1,2, sharey=True)
px2 = ax2.plot(lx2, np.sqrt(boltzmann_pos2/np.pi)*np.exp(-boltzmann_pos2*lx2**2), 'k-', plotbinsx2, x2hist, 'r-')
px2 = ax2.set_xlabel('$y$')
pv2 = av2.plot(lv2, np.sqrt(boltzmann_vel2/np.pi)*np.exp(-boltzmann_vel2*lv2**2), 'k-', plotbinsv2, v2hist, 'r-')
pv2 = av2.set_xlabel('$v_y$')
Explanation: In the above, you should see that the exact answer (black line) matches up reasonably well with the calculated results (red line). You might notice that the left graph, for position, doesn't match quite as well as the right graph, for velocities. This is as expected: the integrator should impose the correct velocity distribution, but sampling space correctly requires more time to converge.
The plots above check the $x$ degree of freedom; the plots below do the same for $y$.
End of explanation
f, (ah1, ah2) = plt.subplots(1,2)
ah1.set_xlabel('$x$')
ah1.set_ylabel('$v_x$')
ah2.set_xlabel('$y$')
ah2.set_ylabel('$v_y$')
hist1plt = ah1.imshow(hist1.T, extent=[xb1[0],xb1[-1],yb1[0],yb1[-1]], interpolation='nearest')
hist2plt = ah2.imshow(hist2.T, extent=[xb2[0],xb2[-1],yb2[0],yb2[-1]], interpolation='nearest')
Explanation: Next we plot the 2D histograms for each degree of freedom. These should be reasonably circular.
End of explanation
timeseries = [sim.integ.dt*sim.n_steps_per_frame*i for i in range(nframes)]
inst_KE, = plt.plot(timeseries[::nframes//1000], instantaneous_ke[::nframes//1000], 'ko', label='instantaneous KE',markersize=2)
ke_1 = plt.plot(timeseries, cumulative_ke_1, 'r-', label='cumulative KE, x', linewidth=3)
ke_2 = plt.plot(timeseries, cumulative_ke_2, 'b-', label='cumulative KE, y', linewidth=3)
leg = plt.legend(prop={'size' : 12}, handler_map={inst_KE: HandlerLine2D(numpoints=1)})
plt.xlabel('time');
plt.ylabel('kinetic energy');
Explanation: The two plots above should look reasonably similar to each other, although the axes will depend on your choice of $m$ and $\omega$.
The final plot is of the kinetic energy information:
End of explanation |
14,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generators
A generator is essentially an iterator over an object (say a dataset). You get a small chunk of data obtained through "iterating over the larger object" every time you make a call to the generator. Generators might prove to be useful in your implementation of sequential training algorithms where you only require a few samples of your data. For example, in a mini batch stochastic gradient descent, you would need to generate random samples from the dataset for performing an update on your gradient. Generators can be used in such use cases to create memory efficient implementations of your algorithm, since they allow you to perform operations without loading the whole dataset.
Also see PEP 255 (https
Step1: This is a generator that yields the infinite Fibonacci sequence. With every call to fib after the first call, the state of the generator gets updated and the value of b is returned.
To use a generator, we first create an instance of the generator. Use the next keyword to make calls to the generator. Once a generator has been consumed completely, a StopIteration is raised if you try to consume more elements from the generator.
Step2: This example shows how you can represent an infinite sequence in Python without using up all the memory in the world. Next, we will look at a more practical example.
Step3: Now, suppose you want to find the sum of squares of the first 1,000,000 (1 million) integers. You don't believe the analytical formula and want to calculate it directly by summing up all the requisite squares of integers. It is not memory efficient to create a list of 1 million integers just to compute a sum. This is where our custom generator comes to our rescue.
Step4: Although both snippets of code give the same result, the implementation with the generator is more scalable since it uses constant memory.
Generator expressions
See PEP 289 (https
Step5: Both generators and generator expressions can be passed to the tuple, set or list constructors to create equivalent tuples, sets or lists.
NOTE - I strongly recommend using finite generators in such use cases.
Step6: All the rules discussed in the previous sections about conditionals also apply to generator expressions
Step7: Advanced generator stuff
See PEP 380 for details. (https
Step8: Now, we define our master generator. | Python Code:
## Example from PEP 0255
def fib():
a, b = 0, 1
while 1:
yield b
a, b = b, a + b
Explanation: Generators
A generator is essentially an iterator over an object (say a dataset). You get a small chunk of data obtained through "iterating over the larger object" every time you make a call to the generator. Generators might prove to be useful in your implementation of sequential training algorithms where you only require a few samples of your data. For example, in a mini batch stochastic gradient descent, you would need to generate random samples from the dataset for performing an update on your gradient. Generators can be used in such use cases to create memory efficient implementations of your algorithm, since they allow you to perform operations without loading the whole dataset.
Also see PEP 255 (https://www.python.org/dev/peps/pep-0255/). The explanation presented here is quite thorough.
Behaviour of generators
A generator behaves like a function with states. Typically, functions in Python do not have any state information. The variables defined within the function scope are reset/destroyed at the end of every function call. A generator allows you to store intermediate states between calls, so that every subsequent call can resume from the last state of execution. Generators introduced the yield keyword to Python. We will look at a few examples below.
NOTE
Although generators use the def keyword, they are not function objects. Generators are a class in their own right, but are slightly different from function objects.
We take a look at our first generator.
End of explanation
gen1 = fib()
# prints the first 10 Fibonacci numbers
for i in range(10):
print(next(gen1), end=', ')
print("\nPassed!")
Explanation: This is a generator that yields the infinite Fibonacci sequence. With every call to fib after the first call, the state of the generator gets updated and the value of b is returned.
To use a generator, we first create an instance of the generator. Use the next keyword to make calls to the generator. Once a generator has been consumed completely, a StopIteration is raised if you try to consume more elements from the generator.
End of explanation
def nsquared(n):
while True:
yield n ** 2
n = n - 1
if n == 0:
return # correct way to terminate a generator
gen2 = nsquared(10)
for i in gen2:
print(i, end=', ')
try:
next(gen2) # should raise a StopIteration exception
except StopIteration:
    print("\nWe hit the end of the generator, no more elements can be consumed")
except Exception as e:
print("\nOops! Unexpected error", e)
finally:
print("Passed !")
Explanation: This example shows how you can represent an infinite sequence in Python without using up all the memory in the world. Next, we will look at a more practical example.
End of explanation
squared_sum1 = sum([i**2 for i in range(1000001)])
print(squared_sum1)
gen3 = nsquared(1000000)
squared_sum2 = sum(gen3)
print(squared_sum2)
assert squared_sum1 == squared_sum1, "Sums are not equal !"
print("Passed !")
Explanation: Now, suppose you want to find the sum of squares of the first 1,000,000 (1 million) integers. You don't believe the analytical formula and want to calculate it directly by summing up all the requisite squares of integers. It is not memory efficient to create a list of 1 million integers just to compute a sum. This is where our custom generator comes to our rescue.
End of explanation
gen4 = nsquared(10)
print(gen4)
gen5 = (i**2 for i in range(11))
print(gen5)
Explanation: Although both snippets of code give the same result, the implementation with the generator is more scalable since it uses constant memory.
Generator expressions
See PEP 289 (https://www.python.org/dev/peps/pep-0289/).
Generator expressions merge the concepts of both generators and list comprehensions. The syntax is almost similar to list comprehensions but the returned result is a generator instead of a list.
End of explanation
# note that the generator has to be reinitialized once it has been consumed
gen4 = nsquared(10)
print(tuple(gen4))
gen4 = nsquared(10)
print(list(gen4))
gen4 = nsquared(10)
print(set(gen4))
print(tuple(i**2 for i in range(11)))
print(list(i**2 for i in range(11)))
print(set(i**2 for i in range(11)))
Explanation: Both generators and generator expressions can be passed to the tuple, set or list constructors to create equivalent tuples, sets or lists.
NOTE - I strongly recommend using finite generators in such use cases.
End of explanation
import numpy as np
print(list(i**2 for i in range(11) if i <=5))
print(list(i**2 if i <=5 else 1 for i in range(11)))
mat = list(i**2 + j**2 if i < j else i + j for i in range(3) for j in range(3))
print(np.array(mat).reshape(3,3))
Explanation: All the rules discussed in the previous sections about conditionals also apply to generator expressions
End of explanation
# Same function, redefined here for clarity
def fib(n):
a, b = 0, 1
count = 0
while 1:
yield b
count += 1
if count == n:
return
a, b = b, a + b
def geom(n):
a = 1
count = 0
while True:
yield a
count += 1
if count == n:
return
a = a * 2
def constant(n):
count = 0
while True:
yield -1
count += 1
if count == n:
return
Explanation: Advanced generator stuff
See PEP 380 for details. (https://www.python.org/dev/peps/pep-0380/)
Python 3 introduced the concept of one generator delegating to sub-generators. This is achieved with the use of the yield from keyword.
Suppose you want to create a fancy new sequence by concatenating 3 sequences - the Fibonacci sequence, a geometric series and a constant series. You can do this by creating a generator that delegates each of the subsequences to their own generators. To do this, we first create our subsequence generators.
End of explanation
def master_sequence(n):
g1 = fib(n)
g2 = geom(n)
g3 = constant(n)
count = 0
yield from g1
yield from g2
yield from g3
master_gen = master_sequence(5) # creates a sequence of length 15
print(list(master_gen))
Explanation: Now, we define our master generator.
End of explanation |
14,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python analysis of output from MATLAB CNMF-E implementation
Analyze tif stacks using batch_cnmf.py and then open the resultant analysis products using a workflow like the one shown below.
These results were generated using the command
Step1: First load up all out.mat files that CNMF-E generated
Step2: Then let's plot all the results
All of our results are stored in the out.mat file that we loaded above into a dictionary stored within a list called results_files.
The most relevant keys in each results dictionary are | Python Code:
import sys
import os
from matplotlib import pyplot as plt
import scipy.sparse as sparse
import scipy.io as sio
import numpy as np
import python_utils as utils
%matplotlib inline
Explanation: Python analysis of output from MATLAB CNMF-E implementation
Analyze tif stacks using batch_cnmf.py and then open the resultant analysis products using a workflow like the one shown below.
These results were generated using the command:
python batch_cnmf.py '/home/deisseroth/Data/Test2/'
Contours are plotted using a function from Caiman: https://github.com/flatironinstitute/CaImAn.
Generally, Caiman analysis functions can be mostly used with results generated from batch_cnmf.py.
End of explanation
# What was the base directory where we ran the batch analysis?
base_path = '/home/deisseroth/Data/Test2/'
# Load up all processed files found in base directory
results_files = [] # Loaded mat files containing CNMF-E output
results_names = [] # Name of folders where results were saved
for root, dirs, files in os.walk(base_path):
if 'out.mat' in files:
idx = len(base_path.split(os.sep))
name = root.split(os.sep)[idx]
results_files.append(sio.loadmat(root + os.sep + 'out.mat'))
results_names.append(name)
print(name)
Explanation: First load up all out.mat files that CNMF-E generated
End of explanation
# Look at the path of the first loaded results file
print(results_names[0], results_files[0]['file'])
# Neurons to plot
neurons_idx = 10
# Frames to plot
frames = 2000
# Make a plot showing some time series traces
for name, results in zip(results_names, results_files):
plt.figure(figsize=(15,10))
plt.title(name)
plt.axis('off')
S = np.array(results['S'].todense()) # Inferred spikes
C = np.array(results['C']) # Denoised fluorescence
F = np.array(results['C_raw']) # Raw fluorescence
for idx in range(np.shape(F)[0]):
plt.plot(utils.normalize(S[idx, :frames], percentile=False) + idx, 'r')
plt.plot(utils.normalize(F[idx, :frames]) + idx, 'k')
plt.plot(utils.normalize(C[idx, :frames]) + idx, 'b')
if idx > neurons_idx:
break
# Make a plot showing contours from each dataset
for name, results in zip(results_names, results_files):
plt.figure(figsize=(10,10))
# Call contour plotting function from Caiman (with our results from MATLAB!)
coordinates = utils.plot_contours(results['A'].todense(),
results['Cn'],
display_numbers=False, maxthr=.6,
cmap='gray', colors='r')
plt.title(name)
plt.axis('off')
Explanation: Then let's plot all the results
All of our results are stored in the out.mat file that we loaded above into a dictionary stored within a list called results_files.
The most relevant keys in each results dictionary are:
- A -- Sparse matrix of spatial filters (e.g. for use plotting contours below)
- S -- Deconvolved spike trains estimated for each neuron
- C -- Denoised calcium signals computed for each neuron
- C_raw -- Raw calcium signals extracted from each neuron
- file -- File that was analyzed to generate this results file
Now that we've loaded it we can look at the results of our analysis as illustrated below.
End of explanation |
14,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
Make sure to have all the required software installed after proceeding.
For installation help, please consult the school guide.
Python Basics
Step1: Basic Math Operations
Step2: Data Structures
Step3: Exercise 0.1
Use L[i
Step4: Loops and Indentation
Step5: Exercise 0.2
Can you then predict the output of the following code?
Step6: Control Flow
Step7: Functions
Step8: Exercise 0.3
Note that the previous code allows the hour to be less than 0 or more than 24. Change the code in order to
indicate that the hour given as input is invalid. Your output should be something like
Step9: Profiling
Step10: Debugging in Python
Step11: Exceptions
for a complete list of built-in exceptions, see http
Step12: Extending basic Functionalities with Modules
Step13: Organizing your Code with your own modules
See details in guide
Matplotlib – Plotting in Python
Step14: Exercise 0.5
Try running the following on Jupyter, which will introduce you to some of the basic numeric and plotting
operations.
Step15: Exercise 0.6
Run the following example and lookup the ptp function/method (use the ? functionality in Jupyter)
Step16: Exercise 0.7
Consider the following approximation to compute an integral
\begin{equation}
\int_0^1 f(x) dx \approx \sum_{i=0}^{999} \frac{f(i/1000)}{1000}
\end{equation}
Use numpy to implement this for $f(x) = x^2$. You should not need to use any loops. Note that integer division in Python 2.x returns the floor division (use floats – e.g. 5.0/2.0 – to obtain rationals). The exact value is 1/3. How close
is the approximation?
Step17: Exercise 0.8
In the rest of the school we will represent both matrices and vectors as numpy arrays. You can create arrays
in different ways, one possible way is to create an array of zeros.
Step18: You can check the shape and the data type of your array using the following commands
Step19: This shows you that “a” is a 3*2 array of type float64. By default, arrays contain 64 bit floating point numbers. You
can specify the particular array type by using the keyword dtype.
Step20: You can also create arrays from lists of numbers
Step21: Exercise 0.9
You can multiply two matrices by looping over both indexes and multiplying the individual entries.
Step22: This is, however, cumbersome and inefficient. Numpy supports matrix multiplication with the dot function | Python Code:
print('Hello World!')
Explanation: Installation
Make sure to have all the required software installed after proceeding.
For installation help, please consult the school guide.
Python Basics
End of explanation
print(3 + 5)
print(3 - 5)
print(3 * 5)
print(3 ** 5)
# Observation: this code gives different results for python2 and python3
# because of the behaviour for the division operator
print(3 / 5.0)
print(3 / 5)
# for compatibility, make sure to use the follow statement
from __future__ import division
print(3 / 5.0)
print(3 / 5)
Explanation: Basic Math Operations
End of explanation
countries = ['Portugal','Spain','United Kingdom']
print(countries)
Explanation: Data Structures
End of explanation
countries[0:2]
Explanation: Exercise 0.1
Use L[i:j] to return the countries in the Iberian Peninsula.
End of explanation
i = 2
while i < 10:
print(i)
i += 2
for i in range(2,10,2):
print(i)
a=1
while a <= 3:
print(a)
a += 1
Explanation: Loops and Indentation
End of explanation
a=1
while a <= 3:
print(a)
a += 1
Explanation: Exercise 0.2
Can you then predict the output of the following code?:
End of explanation
hour = 16
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
print('Good evening!')
Explanation: Control Flow
End of explanation
def greet(hour):
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
print('Good evening!')
Explanation: Functions
End of explanation
greet(50)
greet(-5)
Explanation: Exercise 0.3
Note that the previous code allows the hour to be less than 0 or more than 24. Change the code in order to
indicate that the hour given as input is invalid. Your output should be something like:
greet(50)
Invalid hour: it should be between 0 and 24.
greet(-5)
Invalid hour: it should be between 0 and 24.
End of explanation
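One possible way to add the check (a sketch; the message is the one given in the expected output above):
def greet_checked(hour):
    if hour < 0 or hour > 24:
        print('Invalid hour: it should be between 0 and 24.')
    elif hour < 12:
        print('Good morning!')
    elif hour >= 12 and hour < 20:
        print('Good afternoon!')
    else:
        print('Good evening!')
greet_checked(50)
greet_checked(-5)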
%prun greet(22)
Explanation: Profiling
End of explanation
def greet2(hour):
if hour < 12:
print('Good morning!')
elif hour >= 12 and hour < 20:
print('Good afternoon!')
else:
import pdb; pdb.set_trace()
print('Good evening!')
# try: greet2(22)
Explanation: Debugging in Python
End of explanation
raise ValueError("Invalid input value.")
while True:
try:
x = int(input("Please enter a number: "))
break
except ValueError:
print("Oops! That was no valid number. Try again...")
Explanation: Exceptions
for a complete list of built-in exceptions, see http://docs.python.org/2/library/exceptions.html
End of explanation
import numpy as np
np.var?
np.random.normal?
Explanation: Extending basic Functionalities with Modules
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
X = np.linspace(-4, 4, 1000)
plt.plot(X, X**2*np.cos(X**2))
plt.savefig("simple.pdf")
Explanation: Organizing your Code with your own modules
See details in guide
Matplotlib – Plotting in Python
End of explanation
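A minimal sketch of the "own modules" idea (the file name my_tools.py is just an example, not taken from the guide):
# write a tiny module to disk, then import it and call its function
with open('my_tools.py', 'w') as f:
    f.write('def my_print(arg):\n    print("hello:", arg)\n')
import my_tools
my_tools.my_print('world')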
# This will import the numpy library
# and give it the np abbreviation
import numpy as np
# This will import the plotting library
import matplotlib.pyplot as plt
# Linspace will return 1000 points,
# evenly spaced between -4 and +4
X = np.linspace(-4, 4, 1000)
# Y[i] = X[i]**2
Y = X**2
# Plot using a red line ('r')
plt.plot(X, Y, 'r')
# arange returns integers ranging from -4 to +4
# (the upper argument is excluded!)
Ints = np.arange(-4,5)
# We plot these on top of the previous plot
# using blue circles (o means a little circle)
plt.plot(Ints, Ints**2, 'bo')
# You may notice that the plot is tight around the line
# Set the display limits to see better
plt.xlim(-4.5,4.5)
plt.ylim(-1,17)
plt.show()
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(0, 4 * np.pi, 1000)
C = np.cos(X)
S = np.sin(X)
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Explanation: Exercise 0.5
Try running the following on Jupyter, which will introduce you to some of the basic numeric and plotting
operations.
End of explanation
A = np.arange(100)
# These two lines do exactly the same thing
print(np.mean(A))
print(A.mean())
np.ptp?
Explanation: Exercise 0.6
Run the following example and lookup the ptp function/method (use the ? functionality in Jupyter)
End of explanation
def f(x):
return(x**2)
sum([f(x*1./1000)/1000 for x in range(0,1000)])
Explanation: Exercise 0.7
Consider the following approximation to compute an integral
\begin{equation}
\int_0^1 f(x) dx \approx \sum_{i=0}^{999} \frac{f(i/1000)}{1000}
\end{equation}
Use numpy to implement this for $f(x) = x^2$. You should not need to use any loops. Note that integer division in Python 2.x returns the floor division (use floats – e.g. 5.0/2.0 – to obtain rationals). The exact value is 1/3. How close
is the approximation?
End of explanation
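A loop-free variant with numpy, as the exercise suggests (one possible sketch):
import numpy as np
x = np.arange(1000) / 1000.0
print(np.sum(x ** 2) / 1000.0)  # about 0.3328, slightly below the exact value 1/3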
import numpy as np
m = 3
n = 2
a = np.zeros([m,n])
print(a)
Explanation: Exercise 0.8
In the rest of the school we will represent both matrices and vectors as numpy arrays. You can create arrays
in different ways, one possible way is to create an array of zeros.
End of explanation
print(a.shape)
print(a.dtype.name)
Explanation: You can check the shape and the data type of your array using the following commands:
End of explanation
a = np.zeros([m,n],dtype=int)
print(a.dtype)
Explanation: This shows you that “a” is a 3*2 array of type float64. By default, arrays contain 64 bit floating point numbers. You
can specify the particular array type by using the keyword dtype.
End of explanation
a = np.array([[2,3],[3,4]])
print(a)
Explanation: You can also create arrays from lists of numbers:
End of explanation
a = np.array([[2,3],[3,4]])
b = np.array([[1,1],[1,1]])
a_dim1, a_dim2 = a.shape
b_dim1, b_dim2 = b.shape
c = np.zeros([a_dim1,b_dim2])
for i in range(a_dim1):
for j in range(b_dim2):
for k in range(a_dim2):
c[i,j] += a[i,k]*b[k,j]
print(c)
Explanation: Exercise 0.9
You can multiply two matrices by looping over both indexes and multiplying the individual entries.
End of explanation
d = np.dot(a,b)
print(d)
a = np.array([1,2])
b = np.array([1,1])
np.dot(a,b)
np.outer(a,b)
I = np.eye(2)
x = np.array([2.3, 3.4])
print(I)
print(np.dot(I,x))
A = np.array([ [1, 2], [3, 4] ])
print(A)
print(A.T)
Explanation: This is, however, cumbersome and inefficient. Numpy supports matrix multiplication with the dot function:
End of explanation |
14,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
python introduction
analyzing patient data
Step1: data types
Step2: indices start at 0
and intervals exclude the last value
Step3: calculations on arrays of values
Step4: importing modules
what functions (or methods) are available? name. then press tab
help on functions (or methods)
Step5: stacking arrays
Step6: slicing strings
notation i
Step7: for loop syntax
python
for variable in collection
Step8: important
Step9: reverse a string
Step10: storing multiple values in a list
Step11: mutable and immutable objects
"name" contains the string 'Darwin', and strings are immutable.
lists and arrays are mutable. Functions that operate on them can change them in place
Step12: A list of lists is not the same as an array
Step13: deep copy versus simple copy
Step16: more on mutable versus immutable objects
Step17: functions can change mutable arguments in place
Step18: for R users, the following code does not do what you might think
Step19: more on lists and strings
splitting a string into a list
substitutions
Step20: tuples
tuples are immutable , unlike lists. useful for
- array sizes
Step21: adding and multiplying lists
and remember that lists are mutable
Step22: use list to copy (but not deep-copy)
Step23: operator overloading
Step24: list comprehension
general syntax | Python Code:
import numpy
data = numpy.loadtxt(fname='inflammation-01.csv', delimiter=',')
data
%whos
print(data)
Explanation: python introduction
analyzing patient data
End of explanation
print(type(data))
print(data.dtype)
print(data.shape)
Explanation: data types
End of explanation
print('first value in data:', data[0, 0])
small = data[:3, 36:]
small
data[:3, 36:]
print('small is:')
print(small)
Explanation: indices start at 0
and intervals exclude the last value: [i,j[
End of explanation
doubledata = data * 2.0
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
tripledata = doubledata + data
print('tripledata:')
print(tripledata[:3, 36:])
numpy.mean(data)
Explanation: calculations on arrays of values
End of explanation
import time
print(time.ctime())
time.
?time.strptime
time.strftime?
t1 = time.strptime("08/12/18 07:26:34 PM", '%m/%d/%y %H:%M:%S %p')
print(t1)
t2 = time.mktime(t1) # in seconds since 1970
print(t2/60/60/24) # printed: in days since 1970
time.strftime('%Y-%m-%d', t1)
data.any?
numpy.mean?
maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data)
print('maximum inflammation:', maxval)
print('minimum inflammation:', minval)
print('standard deviation:', stdval)
patient_0 = data[0, :] # 0 on the first axis, everything on the second
print('maximum inflammation for patient 0:', patient_0.max())
type(patient_0)
print('maximum inflammation for patient 2:', numpy.max(data[2, :]))
print(numpy.mean(data, axis=0))
print(numpy.mean(data, axis=0).shape)
print(numpy.mean(data, axis=1))
import matplotlib.pyplot
image = matplotlib.pyplot.imshow(data)
# % matplotlib inline
matplotlib.pyplot.show()
ave_inflammation = numpy.mean(data, axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)
matplotlib.pyplot.show()
max_plot = matplotlib.pyplot.plot(numpy.max(data, axis=0))
matplotlib.pyplot.show()
min_plot = matplotlib.pyplot.plot(numpy.min(data, axis=0))
matplotlib.pyplot.show()
import numpy
import matplotlib.pyplot
data = numpy.loadtxt(fname='inflammation-01.csv', delimiter=',')
datamin = numpy.min(data)
datamax = numpy.max(data)
fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1) # add_subplot is a method for a Figure object
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.set_ylim(datamin, datamax+0.1)
axes1.plot(numpy.mean(data, axis=0))
axes2.set_ylabel('max')
axes2.set_ylim(datamin, datamax+0.1)
axes2.plot(numpy.max(data, axis=0))
axes3.set_ylabel('min')
axes3.set_ylim(datamin, datamax+0.1)
axes3.plot(numpy.min(data, axis=0), drawstyle='steps-mid')
fig.tight_layout()
matplotlib.pyplot.show()
Explanation: importing modules
what functions (or methods) are available? name. then press tab
help on functions (or methods): name? or ?name
End of explanation
import numpy
A = numpy.array([[1,2,3], [4,5,6], [7, 8, 9]])
print('A = ')
print(A)
B = numpy.hstack([A, A])
print('B = ')
print(B)
C = numpy.vstack([A, A])
print('C = ')
print(C)
Explanation: stacking arrays
End of explanation
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4])
print(element[4:]); print(element[:])
element[3:3]
data[3:3, 4:4]
data[3:3, :]
word = 'lead'
for char in word:
print(char)
Explanation: slicing strings
notation i:j (think [i,j[ to exclude the last) with indices starting at 0, like for arrays
End of explanation
length = 0
for vowel in 'aeiou':
length = length + 1
print('There are', length, 'vowels')
Explanation: for loop syntax
python
for variable in collection:
statement 1 to do something
statement 2 to do another thing
statement 3, indented similarly
End of explanation
print('The variable "vowel" still exists: equals', vowel)
print(len('aeiou'))
for i in range(1, 40):
print(i, end=" ")
print(type(range(1,40)))
range(3,1000)
for i in range(3, 15, 4):
print(i)
print(5 ** 3)
result = 1
for i in range(0, 3):
result = result * 5
print(result)
Explanation: important: variables created inside the for loop still exist outside after the loop is finished
End of explanation
newstring = ''
oldstring = 'Newton'
for char in oldstring:
newstring = char + newstring
print(newstring)
Explanation: reverse a string
End of explanation
odds = [1,3, 5, 7]
print('odds are:', odds)
print('first and last:', odds[0], odds[-1])
for number in odds:
print(number)
names = ['Newton', 'Darwing', 'Turing'] # typo in Darwin's name
print('names is originally:', names)
names[1] = 'Darwin' # correct the name
print('final value of names:', names)
name = 'Darwin'
print("letter indexed 0:", name[0])
name[0] = 'd'
name = "darwin"
name
Explanation: storing multiple values in a list
End of explanation
a = "Darwin"
b = a
print("b=",b)
b = "Turing" # does not change a, because a has immutable value
print("now b=",b,"\nand a=",a)
a = [10,11]
b = a
print("b[1]=", b[1]) # changes the value that b binds to, so changes a too
b[1] = 22
print("b=", b, "\nand a=",a)
import copy
a = [10,11]
b = copy.copy(a)
print("b[1]=", b[1]) # changes the value that b binds to, so changes a too
b[1] = 22
print("b=", b, "\nand a=",a)
Explanation: mutable and immutable objects
"name" contains the string 'Darwin', and strings are immutable.
lists and arrays are mutable. Functions that operate on them can change them in place
End of explanation
x = [['pepper', 'zucchini', 'onion'],
['cabbage', 'lettuce', 'garlic'],
['apple', 'pear', 'banana']]
print(x)
print(x[0])
print(x[0][0])
print([x[0]])
Explanation: A list of lists is not the same as an array
End of explanation
a = [[10,11],[20,21]]
print(a)
b = copy.copy(a)
b[0][0] = 50
print("b=",b,"and a=",a)
b[0] = [8,9]
print("b=",b,"and a=",a)
b = copy.deepcopy(a)
print("now b is back to a: ",b)
b[0][0] = 8
print("b=",b,"and a=",a)
Explanation: deep copy versus simple copy
End of explanation
def add1_scalar(x):
adds 1 to scalar input
x += 1
print("after add1_scalar:",x)
def add1_array(x):
adds 1 to the first element of array input
x[0] += 1
print("after add1_array:",x)
a=5; print(a)
add1_scalar(a)
print("and now a =",a) # a was not modified because it is immutable
b=[5]; print(b)
add1_array(b)
print("and now b =",b) # b was modified in place because it is mutable: array
add1_scalar?
Explanation: more on mutable versus immutable objects: functions can change mutable arguments in place. This is a huge deal!
End of explanation
print('odds before:', odds)
odds.append(11)
print('odds after adding a value:', odds)
Explanation: functions can change mutable arguments in place:
beware
opportunities to save a lot of memory (and time)
how to modify lists
End of explanation
odds = [odds, 11]
print('odds=',odds)
odds = [1, 3, 5, 7, 11]
del odds[0]
print('odds after removing the first element:', odds)
odds.reverse()
print('odds after reversing:', odds)
a = odds.pop()
print('odds after popping last element:', odds)
print("this last element was",a)
Explanation: for R users, the following code does not do what you might think:
End of explanation
taxon = "Drosophila melanogaster"
genus = taxon[0:10]
print("genus:", genus)
species = taxon[11:]
print("species:", species)
gslist = taxon.split(' ')
print(gslist)
print("after splitting at each space: genus=",
gslist[0],", species=",gslist[1], sep="")
print(taxon)
print(taxon.replace(' ','_'))
print(taxon) # has not changed
mystring = "\t hello world\n \n"
mystring
print('here is mystring: "' + mystring + '"')
print('here is mystring.strip(): "' + mystring.strip() + '"')
print('here is mystring.rstrip(): "' + mystring.rstrip() + '"') # tRailing only
" abc\n \n\t ".strip()
chromosomes = ["X", "Y", "2", "3", "4"]
autosomes = chromosomes[2:5]
print("autosomes:", autosomes)
last = chromosomes[-1]
print("last:", last)
last = 21
print("last:", last)
chromosomes # "last" was a scalar: immutable, so modifying it does not modify "chromosomes"
a = "Observation date: 02-Feb-2013"
b = [["fluorine", "F"], ["chlorine", "Cl"], ["bromine", "Br"], ["iodine", "I"], ["astatine", "At"]]
print(a[-4:])
print(b[-2:])
months = ["jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec"]
print("10:12 gives:", months[10:12])
print("10:len(months) gives:", months[10:len(months)])
print("10: gives", months[10:])
Explanation: more on lists and strings
splitting a string into a list
substitutions
End of explanation
left = 'L'
right = 'R'
temp = left
left = right
right = temp
print("left =",left,"and right =",right)
left = 'L'
right = 'R'
(left, right) = (right, left)
print("left =",left,"and right =",right)
left, right = right, left
print("now left =",left,"and right =",right)
Explanation: tuples
tuples are immutable, unlike lists. useful for
- array sizes: (60,40) earlier
- types of arguments to functions: like (float64, int64) for instance
- functions can return multiple objects in a tuple
- a tuple with a single value, say 6.5, is noted like this: (6.5,)
- they come in very handy for exchanges:
End of explanation
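A quick sketch of the "functions can return multiple objects in a tuple" point:
def min_max(values):
    return min(values), max(values)   # the two results are packed into a tuple
lo, hi = min_max([3, 1, 4, 1, 5])     # and unpacked again on assignment
print(lo, hi)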
odds = [1, 3, 5, 7]
primes = odds
primes += [2]
print('primes:', primes)
print('odds:', odds)
Explanation: adding and multiplying lists
and remember that lists are mutable
End of explanation
odds = [1, 3, 5, 7]
primes = list(odds)
primes += [11]
print('primes:', primes)
print('odds:', odds)
a = [[10,11],[20,21]]
b = list(a)
b[0][0] = 50
print("b=",b,"\na=",a)
odds += [9,11]
print("add = concatenate for lists: odds =", odds)
counts = [2, 4, 6, 8, 10]
repeats = counts * 2
print("multiply = repeat for lists:\n", repeats)
Explanation: use list to copy (but not deep-copy):
End of explanation
print(sorted(repeats)) # all integers
print(sorted([10,2.5,4])) # all numerical
print(sorted(["jan","feb","mar","dec"])) # all strings
print(sorted(["jan",20,1,"dec"])) # error
Explanation: operator overloading: the same function does different things depending on its arguments.
here: + and * can do different things
End of explanation
[num+5 for num in counts]
Explanation: list comprehension
general syntax: [xxx for y in z], where xxx is typically some function of y.
shortcut that executes a for loop on one line. here is one example:
End of explanation |
14,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Intermediate Python - List Comprehension
In this Colab, we will discuss list comprehension, an extremely useful and idiomatic way to process lists in Python.
List Comprehension
List comprehension is a compact way to create a list of data. Say you want to create a list containing ten random numbers. One way to do this is to just hard-code a ten-element list.
Step2: Note
Step3: This looks much nicer. Less repetition is always a good thing.
Note
Step4: Let's start by looking at the "for _ in range()" part. This looks like the for loop that we are familiar with. In this case, it is a loop over the range from zero through nine.
The strange part is the for doesn't start the expression. We are used to seeing a for loop with a body of statements indented below it. In this case, the body of the for loop is to the left of the for keyword.
This is the signature of list comprehension. The body of the loop comes first and the for range comes last.
for isn't the only option for list comprehensions. You can also add an if condition.
Step5: You can add multiple if statements by using boolean operators.
Step6: You can even have multiple loops chained in a single list comprehension. The left-most loop is the outer loop and the subsequent loops are nested within. However, when cases become sufficiently complicated, we recommend using standard loop notation, to enhance code readability.
Step7: Exercises
Exercise 1
Create a list expansion that builds a list of numbers between 5 and 67 (inclusive) that are divisible by 7 but not divisible by 3.
Student Solution
Step8: Exercise 2
Use list comprehension to find the lengths of all the words in the following sentence.
Student Solution | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/00_prerequisites/01_intermediate_python/03-list-comprehension.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
End of explanation
import random
[
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
random.randint(0, 100),
]
Explanation: Intermediate Python - List Comprehension
In this Colab, we will discuss list comprehension, an extremely useful and idiomatic way to process lists in Python.
List Comprehension
List comprehension is a compact way to create a list of data. Say you want to create a list containing ten random numbers. One way to do this is to just hard-code a ten-element list.
End of explanation
import random
my_list = []
for _ in range(10):
my_list.append(random.randint(0, 100))
my_list
Explanation: Note: In the code above, we've introduced the random module. random is a Python package that comes as part of the standard Python distribution. To use Python packages we rely on the import keyword.
That's pretty intensive, and requires a bit of copy-paste work. We could clean it up with a for loop:
End of explanation
import random
my_list = [random.randint(0, 100) for _ in range(10)]
my_list
Explanation: This looks much nicer. Less repetition is always a good thing.
Note: Did you notice the use of the underscore to consume the value returned from range? You can use this when you don't actually need the range value, and it saves Python from assigning it to memory.
There is an even more idiomatic way of creating this list of numbers in Python. Here is an example of a list comprehension:
End of explanation
[x for x in range(10) if x % 2 == 0]
Explanation: Let's start by looking at the "for _ in range()" part. This looks like the for loop that we are familiar with. In this case, it is a loop over the range from zero through nine.
The strange part is the for doesn't start the expression. We are used to seeing a for loop with a body of statements indented below it. In this case, the body of the for loop is to the left of the for keyword.
This is the signature of list comprehension. The body of the loop comes first and the for range comes last.
for isn't the only option for list comprehensions. You can also add an if condition.
End of explanation
print([x for x in range(10) if x % 2 == 0 and x % 3 == 0])
print([x for x in range(10) if x % 2 == 0 or x % 3 == 0])
Explanation: You can add multiple if statements by using boolean operators.
End of explanation
[(x, y) for x in range(5) for y in range(3)]
Explanation: You can even have multiple loops chained in a single list comprehension. The left-most loop is the outer loop and the subsequent loops are nested within. However, when cases become sufficiently complicated, we recommend using standard loop notation, to enhance code readability.
End of explanation
### YOUR CODE HERE ###
Explanation: Exercises
Exercise 1
Create a list expansion that builds a list of numbers between 5 and 67 (inclusive) that are divisible by 7 but not divisible by 3.
Student Solution
End of explanation
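One possible solution for Exercise 1 (shown as a worked example; the student cell above is left blank):
print([n for n in range(5, 68) if n % 7 == 0 and n % 3 != 0])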
sentence = "I love list comprehension so much it makes me want to cry"
words = sentence.split()
print(words)
### YOUR CODE GOES HERE ###
Explanation: Exercise 2
Use list comprehension to find the lengths of all the words in the following sentence.
Student Solution
End of explanation |
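One possible solution for Exercise 2, reusing the sentence defined above:
sentence = "I love list comprehension so much it makes me want to cry"
print([len(word) for word in sentence.split()])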
14,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Intro
Regression
Data
Residuals and Cost
Regression (Scipy)
Polynomial Regression
Solve (SKlearn)
Gradient Descent
Training Animation
Intro
Exploratory notebook related to the theory and concepts behind linear regression. Includes toy examples implementation and visualization.
Regression
Regression is a supervised learning task concerned with the prediction of continuous numerical values. It contrasts with Classification, which is instead about the prediction of classes (categorical values).
Step1: Data
Step2: Residuals and Cost
Residuals are the differences between the true and predicted values along the sole prediction axis. Mean-squared-error is a common way to compute the quality of a prediction set compared to the true set.
Step3: Regression (Scipy)
Step4: Polynomial Regression
Using polynomial features can help to better model our data. Polynomial features can be obtained by raising to the power or by combining our original base features. Notice that the resulting polynomial function, even if not linear for the features, is a linear function of our target coefficients, so we are still dealing with a linear model.
Step5: Solve (SKlearn)
Step6: Gradient Descent
Fit regression line using gradient descent.
Link 1
Link 2
Step7: Training Animation | Python Code:
import numpy as np
import seaborn as sns
import pandas as pd
from matplotlib import pyplot as plt, animation, rc
%matplotlib notebook
#%matplotlib inline
Explanation: Table of Contents
Intro
Regression
Data
Residuals and Cost
Regression (Scipy)
Polynomial Regression
Solve (SKlearn)
Gradient Descent
Training Animation
Intro
Exploratory notebook related to the theory and concepts behind linear regression. Includes toy examples implementation and visualization.
Regression
Regression is a supervised learning task concerned with the prediction of continuous numerical values. It contrasts with Classification, which is instead about the prediction of classes (categorical values).
End of explanation
# class for generic line defined by a slope and intercept
class Line:
def __init__(self, slope, intercept):
self.slope = slope
self.intercept = intercept
self.line_fun = lambda x : x * self.slope + self.intercept
# get linspaces sample in the specified interval
def get_sample(self, n, start=0.0, stop=1.0, noise=None):
x = np.linspace(start, stop, n)
y = self.get_y(x, noise=noise)
return (x, y)
# get random sample in the specified interval
def get_rand_sample(self, n, start=0.0, stop=1.0, noise=None):
x = (stop-start) * np.random.random_sample(n) + start
y = self.get_y(x, noise=noise)
return (x, y)
def get_y(self, x, noise=None):
y = self.line_fun(x)
if noise:
y += ((np.random.normal(scale=0.5, size=len(y)))*noise)
return y
# plot
line = Line(slope=1.5, intercept=5)
(x, y) = line.get_sample(100, noise=1)
# plot (scatter) noisy sample
sns.regplot(x, y, fit_reg=False, label='Noisy Sample')
# plot line
plt.plot(*line.get_sample(100), label='True Line')
plt.legend()
plt.show()
Explanation: Data
End of explanation
# mean squared error between true and predicted values
def compute_mse(y_true, y_pred):
residuals = (y_pred-y_true)
ms = sum(residuals**2)/len(y_true)
return ms
# Test with different slopes
x, y_true = line.get_sample(100)
plt.plot(x, y_true, label='true_sample')
#sns.regplot(x, y, fit_reg=False, label="True")
for s in [-1, 0, 1, 1.5, 1.7, 2]:
l = Line(slope=s, intercept=5)
y_pred = l.get_y(x)
mse = compute_mse(y_true, y_pred)
plt.plot(x, y_pred, label="s={},c={:.3f}".format(s, mse))
plt.legend(loc='upper left')
plt.show()
# Cost, testing with different slopes
costs = []
for s in np.linspace(0, 3, 20):
l = Line(slope=s, intercept=5)
y_pred = l.get_y(x)
mse = compute_mse(y, y_pred)
costs.append((s, mse))
plt.scatter(x=[x[0] for x in costs], y=[x[1] for x in costs])
plt.ylabel("cost")
plt.xlabel("slope")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
slope = np.linspace(-2, 6, 20)
intercept = np.linspace(-100, 200, 20)
slope_s, intercept_s = np.meshgrid(slope, intercept)
slope_m, intercept_m, x_m = np.meshgrid(slope, intercept, x)
Y_pred = x_m * slope_m + intercept_m
cost = np.array([compute_mse(y, Y_pred[i][j]) for i in range(20) for j in range(20)]).reshape(20, 20)
#print(Y_pred.shape)
#print(slope.shape)
#print(intercept.shape)
# Plot the surface.
surf = ax.plot_surface(slope_s, intercept_s, cost, cmap=cm.coolwarm, linewidth=0, antialiased=False)
# Customize the z axis.
#ax.set_zlim(-1.01, 1.01)
#ax.zaxis.set_major_locator(LinearLocator(10))
#ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
#fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
Explanation: Residuals and Cost
Residuals are the differences between the true and predicted values along the sole prediction axis. Mean-squared-error is a common way to compute the quality of a prediction set compared to the true set.
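For reference, compute_mse above evaluates $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2$, while compute_cost in the gradient-descent section later uses the $\frac{1}{2m}\sum_i(\hat{y}_i - y_i)^2$ convention so that the factor of 2 cancels in the gradient.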
End of explanation
# using scipy
from scipy import stats
line = Line(slope=2, intercept=6)
(x, y) = line.get_rand_sample(100, 0, 10, noise=1)
slope, intercept, r, p, _ = stats.linregress(x, y)
print('Slope = {:.3f} (r = {:.3f}, p = {:.5f})'.format(slope, r, p))
Explanation: Regression (Scipy)
End of explanation
# plot
line = Line(slope=1.5, intercept=5)
line.line_fun = lambda x : np.sin(2*np.pi*x)
(x, y) = line.get_sample(20, noise=1)
# plot (scatter) noisy sample
sns.regplot(x, y, fit_reg=False, label='Noisy Sample')
# plot line
plt.plot(*line.get_sample(100), label='True Line')
plt.legend()
plt.show()
Explanation: Polynomial Regression
Using polynomial features can help to better model our data. Polynomial features can be obtained by raising to the power or by combining our original base features. Notice that the resulting polynomial function, even if not linear for the features, is a linear function of our target coefficients, so we are still dealing with a linear model.
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# create a polynomial features generator
M = 9 # order of the polynomial
poly = PolynomialFeatures(M)
# generate polynomial features from original training data
poly_x = poly.fit_transform(x.reshape(-1, 1))
# fir a linear regression model
lr = LinearRegression()
lr.fit(poly_x, y)
# plot results
# plot training data
sns.regplot(x, y, fit_reg=False, label='Noisy Sample')
# plot true line
plt.plot(*line.get_sample(100), label='True Line')
# plot predicted line
x_pred = np.linspace(0, 1, 40)
y_pred = lr.predict(poly.fit_transform(x_pred.reshape(-1, 1)))
plt.plot(x_pred, y_pred, label='Predicted Line (M={})'.format(M))
plt.legend()
plt.show()
Explanation: Solve (SKlearn)
End of explanation
# Squared error cost function
def compute_cost(X, y, theta):
m = len(y)
# compute predictions
pred = X.dot(theta).flatten()
# compute cost
mse = ((y - pred)** 2).sum()/(2*m)
return mse
# single gradient descent step
def gradient_descent_step(X, y, theta, alpha):
# compute predictions
pred = X.dot(theta).flatten()
# get error
errors = -np.sum((y-pred)*X.T, axis=1).reshape(2,1)
# With regularization (notice bias should not be regularized)
#theta -= alpha * ((errors/len(y)) + (lambda*theta)/len(y))
theta -= alpha * (errors/len(y))
return theta
# run an entire training cycle
def train(X, y, alpha, iters):
cost_history = np.zeros(shape=(iters, 1))
theta_history = []
# our parameters are slope and intercepts (bias)
theta = np.random.randn(2, 1)
for i in range(iters):
theta = gradient_descent_step(X, y, theta, alpha)
cost_history[i, 0] = compute_cost(X, y, theta)
theta_history.append(theta.copy())
return theta_history, cost_history
# target line
true_line = Line(slope=1.5, intercept=5)
#training data
(x_data, y) = true_line.get_rand_sample(100, noise=1)
# train data including bias/intercept input (set to 1)
X = np.ones(shape=(len(y), 2))
X[:,1] = x_data
print(X.shape)
print(y.shape)
alpha = 0.01
epochs = 1000
theta_history, cost_history = train(X, y, alpha, epochs)
# Plot history
fig, axes = plt.subplots(2, 1)
# plot cost
axes[0].set_title('Cost History')
axes[0].plot(cost_history.reshape(-1))
axes[0].set_ylabel("cost")
# plot theta
axes[1].set_title('Theta History')
axes[1].plot([t[0] for t in theta_history], label='intercept')
axes[1].plot([t[1] for t in theta_history], label='slope')
axes[1].set_xlabel("epoch")
plt.legend()
plt.show()
Explanation: Gradient Descent
Fit regression line using gradient descent.
End of explanation
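The commented-out line in gradient_descent_step hints at L2 regularization. A sketch of that variant is shown below; lam is an assumed regularization strength, and the bias/intercept term is deliberately left unregularized:
```python
def gradient_descent_step_l2(X, y, theta, alpha, lam=0.1):
    # Same gradient as before...
    pred = X.dot(theta).flatten()
    errors = -np.sum((y - pred) * X.T, axis=1).reshape(2, 1)
    # ...plus an L2 penalty on the slope only (theta[0] is the bias/intercept).
    reg = np.array([[0.0], [lam * theta[1, 0]]])
    theta -= alpha * ((errors + reg) / len(y))
    return theta
```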
alpha = 0.01
epochs = 1000
# Plot SGD animation
fig, ax = plt.subplots()
#ax.set_xlim(0, 2)
#ax.set_ylim(-10, 10)
# plot true line
ax.plot(*true_line.get_sample(2), label='target line')
ax.scatter(x_data, y, label='train data')
# initial fitted line
theta = np.random.randn(2, 1)
fit_line = Line(slope=theta[1], intercept=theta[0])
# plot initial fitted line
p_line, = ax.plot(*fit_line.get_sample(2), 'k-', label='regression line')
epoch_text = ax.text(0, 0, "Epoch 0")
def animate(i):
global X, y, theta, alpha
theta = gradient_descent_step(X, y, theta, alpha)
fit_line.intercept = theta[0]
fit_line.slope = theta[1]
l_x, l_y = fit_line.get_sample(2)
p_line.set_data(list(l_x), list(l_y))
cost = compute_cost(X, y, theta)
epoch_text.set_text("Epoch {}, Cost {:.4f}".format(i, cost))
return epoch_text, p_line
ani = animation.FuncAnimation(fig, animate, epochs, interval=10, repeat=False)
plt.legend(loc='lower right')
plt.show()
Explanation: Training Animation
End of explanation |
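If the animation does not render inline (a common issue in notebook environments), the standard matplotlib animation API offers two options, sketched here:
```python
from IPython.display import HTML

# Render the animation as an interactive HTML/JS widget inside the notebook.
HTML(ani.to_jshtml())

# Or persist it to disk (requires the ffmpeg writer to be installed).
# ani.save('gradient_descent.mp4', writer='ffmpeg', fps=30)
```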
14,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Math Operations
Addition
Step1: Multiplication
Step2: Division
Step3: Exponentiation
Step4: Trigonometric Functions
In the following cells we will compute the values of functions that are common in our math classes.
To do this, we need to import the numpy library.
Step5: Logarithm and Exponential
Step6: Programming Challenge
Variables
A variable is a space for storing values, either modifiable or constant.
python
nombre_de_la_variable = valor_de_la_variable
The different types of variables are
Step7: Variable of type int
Step8: Variable of type float
Step9: Variable of type str
Step10: Variable of type bool
Step11: How do I find out the type of a variable?
Using the type function
Step12: Programming Challenge
Variables for storing collections of data
Python has 3 other, more complex variable types that can store collections of data
like the ones seen above
Lists
Tuples
Dictionaries
Lists
Lists let us store collections of data of different types
Step13: How can I look at an element or elements of my list?
To read the element at position n, we use
Step14: How do I read the elements between positions n and m?
python
mi_lista[n
Step15: Programming Challenge
Tuples
Tuples let us store collections of data of different types
Step16: Programming Challenge
Dictionaries
While lists and tuples access their elements by an index number, dictionaries use keys (numeric or text) to access the elements. The elements stored under each key can be of different types, including lists or other dictionaries.
python
int; str; float; bool, list, dict
A dictionary is created as follows
Step17: Programming Challenge
Conditional control structures
Conditional control structures let us evaluate whether one or more conditions hold and, based on that,
execute the next action.
First we use
Step18: Programming Challenge
Iterative control structures (loops)
These structures let us run the same code repeatedly while a condition holds.
The while loop
This loop executes the same action while a given condition holds
Step19: Programming Challenge
The for loop
In Python, the for loop lets us iterate over variables that store collections of data, such as | Python Code:
2+3
Explanation: Math Operations
Addition: $2+3$
End of explanation
2*3
Explanation: Multiplication: $2 \times 3$
End of explanation
2/3
Explanation: Division: $\frac{2}{3}$
End of explanation
2**3
Explanation: Exponentiation: $2^{3}$
End of explanation
# Importing a library in Python
import numpy as np # "as np" gives the library a shorter alias so it is quicker to use.
np.sin(3)
(np.sin(3))*(np.sin(2))
Explanation: Trigonometric Functions
In the following cells we will compute the values of functions that are common in our math classes.
To do this, we need to import the numpy library.
End of explanation
np.log(3)
np.exp(3)
Explanation: Logarithm and Exponential: $\ln(3), e^{3}$
End of explanation
# Example
a = 5
print (a) # Print my variable
Explanation: Programming Challenge
Variables
A variable is a space for storing values, either modifiable or constant.
python
nombre_de_la_variable = valor_de_la_variable
The different types of variables are:
Integers (int): 1, 2, 3, -10, -103
Continuous numbers (float): 0.666, -10.678
Text string (str): 'clubes', 'clubes de ciencia', 'Roberto'
Boolean (True / False): True, False
End of explanation
b = -15
print (b)
Explanation: Variable of type int
End of explanation
c = 3.1416
print (c)
Explanation: Variable of type float
End of explanation
d = 'clubes de ciencia'
print (d)
Explanation: Variable of type str
End of explanation
e = False
print (e)
Explanation: Variable of type bool
End of explanation
print (type(a))
print (type(b))
print (type(c))
print (type(d))
print (type(e))
Explanation: How do I find out the type of a variable?
Using the type function:
python
type(nombre_de_la_variable)
End of explanation
# Example
mi_lista = [1,2,3,5,6,-3.1416]
mi_lista_diversa = [1,2,'clubes', 'de', 'ciencia', 3.1416, False]
print (mi_lista)
print (mi_lista_diversa)
Explanation: Programming Challenge
Variables for storing collections of data
Python has 3 other, more complex variable types that can store collections of data
like the ones seen above
Lists
Tuples
Dictionaries
Lists
Lists let us store collections of data of different types:
python
int; str; float; bool
A list is created as follows:
python
nombre_de_la_lista = [valor_1, valor_2, valor_3]
The values in a list can be modified.
End of explanation
# Example
print (mi_lista[0]) # Read the first element, which is at position n=0
print (mi_lista_diversa[0])
print (type(mi_lista[5])) # Read the type of the variable at position n=5
Explanation: How can I look at an element or elements of my list?
To read the element at position n, we use:
python
mi_lista[n]
End of explanation
#Example
print (mi_lista[0:3]) # Read between n=0 and m=2
Explanation: How do I read the elements between positions n and m?
python
mi_lista[n:m+1]
End of explanation
#Example
mi_lista = ('cadena de texto', 15, 2.8, 'otro dato', 25)
print (mi_lista)
print (mi_lista[2]) # read the third element of the tuple
print (mi_lista[2:4]) # read the elements at positions 2 and 3 of the tuple
Explanation: Programming Challenge
Tuples
Tuples let us store collections of data of different types:
python
int; str; float; bool
A tuple is created as follows:
python
mi_tupla = ('cadena de texto', 15, 2.8, 'otro dato', 25)
The values of a tuple cannot be modified. Its elements are read the same way as in lists
End of explanation
# Example 1
mi_diccionario = {'grupo_1':4, 'grupo_2':6, 'grupo_3':7, 'grupo_4':3}
print (mi_diccionario['grupo_2'])
# Example 2 with different types of elements
informacion_persona = {'nombres':'Elon', 'apellidos':'Musk', 'edad':45, 'nacionalidad':'Sudafricano',
                       'educacion':['Administracion de empresas','Física'],'empresas':['Zip2','PyPal','SpaceX','SolarCity']}
print (informacion_persona['educacion'])
print (informacion_persona['empresas'])
Explanation: Programming Challenge
Dictionaries
While lists and tuples access their elements by an index number, dictionaries use keys (numeric or text) to access the elements. The elements stored under each key can be of different types, including lists or other dictionaries.
python
int; str; float; bool, list, dict
A dictionary is created as follows:
python
mi_diccionario = {'grupo_1':4, 'grupo_2':6, 'grupo_3':7, 'grupo_4':3}
Access the value stored under the key grupo_2:
python
print (mi_diccionario['grupo_2'])
End of explanation
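Dictionaries can also be traversed directly. For example, looping over all key/value pairs of the dictionary defined above:
```python
for key, value in mi_diccionario.items():
    print (key, value)
```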
# Example
color_semaforo = 'amarillo'
if color_semaforo == 'verde':
    print ("Cruzar la calle")
else:
    print ("Esperar")
# example
dia_semana = 'lunes'
if dia_semana == 'sabado' or dia_semana == 'domingo':
    print ('Me levanto a las 10 de la mañana')
else:
    print ('Me levanto antes de las 7am')
# Example
costo_compra = 90
if costo_compra <= 100:
    print ("Pago en efectivo")
elif costo_compra > 100 and costo_compra < 300:
    print ("Pago con tarjeta de débito")
else:
    print ("Pago con tarjeta de crédito")
Explanation: Programming Challenge
Conditional control structures
Conditional control structures let us evaluate whether one or more conditions hold and, based on that,
execute the next action.
First we use:
python
if
Then a relational operator to compare:
```python
== equal to
!= not equal to
< less than
> greater than
<= less than or equal to
>= greater than or equal to
```
When more than one condition is evaluated:
```python
and, & (and)
or, | (or)
```
End of explanation
# example
anio = 2001
while anio <= 2012:
    print ("Informes del Año", str(anio))
    anio = anio + 1 # increase anio by 1
# example
cuenta = 10
while cuenta >= 0:
    print ('faltan '+str(cuenta)+' minutos')
    cuenta += -1
Explanation: Programming Challenge
Iterative control structures (loops)
These structures let us run the same code repeatedly while a condition holds.
The while loop
This loop executes the same action while a given condition holds:
python
anio = 2001
while anio <= 2012:
    print ("Informes del Año", str(anio))
    anio = anio + 1 # increase anio by 1
In this example the condition is anio being less than or equal to 2012
End of explanation
# Example
mi_tupla = ('rosa', 'verde', 'celeste', 'amarillo')
for color in mi_tupla:
    print (color)
# Example
dias_semana = ['lunes','martes','miercoles','jueves','viernes','sabado','domingo']
for i in dias_semana:
    if (i == dias_semana[-1]) or (i == dias_semana[-2]):
        print ('Hoy seguire aprendiendo de programación')
    else:
        print ('Hoy tengo que ir al colegio')
Explanation: Programming Challenge
The for loop
In Python, the for loop lets us iterate over variables that store collections of data, such as tuples and lists.
python
mi_lista = ['Juan', 'Antonio', 'Pedro', 'Herminio']
for nombre in mi_lista:
    print (nombre)
In this code the instruction is to go through each element of the list and print it.
End of explanation |
14,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
14,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measuring monotonic relationships
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards
Reference
Step1: Spearman Rank Correlation
Intuition
The intuition is that instead of looking at the relationship between the raw values of the two variables, we look at the relationship between their ranks. This is robust to outliers and to the scale of the data.
Definition
The argument method='average' indicates that when we have a tie, we average the ranks that the numbers would occupy. For example, the two 5's above, which would take up ranks 1 and 2, each get assigned a rank of $1.5$.
To compute the Spearman rank correlation for two data sets $X$ and $Y$, each of size $n$, we use the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference between the ranks of the $i$th pair of observations, $\operatorname{rank}(X_i) - \operatorname{rank}(Y_i)$.
The result will always be between $-1$ and $1$. A positive value indicates a positive relationship between the variables, while a negative value indicates an inverse relationship. A value of 0 implies the absence of any monotonic relationship. This does not mean that there is no relationship; for instance, if $Y$ is equal to $X$ with a delay of 2, they are related simply and precisely, but their $r_S$ can be close to zero
Step2: Let's take a look at the distribution of measured correlation coefficients and compare the spearman with the regular metric.
Step3: Now let's see how the Spearman rank and Regular coefficients cope when we add more noise to the situation.
Step4: We can see that the Spearman rank correlation copes with the non-linear relationship much better at most levels of noise. Interestingly, at very high levels, it seems to do worse than regular correlation.
Delay in correlation
Often you might have the case that one process affects another, but only after a time lag. Now let's see what happens if we add a delay.
Step5: Sure enough, the relationship is not detected. It is important when using both regular and spearman correlation to check for lagged relationships by offsetting your data and testing for different offset values.
Built-In Function
We can also use the spearmanr function in the scipy.stats library
Step6: We now have ourselves an $r_S$, but how do we interpret it? It's positive, so we know that the variables are not anticorrelated. It's not very large, so we know they aren't perfectly positively correlated, but it's hard to say from a glance just how significant the correlation is. Luckily, spearmanr also computes the p-value for this coefficient and sample size for us. We can see that the p-value here is above 0.05; therefore, we cannot claim that $X$ and $Y$ are correlated.
Real World Example
Step7: Our p-value is below the cutoff, which means we accept the hypothesis that the two are correlated. The negative coefficient indicates that there is a negative correlation, and that more expensive mutual funds have worse Sharpe ratios. However, there is some odd clustering in the data: there appear to be expensive groups with low Sharpe ratios, and a main group whose Sharpe ratio is unrelated to the expense. Further analysis would be required to understand what's going on here.
Real World Use Case | Python Code:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
# Example of ranking data
l = [10, 9, 5, 7, 5]
print 'Raw data: ', l
print 'Ranking: ', list(stats.rankdata(l, method='average'))
Explanation: Measuring monotonic relationships
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards
Reference: DeFusco, Richard A. "Tests Concerning Correlation: The Spearman Rank Correlation Coefficient." Quantitative Investment Analysis. Hoboken, NJ: Wiley, 2007
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.
The Spearman Rank Correlation Coefficient allows us to determine whether or not two data series move together; that is, when one increases (decreases) the other also increases (decreases). This is more general than a linear relationship; for instance, $y = e^x$ is a monotonic function, but not a linear one. Therefore, in computing it we compare not the raw data but the ranks of the data.
This is useful when your data sets may be in different units, and therefore not linearly related (for example, the price of a square plot of land and its side length, since the price is more likely to be linear in the area). It's also suitable for data sets which do not satisfy the assumptions that other tests require, such as the observations being normally distributed as would be necessary for a t-test.
End of explanation
## Let's see an example of this
n = 100
def compare_correlation_and_spearman_rank(n, noise):
X = np.random.poisson(size=n)
Y = np.exp(X) + noise * np.random.normal(size=n)
Xrank = stats.rankdata(X, method='average')
Yrank = stats.rankdata(Y, method='average')
diffs = Xrank - Yrank # order doesn't matter since we'll be squaring these values
r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
c_c = np.corrcoef(X, Y)[0,1]
return r_s, c_c
experiments = 1000
spearman_dist = np.ndarray(experiments)
correlation_dist = np.ndarray(experiments)
for i in range(experiments):
r_s, c_c = compare_correlation_and_spearman_rank(n, 1.0)
spearman_dist[i] = r_s
correlation_dist[i] = c_c
print 'Spearman Rank Coefficient: ' + str(np.mean(spearman_dist))
# Compare to the regular correlation coefficient
print 'Correlation coefficient: ' + str(np.mean(correlation_dist))
Explanation: Spearman Rank Correlation
Intuition
The intuition is that instead of looking at the relationship between the raw values of the two variables, we look at the relationship between their ranks. This is robust to outliers and to the scale of the data.
Definition
The argument method='average' indicates that when we have a tie, we average the ranks that the numbers would occupy. For example, the two 5's above, which would take up ranks 1 and 2, each get assigned a rank of $1.5$.
To compute the Spearman rank correlation for two data sets $X$ and $Y$, each of size $n$, we use the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference between the ranks of the $i$th pair of observations, $\operatorname{rank}(X_i) - \operatorname{rank}(Y_i)$.
The result will always be between $-1$ and $1$. A positive value indicates a positive relationship between the variables, while a negative value indicates an inverse relationship. A value of 0 implies the absence of any monotonic relationship. This does not mean that there is no relationship; for instance, if $Y$ is equal to $X$ with a delay of 2, they are related simply and precisely, but their $r_S$ can be close to zero:
Experiment
Let's see what happens if we draw $X$ from a poisson distribution (non-normal), and then set $Y = e^X + \epsilon$, where $\epsilon$ is normally distributed noise (matching the np.random.normal call in the experiment code). We'll take the Spearman rank and the regular correlation coefficient on this data and then run the entire experiment many times. Because $e^X$ produces many values that are far away from the rest, we can think of this as modeling 'outliers' in our data. Spearman rank compresses the outliers and does better at measuring correlation. Normal correlation is confused by the outliers and on average will measure less of a relationship than is actually there.
End of explanation
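An equivalent way to see what the Spearman coefficient measures, and a handy sanity check on the formula above, is that it is simply the ordinary (Pearson) correlation computed on the ranks:
```python
X = np.random.poisson(size=100)
Y = np.exp(X) + np.random.normal(size=100)
print 'Pearson correlation of the ranks: ', np.corrcoef(stats.rankdata(X), stats.rankdata(Y))[0, 1]
```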
plt.hist(spearman_dist, bins=50, alpha=0.5)
plt.hist(correlation_dist, bins=50, alpha=0.5)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Correlation Coefficient')
plt.ylabel('Frequency');
Explanation: Let's take a look at the distribution of measured correlation coefficients and compare the spearman with the regular metric.
End of explanation
n = 100
noises = np.linspace(0, 3, 30)
experiments = 100
spearman = np.ndarray(len(noises))
correlation = np.ndarray(len(noises))
for i in range(len(noises)):
# Run many experiments for each noise setting
rank_coef = 0.0
corr_coef = 0.0
noise = noises[i]
for j in range(experiments):
r_s, c_c = compare_correlation_and_spearman_rank(n, noise)
rank_coef += r_s
corr_coef += c_c
spearman[i] = rank_coef/experiments
correlation[i] = corr_coef/experiments
plt.scatter(noises, spearman, color='r')
plt.scatter(noises, correlation)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Amount of Noise')
plt.ylabel('Average Correlation Coefficient')
Explanation: Now let's see how the Spearman rank and Regular coefficients cope when we add more noise to the situation.
End of explanation
n = 100
X = np.random.rand(n)
Xrank = stats.rankdata(X, method='average')
# n-2 is the second to last element
Yrank = stats.rankdata([1,1] + list(X[:(n-2)]), method='average')
diffs = Xrank - Yrank # order doesn't matter since we'll be squaring these values
r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
print r_s
Explanation: We can see that the Spearman rank correlation copes with the non-linear relationship much better at most levels of noise. Interestingly, at very high levels, it seems to do worse than regular correlation.
Delay in correlation
Often you might have the case that one process affects another, but only after a time lag. Now let's see what happens if we add a delay.
End of explanation
# Generate two random data sets
np.random.seed(161)
X = np.random.rand(10)
Y = np.random.rand(10)
r_s = stats.spearmanr(X, Y)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
Explanation: Sure enough, the relationship is not detected. It is important when using both regular and spearman correlation to check for lagged relationships by offsetting your data and testing for different offset values.
Built-In Function
We can also use the spearmanr function in the scipy.stats library:
End of explanation
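One simple way to act on that advice is to scan over a range of candidate lags and record the rank correlation at each offset. A rough sketch:
```python
def lagged_spearman(X, Y, max_lag=5):
    # Spearman rank correlation of Y against X shifted back by each candidate lag.
    results = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            r, p = stats.spearmanr(X, Y)
        else:
            r, p = stats.spearmanr(X[:-lag], Y[lag:])
        results[lag] = (r, p)
    return results
```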
mutual_fund_data = local_csv('mutual_fund_data.csv')
expense = mutual_fund_data['Annual Expense Ratio'].values
sharpe = mutual_fund_data['Three Year Sharpe Ratio'].values
plt.scatter(expense, sharpe)
plt.xlabel('Expense Ratio')
plt.ylabel('Sharpe Ratio')
r_S = stats.spearmanr(expense, sharpe)
print 'Spearman Rank Coefficient: ', r_S[0]
print 'p-value: ', r_S[1]
Explanation: We now have ourselves an $r_S$, but how do we interpret it? It's positive, so we know that the variables are not anticorrelated. It's not very large, so we know they aren't perfectly positively correlated, but it's hard to say from a glance just how significant the correlation is. Luckily, spearmanr also computes the p-value for this coefficient and sample size for us. We can see that the p-value here is above 0.05; therefore, we cannot claim that $X$ and $Y$ are correlated.
Real World Example: Mutual Fund Expense Ratio
Now that we've seen how Spearman rank correlation works, we'll quickly go through the process again with some real data. For instance, we may wonder whether the expense ratio of a mutual fund is indicative of its three-year Sharpe ratio. That is, does spending more money on administration, management, etc. lower the risk or increase the returns? Quantopian does not currently support mutual funds, so we will pull the data from Yahoo Finance. Our p-value cutoff will be the usual default of 0.05.
Data Source
Thanks to Matthew Madurski for the data. To obtain the same data:
Download the csv from this link. https://gist.github.com/dursk/82eee65b7d1056b469ab
Upload it to the 'data' folder in your research account.
End of explanation
symbol_list = ['A', 'AA', 'AAC', 'AAL', 'AAMC', 'AAME', 'AAN', 'AAOI', 'AAON', 'AAP', 'AAPL', 'AAT', 'AAU', 'AAV', 'AAVL', 'AAWW', 'AB', 'ABAC', 'ABAX', 'ABB', 'ABBV', 'ABC', 'ABCB', 'ABCD', 'ABCO', 'ABCW', 'ABDC', 'ABEV', 'ABG', 'ABGB']
# Get the returns over the lookback window
start = '2014-12-01'
end = '2015-01-01'
historical_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]
# Compute our stock score
scores = np.mean(historical_returns)
print 'Our Scores\n'
print scores
print '\n'
start = '2015-01-01'
end = '2015-02-01'
walk_forward_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]
walk_forward_returns = np.mean(walk_forward_returns)
print 'The Walk Forward Returns\n'
print walk_forward_returns
print '\n'
plt.scatter(scores, walk_forward_returns)
plt.xlabel('Scores')
plt.ylabel('Walk Forward Returns')
r_s = stats.spearmanr(scores, walk_forward_returns)
print 'Correlation Coefficient: ' + str(r_s[0])
print 'p-value: ' + str(r_s[1])
Explanation: Our p-value is below the cutoff, which means we accept the hypothesis that the two are correlated. The negative coefficient indicates that there is a negative correlation, and that more expensive mutual funds have worse Sharpe ratios. However, there is some odd clustering in the data: there appear to be expensive groups with low Sharpe ratios, and a main group whose Sharpe ratio is unrelated to the expense. Further analysis would be required to understand what's going on here.
Real World Use Case: Evaluating a Ranking Model
The lectures on Spearman Rank Correlation and Factor Analysis now cover this topic in much greater detail
Let's say that we have some way of ranking securities and that we'd like to test how well our ranking performs in practice. In this case our model just takes the mean daily return for the last month and ranks the stocks by that metric.
We hypothesize that this will be predictive of the mean returns over the next month. To test this we score the stocks based on a lookback window, then take the spearman rank correlation of the score and the mean returns over the walk forward month.
End of explanation |
14,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using GeoPandas with Rasterio to sample point data
This example shows how to use GeoPandas with Rasterio. Rasterio is a package for reading and writing raster data.
In this example a set of vector points is used to sample raster data at those points.
The raster data used is Copernicus Sentinel data (2018).
Step1: Create example vector data
Generate a geodataframe from a set of points
Step2: The GeoDataFrame looks like this
Step3: Open the raster data
Use rasterio to open the raster data to be sampled
Step4: Let's see the raster data with the point data overlaid.
Step5: Sampling the data
Rasterio requires a list of the coordinates in x,y format rather than as the points that are in the geometry column.
This can be achieved using the code below
Step6: Carry out the sampling of the data and store the results in a new column called value. Note that if the image has more than one band, a value is returned for each band. | Python Code:
import geopandas
import rasterio
import matplotlib.pyplot as plt
from shapely.geometry import Point
Explanation: Using GeoPandas with Rasterio to sample point data
This example shows how to use GeoPandas with Rasterio. Rasterio is a package for reading and writing raster data.
In this example a set of vector points is used to sample raster data at those points.
The raster data used is Copernicus Sentinel data (2018).
End of explanation
# Create sampling points
points = [Point(625466, 5621289), Point(626082, 5621627), Point(627116, 5621680), Point(625095, 5622358)]
gdf = geopandas.GeoDataFrame([1, 2, 3, 4], geometry=points, crs=32630)
Explanation: Create example vector data
Generate a geodataframe from a set of points
End of explanation
gdf.head()
Explanation: The GeoDataFrame looks like this:
End of explanation
src = rasterio.open('s2a_l2a_fishbourne.tif')
Explanation: Open the raster data
Use rasterio to open the raster data to be sampled
End of explanation
from rasterio.plot import show
fig, ax = plt.subplots()
# transform rasterio plot to real world coords
extent=[src.bounds[0], src.bounds[2], src.bounds[1], src.bounds[3]]
ax = rasterio.plot.show(src, extent=extent, ax=ax, cmap='pink')
gdf.plot(ax=ax)
Explanation: Let's see the raster data with the point data overlaid.
End of explanation
coord_list = [(x,y) for x,y in zip(gdf['geometry'].x , gdf['geometry'].y)]
Explanation: Sampling the data
Rasterio requires a list of the coordinates in x,y format rather than as the points that are in the geometry column.
This can be achieved using the code below
End of explanation
gdf['value'] = [x for x in src.sample(coord_list)]
gdf.head()
Explanation: Carry out the sampling of the data and store the results in a new column called value. Note that if the image has more than one band, a value is returned for each band.
End of explanation |
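Since sample() returns one value per raster band, a common follow-up (sketched here; the band_i column names are made up for illustration) is to split those arrays into one column per band:
```python
import pandas as pd

band_values = pd.DataFrame(
    gdf['value'].tolist(),
    columns=['band_{}'.format(i + 1) for i in range(src.count)],
    index=gdf.index,
)
gdf = gdf.join(band_values)
gdf.head()
```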
14,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step22: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step25: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step26: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step27: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step28: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step29: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training tabular regression model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
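For example (the names below are purely illustrative), the timestamp is typically appended when naming resources later in the tutorial:
```python
# Hypothetical examples of how the timestamp disambiguates resource names.
JOB_NAME = "boston_custom_job_" + TIMESTAMP
MODEL_DISPLAY_NAME = "boston_" + TIMESTAMP
```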
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more here hardware accelerator support for your region
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops that are generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
Machine types:
n1-standard: 3.75 GB of memory per vCPU
n1-highmem: 6.5 GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
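As a small illustrative sketch (the family and vCPU count below are arbitrary examples), the machine type string is simply the family name joined to the vCPU count:
EXAMPLE_FAMILY = "n1-highmem"  # 6.5 GB of memory per vCPU
EXAMPLE_VCPUS = "8"
print(EXAMPLE_FAMILY + "-" + EXAMPLE_VCPUS)  # -> n1-highmem-8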
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when it is referenced in the worker pool specification, the directory slash is replaced with a dot (trainer.task) and the file suffix (.py) is dropped, as sketched below.
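A minimal sketch of that naming conversion (illustrative only, not part of the training package):
script_path = "trainer/task.py"
module_name = script_path.replace("/", ".").rsplit(".py", 1)[0]
print(module_name)  # -> trainer.task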
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
        feature = (feature / max).astype(np.float32)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
    for i in range(13):
        x_train[:, i], max = scale(x_train[:, i])
        x_test[:, i], _ = scale(x_test[:, i])
        params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads Boston Housing dataset from TF.Keras builtin datasets
Builds a simple deep neural network model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature f.write(str(params)) to the specified parameters file.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for i in range(13):
    x_test[:, i] = scale(x_test[:, i])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why we loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
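As a hedged aside, the per-column rescaling done with the loop in the cell above can also be written in one vectorized step (roughly equivalent, shown only as a comment so it isn't applied twice):
# x_test = (x_test / x_test.max(axis=0)).astype(np.float32)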
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model on the endpoint. The percents must add up to 100 (see the short sketch after this list).
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The number of compute instances to initially provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
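For illustration only (the model ID "1234567890" is made up), a split between this newly deployed model and one already on the endpoint could look like:
# TRAFFIC_SPLIT = {"0": 40, "1234567890": 60}  # 40% to this model, 60% to the existing model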
End of explanation
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
instances_list = [test_item.tolist()]
prediction = endpoint.predict(instances_list)
print(prediction)
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The predictions, one per instance; for this regression model, each is the predicted median house value.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
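A small, hedged sketch of reading that response (assuming the predict() call above succeeded; shown as comments because attribute access may differ slightly across SDK versions):
# predicted_value = prediction.predictions[0]
# print("predicted:", predicted_value, "actual:", test_label)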
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
14,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rock, Paper, Scissors or... People are Predictable
The NY Times created a Rock, Paper, Scissors bot. If you try it, chances are it'll win handily. No matter how hard you try, you're going to fall into patterns that the computer is going to be able to identify and then exploit. As the article notes, if you were capable of producing truly random throws, then on average you'd win about as much as you lose, but humans are really bad at acting truly randomly.
For example, people seriously underestimate the probability of streaks. Let's say I throw Rock/Paper/Scissors 100 times, trying to be random. What do you think is the likelihood that I (a person trying to be random) would throw 4 in a row at some point? How about 5 in a row?
I haven't conducted that study, but my guess is that in both cases, it would be very uncommon
Step2: How to compute the probabilities
How do we compute the probability of 4 in a row in a stream of 100 throws? We'll model it as a random walk around 4 possible states. After each throw, the possibilities will be that...
No streak. The last throw is different from the one before
2 element streak. The last two throws (but not third) are the same
3 element streak. The last three throws (but not the fourth) are the same.
4 element streak somewhere. At some point we've seen 4 in a row.
Some things to note
* After 1 throw, we obviously start in State 1.
* If we ever reach state 4, we stay there forever.
* The probability of moving from State 1 to State 2, or State 2 to State 3, or State 3 to State 4 is 1/3
* The probability of moving from State 1,2,3 back to State 1 is 2/3.
Put that all together into a matrix of transition probabilities, where M[i,j] is the probability of going to state i given state j, and you get this...
Step3: Given the first bullet point, The vector of state probabilities after the first throw is simply [1,0,0,0]
Step4: Then the probabilities for N throws is tm^N * [1,0,0,0]'
Step5: The probability of a streak of length 4 is just the last element of that vector.
Below you have the results for streaks of various lengths, and different throw counts. | Python Code:
import numpy as np
from numpy.linalg import matrix_power
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Rock, Paper, Scissors or... People are Predictable
The NY Times created a Rock, Paper, Scissors bot. If you try it, chances are it'll win handily. No matter how hard you try, you're going to fall into patterns that the computer is going to be able to identify and then exploit. As the article notes, if you were capable of producing truly random throws, then on average you'd win about as much as you lose, but humans are really bad at acting truly randomly.
For example, people seriously underestimate the probability of streaks. Let's say I throw Rock/Paper/Scissors 100 times, trying to be random. What do you think is the likelihood that I (a person trying to be random) would throw 4 in a row at some point? How about 5 in a row?
I haven't conducted that study, but my guess is that in both cases, it would be very uncommon: maybe 10-25% of the people.
But how likely is it that a truly random computer throws a streak of 4? Well that's something we can calculate. And it turns out the odds are 92%. Even 5 in a row is likely to happen 56% of the time.
End of explanation
def transition_matrix(streak_length):
    """TM[i,j] = Prob[transitioning to streak length i from streak length j]"""
tm = np.zeros((streak_length, streak_length))
tm[0,0:streak_length-1] = 2/3.0
tm[1:streak_length, 0:streak_length-1] = np.eye(streak_length-1) * 1/3.0
tm[streak_length-1, streak_length-1] = 1.0
return np.matrix(tm)
tm = transition_matrix(4); tm
Explanation: How to compute the probabilities
How do we compute the probability of 4 in a row in a stream of 100 throws? We'll model it as a random walk around 4 possible states. After each throw, the possibilities will be that...
No streak. The last throw is different from the one before
2 element streak. The last two throws (but not third) are the same
3 element streak. The last three throws (but not the fourth) are the same.
4 element streak somewhere. At some point we've seen 4 in a row.
Some things to note
* After 1 throw, we obviously start in State 1.
* If we ever reach state 4, we stay there forever.
* The probability of moving from State 1 to State 2, or State 2 to State 3, or State 3 to State 4 is 1/3
* The probability of moving from State 1,2,3 back to State 1 is 2/3.
Put that all together into a matrix of transition probabilities, where M[i,j] is the probability of going to state i given state j, and you get this...
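For concreteness, applying those rules with a maximum streak length of 4 gives the matrix returned by transition_matrix(4) above (each column sums to 1):
# [[2/3, 2/3, 2/3, 0],
#  [1/3, 0,   0,   0],
#  [0,   1/3, 0,   0],
#  [0,   0,   1/3, 1]]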
End of explanation
starting_vec = np.matrix([1,0,0,0]).transpose()
tm * starting_vec
Explanation: Given the first bullet point, The vector of state probabilities after the first throw is simply [1,0,0,0]: 100% probability of being in state 1.
If we want the state probabilities after 2 throws, we multiply this vector by the transition matrix tm, like so...
End of explanation
def prob_of_run(streak_length, num_throws):
starting_vec = np.zeros((streak_length,1))
starting_vec[0] = 1.0
tm_n = matrix_power(transition_matrix(streak_length), num_throws - 1)
return (tm_n * starting_vec)[streak_length-1, 0]
Explanation: Then the probabilities for N throws is tm^N * [1,0,0,0]'
End of explanation
streak_lengths = range(2,10)
num_throws = [10, 25, 100, 500]
probs = [[prob_of_run(i, curr_throws) for i in streak_lengths] for curr_throws in num_throws]
line_fmts = ["go-","bo-","ro-", 'mo-']
for throws, prob, fmt in zip(num_throws, probs, line_fmts):
plt.plot(streak_lengths, prob, fmt)
plt.ylabel("probability")
plt.xlabel("streak length")
plt.grid()
plt.legend(["%d throws" % i for i in num_throws])
Explanation: The probability of a streak of length 4 is just the last element of that vector.
Below you have the results for streaks of various lengths, and different throw counts.
End of explanation |
14,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test it's performance.
Step58: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
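As a quick illustrative refresher on Counter (the example sentence is arbitrary):
from collections import Counter
print(Counter("to be or not to be".split(' ')).most_common(2))  # -> [('to', 2), ('be', 2)]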
End of explanation
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
import re
for index, review in enumerate(reviews):
label = labels[index]
    review = re.sub(r'\s+', ' ', review)  # condense whitespace characters
words = review.split(' ')
total_counts += Counter(words)
if label == 'POSITIVE':
positive_counts += Counter(words)
if label == 'NEGATIVE':
negative_counts += Counter(words)
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
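A tiny illustration of the difference (the string here is arbitrary):
print("a  b".split(' '))  # -> ['a', '', 'b']   (an empty string for the extra space)
print("a  b".split())     # -> ['a', 'b']       (runs of whitespace are collapsed)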
End of explanation
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()[:10]
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()[:10]
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
common_words = [x for x in total_counts.most_common() if x[1] > 100]
for word, count in common_words:
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word]+1)
for word, ratio in pos_neg_ratios.most_common():
if ratio > 1:
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1 / (ratio + 0.01))
print(pos_neg_ratios.most_common()[:15])
print(list(reversed(pos_neg_ratios.most_common()[:15])))
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
# TODO: Convert ratios to logs
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so that the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
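A quick illustrative check of that symmetry (4 and 0.25 are arbitrary example ratios):
print(np.log(4.0), np.log(0.25))  # -> roughly 1.386 and -1.386: equal magnitude, opposite sign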
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()[:10]
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:10]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i, word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
def update_input_layer(review):
    """Modify the global layer_0 to represent the vector form of review.

    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.

    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
    review = re.sub(r'\s+', ' ', review)  # condense whitespace
words = review.split(' ')
for word in words:
layer_0[0][word2index[word]] += 1
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
def get_target_for_label(label):
    """Convert a label to `0` or `1`.

    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
if label == 'POSITIVE':
return 1
else:
return 0
Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings.

        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0 # clear out previous state
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
        # loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate
self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
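As a quick, standalone sanity check of the layer shapes this architecture implies (toy sizes only; every name here is illustrative, not part of the graded project code):
import numpy as np
# layer_0 (1 x input) . weights_0_1 (input x hidden) -> layer_1 (1 x hidden)
# layer_1 (1 x hidden) . weights_1_2 (hidden x 1)    -> layer_2 (1 x 1)
layer_0 = np.zeros((1, 20))
weights_0_1 = np.zeros((20, 10))
weights_1_2 = np.random.normal(0.0, 1.0, (10, 1))
layer_1 = layer_0.dot(weights_0_1)                     # no activation on the hidden layer
layer_2 = 1 / (1 + np.exp(-layer_1.dot(weights_1_2)))  # sigmoid on the output layer
print(layer_1.shape, layer_2.shape)                    # (1, 10) (1, 1)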
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0 # clear out previous state
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(layer_1.T, layer_2_delta) * self.learning_rate
self.weights_0_1 -= np.dot(self.layer_0.T, layer_1_delta) * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
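To make that change concrete, here is a tiny standalone illustration (a hypothetical 5-word vocabulary, not the notebook's real word2index) of counting occurrences versus a binary used/not-used indicator:
import numpy as np
word2index = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
review = "the movie was great great great"
counts = np.zeros((1, len(word2index)))
binary = np.zeros((1, len(word2index)))
for word in review.split(" "):
    counts[0][word2index[word]] += 1   # Project 3 behaviour: count occurrences
    binary[0][word2index[word]] = 1    # Project 4 behaviour: presence only
print(counts)  # [[1. 1. 1. 3. 0.]]
print(binary)  # [[1. 1. 1. 1. 0.]]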
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
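To see why this matters, here is a rough standalone timing sketch (made-up vocabulary size and word indices, not the notebook's real data) comparing the full dot product with the index-sum shortcut demonstrated in the cells above:
import time
import numpy as np
vocab_size, hidden_size = 70000, 10
weights = np.random.randn(vocab_size, hidden_size)
layer_0 = np.zeros(vocab_size)
indices = [4, 9, 1000, 25000]          # pretend only these words appear in a review
for i in indices:
    layer_0[i] = 1
start = time.time()
dense = layer_0.dot(weights)           # multiplies through all the zeros
print("dense dot product:", time.time() - start)
start = time.time()
sparse = np.zeros(hidden_size)
for i in indices:
    sparse += weights[i]               # only touches the rows that matter
print("index sum:", time.time() - start)
print(np.allclose(dense, sparse))      # True - same result, far less work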
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# pre-process the raw reviews into lists of word indices (from word2index),
# as described in the project instructions below
training_reviews = list()
for review_raw in training_reviews_raw:
indices = set()
for word in review_raw.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(self.layer_1.T, layer_2_delta) * self.learning_rate
# only update the weights_0_1 rows for the word indices used in this review
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
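As a small standalone illustration of the "list of word indices" representation described above (a hypothetical 6-word vocabulary, not the notebook's real word2index):
word2index = {"this": 0, "movie": 1, "was": 2, "great": 3, "boring": 4, "not": 5}
review_raw = "this movie was great great great"
indices = list(set(word2index[word] for word in review_raw.split(" ") if word in word2index))
print(indices)  # duplicates collapse to a single index, e.g. [0, 1, 2, 3]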
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
from collections import Counter  # Counter is used by pre_process_data below
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
"""
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words must occur at least this many times to be added to the vocabulary
polarity_cutoff(float) - Minimum absolute positive-to-negative log ratio a word needs to be added to the vocabulary
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
# pre-process the raw reviews into lists of word indices (from word2index),
# as described in the project instructions below
training_reviews = list()
for review_raw in training_reviews_raw:
indices = set()
for word in review_raw.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = layer_2 - self.get_target_for_label(label)
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 -= np.dot(self.layer_1.T, layer_2_delta) * self.learning_rate
# only update the weights_0_1 rows for the word indices used in this review
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
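A tiny standalone sketch (made-up counts and illustrative thresholds, not the real review data) of the filtering rule described above:
import numpy as np
from collections import Counter
positive_counts = Counter({"great": 90, "the": 500, "terrible": 5})
negative_counts = Counter({"great": 10, "the": 480, "terrible": 95})
total_counts = positive_counts + negative_counts
min_count, polarity_cutoff = 50, 0.5
vocab = set()
for word, cnt in total_counts.items():
    if cnt > min_count:
        ratio = np.log(positive_counts[word] / float(negative_counts[word] + 1))
        if abs(ratio) >= polarity_cutoff:
            vocab.add(word)
print(vocab)  # {'great', 'terrible'} - 'the' is dropped as a low-polarity word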
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation |
14,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Initializing a new function
There are two ways to create a function $f
Step2: Notice how the print built-in and the to_latex() method will show human-readable output.
With a table of values
Functions on $\mathbb{Z}_\mathbf{p}$ can be defined using a table of values, if $p_i \geq 1$ for every $p_i \in \mathbf{p}$.
Step3: Function evaluation
A function $f \in \mathbb{C}^G$ is callable.
To call (i.e. evaluate) a function,
pass a group element.
Step4: The sample() method can be used to sample a function on a list of group elements in the domain.
Step5: Shifts
Let $f
Step6: Pullbacks
Let $\phi
Step7: We now sample the functions and plot them.
Step9: Pushforwards
Let $\phi
Step10: First we do a pushforward with only one term. Not enough terms are present in the sum to capture what the pushforward would look like if the sum went to infinity.
Step11: Next we do a pushforward with more terms in the sum, this captures what the pushforward would look like if the sum went to infinity. | Python Code:
# Imports from abelian
from abelian import LCA, HomLCA, LCAFunc
# Other imports
import math
import matplotlib.pyplot as plt
from IPython.display import display, Math
def show(arg):
return display(Math(arg.to_latex()))
Explanation: Tutorial: Functions on LCAs
This is an interactive tutorial written with real code.
We start by setting up $\LaTeX$ printing, and importing the classes LCA, HomLCA and LCAFunc.
End of explanation
def gaussian(vector_arg, k = 0.1):
return math.exp(-sum(i**2 for i in vector_arg)*k)
# Gaussian function on Z
Z = LCA([0])
gauss_on_Z = LCAFunc(gaussian, domain = Z)
print(gauss_on_Z) # Printing
show(gauss_on_Z) # LaTeX output
# Gaussian function on T
T = LCA([1], [False])
gauss_on_T = LCAFunc(gaussian, domain = T)
show(gauss_on_T) # LaTeX output
Explanation: Initializing a new function
There are two ways to create a function $f: G \to \mathbb{C}$:
On general LCAs $G$, the function is represented by an analytical expression.
If $G = \mathbb{Z}_{\mathbf{p}}$ with $p_i \geq 1$ for every $i$ ($G$ is a direct sum of discrete groups with finite period), a table of values (multidimensional array) can also be used.
With an analytical representation
If the representation of the function is given by an analytical expression, initialization is simple.
Below we define a Gaussian function on $\mathbb{Z}$, and one on $T$.
End of explanation
# Create a table of values
table_data = [[1,2,3,4,5],
[2,3,4,5,6],
[3,4,5,6,7]]
# Create a domain matching the table
domain = LCA([3, 5])
table_func = LCAFunc(table_data, domain)
show(table_func)
print(table_func([1, 1])) # [1, 1] maps to 3
Explanation: Notice how the print built-in and the to_latex() method will show human-readable output.
With a table of values
Functions on $\mathbb{Z}_\mathbf{p}$ can be defined using a table of values, if $p_i \geq 1$ for every $p_i \in \mathbf{p}$.
End of explanation
# An element in Z
element = [0]
# Evaluate the function
gauss_on_Z(element)
Explanation: Function evaluation
A function $f \in \mathbb{C}^G$ is callable.
To call (i.e. evaluate) a function,
pass a group element.
End of explanation
# Create a list of sample points [-6, ..., 6]
sample_points = [[i] for i in range(-6, 7)]
# Sample the function, returns a list of values
sampled_func = gauss_on_Z.sample(sample_points)
# Plot the result of sampling the function
plt.figure(figsize = (8, 3))
plt.title('Gaussian function on $\mathbb{Z}$')
plt.plot(sample_points, sampled_func, '-o')
plt.grid(True)
plt.show()
Explanation: The sample() method can be used to sample a function on a list of group elements in the domain.
End of explanation
# The group element to shift by
shift_by = [3]
# Shift the function
shifted_gauss = gauss_on_Z.shift(shift_by)
# Create sample poits and sample
sample_points = [[i] for i in range(-6, 7)]
sampled1 = gauss_on_Z.sample(sample_points)
sampled2 = shifted_gauss.sample(sample_points)
# Create a plot
plt.figure(figsize = (8, 3))
ttl = 'Gaussians on $\mathbb{Z}$, one is shifted'
plt.title(ttl)
plt.plot(sample_points, sampled1, '-o')
plt.plot(sample_points, sampled2, '-o')
plt.grid(True)
plt.show()
Explanation: Shifts
Let $f: G \to \mathbb{C}$ be a function. The shift operator (or translation operator) $S_{h}$ is defined as
$$S_{h}[f(g)] = f(g - h).$$
The shift operator shifts $f(g)$ by $h$, where $h, g \in G$.
The shift operator is implemented as a method called shift.
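As a quick sanity check of that identity (a sketch that reuses gauss_on_Z and shifted_gauss from the accompanying code cell; if the definition holds, the two values printed for each g should agree):
h = 3  # the shift used in the code cell
for g in range(-2, 3):
    print(g, shifted_gauss([g]), gauss_on_Z([g - h]))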
End of explanation
def linear(arg):
return sum(arg)
# The original function
f = LCAFunc(linear, LCA([10]))
show(f)
# A homomorphism phi
phi = HomLCA([2], target = [10])
show(phi)
# The pullback of f along phi
g = f.pullback(phi)
show(g)
Explanation: Pullbacks
Let $\phi: G \to H$ be a homomorphism and let $f:H \to \mathbb{C}$ be a function. The pullback of $f$ along $\phi$, denoted $\phi^*(f)$,
is defined as
$$\phi^*(f) := f \circ \phi.$$
The pullback "moves" the domain of the function $f$ to $G$, i.e. $\phi^*(f) : G \to \mathbb{C}$. The pullback is of f is calculated using the pullback method, as shown below.
End of explanation
# Sample the functions and plot them
sample_points = [[i] for i in range(-5, 15)]
f_sampled = f.sample(sample_points)
g_sampled = g.sample(sample_points)
# Plot the original function and the pullback
plt.figure(figsize = (8, 3))
plt.title('Linear functions')
label = '$f \in \mathbb{Z}_{10}$'
plt.plot(sample_points, f_sampled, '-o', label = label)
label = '$g \circ \phi \in \mathbb{Z}$'
plt.plot(sample_points, g_sampled, '-o', label = label)
plt.grid(True)
plt.legend(loc = 'best')
plt.show()
Explanation: We now sample the functions and plot them.
End of explanation
# We create a function on Z and plot it
def gaussian(arg, k = 0.05):
A gaussian function.
return math.exp(-sum(i**2 for i in arg)*k)
# Create gaussian on Z, shift it by 5
gauss_on_Z = LCAFunc(gaussian, LCA([0]))
gauss_on_Z = gauss_on_Z.shift([5])
# Sample points and sampled function
s_points = [[i] for i in range(-5, 15)]
f_sampled = gauss_on_Z.sample(s_points)
# Plot it
plt.figure(figsize = (8, 3))
plt.title('A gaussian function on $\mathbb{Z}$')
plt.plot(s_points, f_sampled, '-o')
plt.grid(True)
plt.show()
# Use a pushforward to periodize the function
phi = HomLCA([1], target = [10])
show(phi)
Explanation: Pushforwards
Let $\phi: G \to H$ be a epimorphism and let $f:G \to \mathbb{C}$ be a function. The pushforward of $f$ along $\phi$, denoted $\phi_*(f)$,
is defined as
$$(\phi_*(f))(h) := \sum_{k \in \operatorname{ker}\phi} f(g + k), \quad \text{where } \phi(g) = h.$$
The pushforward "moves" the domain of the function $f$ to $H$, i.e. $\phi_*(f) : H \to \mathbb{C}$. First a preimage $g$ of $h$ under $\phi$ is obtained, then we sum over the kernel. Since such a sum may contain an infinite number of terms, we bound it using a norm. Below is an example where we:
Define a Gaussian $f(x) = \exp(-kx^2)$ on $\mathbb{Z}$
Use pushforward to "move" it with $\phi(g) = g \in \operatorname{Hom}(\mathbb{Z}, \mathbb{Z}_{10})$
End of explanation
terms = 1
# Pushforward of the function along phi
gauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)
# Sample the functions and plot them
pushforward_sampled = gauss_on_Z_10.sample(sample_points)
plt.figure(figsize = (8, 3))
label = 'A gaussian function on $\mathbb{Z}$ and \
pushforward to $\mathbb{Z}_{10}$ with few terms in the sum'
plt.title(label)
plt.plot(s_points, f_sampled, '-o', label ='Original')
plt.plot(s_points, pushforward_sampled, '-o', label ='Pushforward')
plt.legend(loc = 'best')
plt.grid(True)
plt.show()
Explanation: First we do a pushforward with only one term. Not enough terms are present in the sum to capture what the pushforward would look like if the sum went to infinity.
End of explanation
terms = 9
gauss_on_Z_10 = gauss_on_Z.pushforward(phi, terms)
# Sample the functions and plot them
pushforward_sampled = gauss_on_Z_10.sample(sample_points)
plt.figure(figsize = (8, 3))
plt.title('A gaussian function on $\mathbb{Z}$ and \
pushforward to $\mathbb{Z}_{10}$ with enough terms')
plt.plot(s_points, f_sampled, '-o', label ='Original')
plt.plot(s_points, pushforward_sampled, '-o', label ='Pushforward')
plt.legend(loc = 'best')
plt.grid(True)
plt.show()
Explanation: Next we do a pushforward with more terms in the sum, this captures what the pushforward would look like if the sum went to infinity.
End of explanation |
14,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 1 - Geometry
In this tutorial we explore how simulated geometries can be defined and initial magnetisation states specified. The package we use to define finite difference meshes and fields is discretisedfield.
Step1: Defining the geometry
Let us say that we need to define a nanocube mesh with edge length $L=100\,\text{nm}$ and discretisation cell $(d, d, d)$, with $d=10 \,\text{nm}$. For that we need to define two points $p_{1}$ and $p_{2}$ between which the mesh spans and provide them (together with the discretisation cell) to the Mesh class
Step2: We can then inspect some basic parameters of the mesh
Step3: Number of discretisation cells in all three directions
Step4: Minimum mesh domain coordinate
Step5: Maximum mesh domain coordinate
Step6: Or we can visualise the mesh domain and a discretisation cell
Step7: Defining a field on a geometry
After we defined a mesh, we can define different finite difference fields. For that, we use Field class. We need to provide the mesh, dimension of the data values, and the value of the field. Let us define a 3d-vector field (dim=3) that is uniform in the $(1, 0, 0)$ direction.
Step8: A simple slice visualisation of the mesh in the $z$ direction at $L/2$ is
Step9: Spatially varying field
When we defined a uniform vector field, we used a tuple (1, 0, 0) to define its value. However, we can also provide a Python function if we want to define a non-uniform field. This function takes the position in the mesh as input, and returns a value that the field should have at that point
Step10: The field object can be treated as a mathematical function - if we pass a position tuple to the function, it will return the vector value of the field at that location
Step11: In micromagnetics, the saturation magnetisation $M_\mathrm{s}$ is typically constant (at least for each position). The Field constructor accepts an additional parameter norm which we can use for that
Step12: Spatially varying norm $M_\mathrm{s}$
By defining different norms, we can specify different geometries, so that $M_\text{s}=0$ outside the mesh. For instance, let us assume we want to define a sphere of radius $L/2$ and magnetise it in the negative $y$ direction.
Step13: Exercise 1a
The code below defines as thin film (thickness $t$) in the x-y plane. Extend the code in the following cell so that the magnetisation $M_\mathrm{s}$ is $10^7\mathrm{A/m}$ in a disk of thickness $t = 10 \,\text{nm}$ and diameter $d = 120 \,\text{nm}$. The disk is centred around the origin (0, 0, 0). The magnetisation should be $\mathbf{m} = (1, 0, 0)$.
Step14: Exercise 1b
Extend the previous example in the next cell so that the magnetisation is
Step15: Exercise 2
Extend the code sample provided below to define the following geometry with $10\,\text{nm}$ thickness | Python Code:
import discretisedfield as df
%matplotlib inline
Explanation: Tutorial 1 - Geometry
In this tutorial we explore how simulated geometries can be defined and initial magnetisation states specified. The package we use to define finite difference meshes and fields is discretisedfield.
End of explanation
L = 100e-9 # edge length (m)
d = 10e-9 # cell size (m)
p1 = (0, 0, 0) # first point of cuboid containing simulation geometry
p2 = (L, L, L) # second point
cell = (d, d, d) # discretisation cell
mesh = df.Mesh(p1=p1, p2=p2, cell=cell) # mesh definition
Explanation: Defining the geometry
Let us say that we need to define a nanocube mesh with edge length $L=100\,\text{nm}$ and discretisation cell $(d, d, d)$, with $d=10 \,\text{nm}$. For that we need to define two points $p_{1}$ and $p_{2}$ between which the mesh spans and provide them (together with the discretisation cell) to the Mesh class:
End of explanation
mesh.l # edge length
Explanation: We can then inspect some basic parameters of the mesh:
Edge length:
End of explanation
mesh.n # number of cells
Explanation: Number of discretisation cells in all three directions:
End of explanation
mesh.pmin # minimum mesh domain coordinate
Explanation: Minimum mesh domain coordinate:
End of explanation
mesh.pmax # maximum mesh domain coordinate
Explanation: Maximum mesh domain coordinate:
End of explanation
mesh
Explanation: Or we can visualise the mesh domain and a discretisation cell:
End of explanation
m = df.Field(mesh, dim=3, value=(1, 0, 0))
Explanation: Defining a field on a geometry
After we defined a mesh, we can define different finite difference fields. For that, we use Field class. We need to provide the mesh, dimension of the data values, and the value of the field. Let us define a 3d-vector field (dim=3) that is uniform in the $(1, 0, 0)$ direction.
End of explanation
m.plot_slice("z", L/2);
Explanation: A simple slice visualisation of the mesh in the $z$ direction at $L/2$ is:
End of explanation
def m_value(pos):
x, y, z = pos # unpack position into individual components
if x > L/4:
return (1, 1, 0)
else:
return (-1, 0, 0)
m = df.Field(mesh, dim=3, value=m_value)
m.plot_slice("z", L/2);
Explanation: Spatially varying field
When we defined a uniform vector field, we used a tuple (1, 0, 0) to define its value. However, we can also provide a Python function if we want to define a non-uniform field. This function takes the position in the mesh as input, and returns a value that the field should have at that point:
End of explanation
point = (0, 0, 0)
m(point)
m([90e-9, 0, 0])
Explanation: The field object can be treated as a mathematical function - if we pass a position tuple to the function, it will return the vector value of the field at that location:
End of explanation
Ms = 8e6 # saturation magnetisation (A/m)
m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
m([0, 0, 0])
Explanation: In micromagnetics, the saturation magnetisation $M_\mathrm{s}$ is typically constant (at least for each position). The Field constructor accepts an additional parameter norm which we can use for that:
End of explanation
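# Optional sanity check (not part of the original tutorial): the norm argument should
# rescale every vector to the requested magnitude, so the value returned at any point
# is expected to have magnitude Ms. This assumes m(...) returns a 3-component sequence.
import numpy as np
print(np.linalg.norm(m((0, 0, 0))), Ms)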
mesh = df.Mesh(p1=(-L/2, -L/2, -L/2), p2=(L/2, L/2, L/2), cell=(d, d, d))
def Ms_value(pos):
x, y, z = pos
if (x**2 + y**2 + z**2)**0.5 < L/2:
return Ms
else:
return 0
m = df.Field(mesh, dim=3, value=(0, -1, 0), norm=Ms_value)
m.plot_slice("z", 0);
Explanation: Spatially varying norm $M_\mathrm{s}$
By defining different norms, we can specify different geometries, so that $M_\text{s}=0$ outside the geometry of interest. For instance, let us assume we want to define a sphere of radius $L/2$ and magnetise it in the negative $y$ direction.
End of explanation
t = 10e-9 # thickness (m)
d = 120e-9 # diameter (m)
cell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)
Ms = 1e7 # saturation magnetisation (A/m)
mesh = df.Mesh(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, t), cell=cell)
def Ms_value(pos):
x, y, z = pos
# insert missing code here
return Ms
m = df.Field(mesh, value=(1, 0, 0), norm=Ms_value)
m.plot_slice("z", 0);
Explanation: Exercise 1a
The code below defines a thin film (thickness $t$) in the x-y plane. Extend the code in the following cell so that the magnetisation $M_\mathrm{s}$ is $10^7\mathrm{A/m}$ in a disk of thickness $t = 10 \,\text{nm}$ and diameter $d = 120 \,\text{nm}$. The disk is centred around the origin (0, 0, 0). The magnetisation should be $\mathbf{m} = (1, 0, 0)$.
End of explanation
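# One possible solution sketch for exercise 1a (not the official answer; the helper
# names Ms_value_disk and m_disk are illustrative): keep Ms inside a disk of diameter d
# centred on the z-axis and set the norm to zero elsewhere.
def Ms_value_disk(pos):
    x, y, z = pos
    if x**2 + y**2 <= (d/2)**2:
        return Ms
    return 0

m_disk = df.Field(mesh, value=(1, 0, 0), norm=Ms_value_disk)
m_disk.plot_slice("z", t/2);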
t = 10e-9 # thickness (m)
d = 120e-9 # diameter (m)
cell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)
Ms = 1e7 # saturation magnetisation (A/m)
mesh = df.Mesh(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, t), cell=cell)
def Ms_value(pos):
x, y, z = pos
# Copy code from exercise 1a.
return Ms
def m_value(pos):
x, y, z = pos
# Insert missing code here to get the right magnetisation.
return (1, 0, 0)
m = df.Field(mesh, value=m_value, norm=Ms_value)
m.plot_slice("z", 0);
Explanation: Exercise 1b
Extend the previous example in the next cell so that the magnetisation is:
$$\mathbf{m} = \begin{cases} (-1, 0, 0) & \text{for } y \le 0 \\ (1, 0, 0) & \text{for } y > 0 \end{cases}$$
with saturation magnetisation $10^{7} \,\text{A}\,\text{m}^{-1}$.
End of explanation
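# One possible solution sketch for exercise 1b (not the official answer; names are
# illustrative): reuse the disk norm from the sketch above (or your own Ms_value) and
# split the magnetisation direction on the sign of y, as stated in the text.
def m_value_split(pos):
    x, y, z = pos
    if y <= 0:
        return (-1, 0, 0)
    return (1, 0, 0)

m_split = df.Field(mesh, value=m_value_split, norm=Ms_value_disk)
m_split.plot_slice("z", t/2);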
cell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)
Ms = 8e6 # saturation magnetisation (A/m)
mesh = df.Mesh(p1=(0, 0, 0), p2=(100e-9, 50e-9, 10e-9), cell=cell)
def Ms_value(pos):
x, y, z = pos
# Insert missing code here to get the right shape of geometry.
return Ms
def m_value(pos):
x, y, z = pos
if 20e-9 < x <= 30e-9:
return (1, 1, -1)
else:
return (1, 1, 1)
m = df.Field(mesh, value=m_value, norm=Ms_value)
m.plot_slice("z", 0);
Explanation: Exercise 2
Extend the code sample provided below to define the following geometry with $10\,\text{nm}$ thickness:
<img src="geometry_exercise2.png" width="400">
The magnetisation saturation is $8 \times 10^{6} \,\text{A}\,\text{m}^{-1}$ and the magnetisation direction is as shown in the figure.
End of explanation |
14,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bar Chart Race in Python with Matplotlib
In roughly 50 lines of code.
How easy would it be to re-create a bar chart race in Python using Jupyter and Matplotlib?
Turns out, in less than 50 lines of code, you can reasonably re-create a reusable bar chart race in Python with Matplotlib.
import the dependent libraries
Step1: Data
Read the city populations dataset with pandas.
We only need 4 columns to work with 'name', 'group', 'year', 'value'.
Typically, a name is mapped to a group and each year has one value.
Step2: Color, Labels
We'll use colors and group_lk to add color to the bars.
Step3: Run the cell below: draw_barchart(2018) draws the bar chart for year=2018
Step4: Animate
To animate, we will use FuncAnimation from matplotlib.animation.
FuncAnimation makes an animation by repeatedly calling a function (that draws on canvas).
In our case, it'll be draw_barchart.
The frames argument accepts the values you want to run draw_barchart with -- we'll
run from year 1900 to 2018.
Run the cell below.
Step5: xkcd-style
Turning your matplotlib plots into xkcd styled ones is pretty easy.
You can simply turn on xkcd sketch-style drawing mode with plt.xkcd.
Step6: Step by step
Step7: Basic chart
Now, let's plot a basic bar chart. We start by creating a figure and an axes.
Then, we use ax.barh(x, y) to draw horizontal barchart.
Step8: Color, Labels
Next, let's add group labels and colors based on groups.
We'll use colors and group_lk to add color to the bars.
Step9: Polish Style
For convenience, let's move our code to a draw_barchart function.
We need to style the following items | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
from IPython.display import HTML
Explanation: Bar Chart Race in Python with Matplotlib
In roughly 50 lines of code.
How easy would it be to re-create a bar chart race in Python using Jupyter and Matplotlib?
Turns out, in less than 50 lines of code, you can reasonably re-create a reusable bar chart race in Python with Matplotlib.
import the dependent libraries
End of explanation
url = 'https://gist.githubusercontent.com/johnburnmurdoch/4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'
df = pd.read_csv(url, usecols=['name', 'group', 'year', 'value'])
df.head(3)
Explanation: Data
Read the city populations dataset with pandas.
We only need 4 columns to work with 'name', 'group', 'year', 'value'.
Typically, a name is mapped to a group and each year has one value.
End of explanation
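# A quick optional look at the span of the data (not in the original article); these
# are standard pandas calls on the columns loaded above.
print(df['year'].agg(['min', 'max']))
print(df['name'].nunique(), 'distinct city names')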
colors = dict(zip(
["India", "Europe", "Asia", "Latin America", "Middle East", "North America", "Africa"],
["#adb0ff", "#ffb3ff", "#90d595", "#e48381", "#aafbff", "#f7bb5f", "#eafb50"]
))
group_lk = df.set_index('name')['group'].to_dict()
Explanation: Color, Labels
We'll use colors and group_lk to add color to the bars.
End of explanation
fig, ax = plt.subplots(figsize=(15, 8))
def draw_barchart(current_year):
dff = df[df['year'].eq(current_year)].sort_values(by='value', ascending=True).tail(10)
ax.clear()
ax.barh(dff['name'], dff['value'], color=[colors[group_lk[x]] for x in dff['name']])
dx = dff['value'].max() / 200
for i, (value, name) in enumerate(zip(dff['value'], dff['name'])):
ax.text(value-dx, i, name, size=14, weight=600, ha='right', va='bottom')
ax.text(value-dx, i-.25, group_lk[name], size=10, color='#444444', ha='right', va='baseline')
ax.text(value+dx, i, f'{value:,.0f}', size=14, ha='left', va='center')
ax.text(1, 0.4, current_year, transform=ax.transAxes, color='#777777', size=46, ha='right', weight=800)
ax.text(0, 1.06, 'Population (thousands)', transform=ax.transAxes, size=12, color='#777777')
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
ax.xaxis.set_ticks_position('top')
ax.tick_params(axis='x', colors='#777777', labelsize=12)
ax.set_yticks([])
ax.margins(0, 0.01)
ax.grid(which='major', axis='x', linestyle='-')
ax.set_axisbelow(True)
ax.text(0, 1.15, 'The most populous cities in the world from 1500 to 2018',
transform=ax.transAxes, size=24, weight=600, ha='left', va='top')
ax.text(1, 0, 'by @pratapvardhan; credit @jburnmurdoch', transform=ax.transAxes, color='#777777', ha='right',
bbox=dict(facecolor='white', alpha=0.8, edgecolor='white'))
plt.box(False)
draw_barchart(2018)
Explanation: Run the cell below: draw_barchart(2018) draws the bar chart for year=2018
End of explanation
fig, ax = plt.subplots(figsize=(15, 8))
animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1900, 2019))
HTML(animator.to_jshtml())
# or use animator.to_html5_video() or animator.save()
Explanation: Animate
To animate, we will use FuncAnimation from matplotlib.animation.
FuncAnimation makes an animation by repeatedly calling a function (that draws on canvas).
In our case, it'll be draw_barchart.
The frames argument accepts the values you want to run draw_barchart with -- we'll
run from year 1900 to 2018.
Run the cell below.
End of explanation
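# Optional: persist the animation instead of (or as well as) displaying it inline.
# FuncAnimation.save writes a video file; the mp4 line assumes ffmpeg is installed and
# the gif line assumes the pillow writer is available. Filenames are placeholders.
# animator.save('bar_chart_race.mp4', fps=20)
# animator.save('bar_chart_race.gif', writer='pillow', fps=20)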
with plt.xkcd():
fig, ax = plt.subplots(figsize=(15, 8))
draw_barchart(2018)
Explanation: xkcd-style
Turning your matplotlib plots into xkcd styled ones is pretty easy.
You can simply turn on xkcd sketch-style drawing mode with plt.xkcd.
End of explanation
current_year = 2018
dff = df[df['year'].eq(current_year)].sort_values(by='value', ascending=False).head(10)
dff
Explanation: Step by step: Details
We'll now go over the output from scratch.
Data transformations
We are interested in the top values for a given year.
Using pandas transformations, we will get top 10 values.
End of explanation
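# An equivalent one-liner (a side note, not from the original article): nlargest
# avoids the explicit sort_values/head chain used above.
df[df['year'].eq(current_year)].nlargest(10, 'value')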
fig, ax = plt.subplots(figsize=(15, 8))
ax.barh(dff['name'], dff['value'])
Explanation: Basic chart
Now, let's plot a basic bar chart. We start by creating a figure and an axes.
Then, we use ax.barh(x, y) to draw horizontal barchart.
End of explanation
fig, ax = plt.subplots(figsize=(15, 8))
dff = dff[::-1]
ax.barh(dff['name'], dff['value'], color=[colors[group_lk[x]] for x in dff['name']])
for i, (value, name) in enumerate(zip(dff['value'], dff['name'])):
ax.text(value, i, name, ha='right')
ax.text(value, i-.25, group_lk[name], ha='right')
ax.text(value, i, value, ha='left')
ax.text(1, 0.4, current_year, transform=ax.transAxes, size=46, ha='right')
Explanation: Color, Labels
Next, let's add group labels and colors based on groups.
We'll use colors and group_lk to add color to the bars.
End of explanation
fig, ax = plt.subplots(figsize=(15, 8))
draw_barchart(2018)
Explanation: Polish Style
For convenience let's move our code to draw_barchart function.
We need to style the following items:
Text: font sizes, color, orientation
Format: comma separated values and axes tickers
Axis: Move to top, color, add subtitle
Grid: Add lines behind bars
Remove box frame
Add title, credit
End of explanation |
14,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Teknisk Tirsdag
Step1: Hvordan ser vores originale datasæt ud?
nb! Der kommer en advarsel når dette køres, og dette er måske en ikke dårlig ting.
Step3: Resning af data
Du konstatere hurtigt at dit datasæt er noget værre skrammel!
Derfor er du, som det første, nødt til at rense data for at forsøge at få noget mening ud af dine analyser.
Til dette har vi lavet følgende metoder, som henter spillerne ind fra en csv fil.
Step4: Opgave 0
Step5: Hvilke datatyper har vi i vores datasæt?
Det næste vigtige apsekt i en renselsesprocess, er at undersøge hvilke datatyper som vores datasæt indeholder, og om nogle felter er blanke dvs. er None, Null eller NaN.
Til dette kan man i Pandas bruge følgende simple kommandoer.
Step6: Nogle vigtige ting at tage med fra denne analyse er
Step7: Ok - rigtig mange af de kolonner som repræsentere de fysiske egenskaber har skrald i sig. Det må vi lige undersøge nærmere
Step8: Ok vores mærkelig værdier skyldes faktisk, at man i Fifa har tillægs-værdier på nogle af attributterne. For nemhedens skylds fjerner vi blot disse ekstra elementer. En anden fremgangsmetode kunne være at fjerne rækken med det "dårlige" data i. Dette gælder dog kun for de fysiske attributer.
Step9: Fantastisk! Nu har du fået converteret de fysiske attributter. Du bemærket tidligere, at klubberne også var markeret til at have blandet typer. Dette kan undersøges ret let
Step10: Din mavefornemmelse virkede! Der er nogle af spillerne som ikke har klubber, dvs. de er arbejdsløse. Dette er ikke nogen katastrofe. Vi kan enten fjerne dem (de må simpelthen være så dårlige at ingen gider ansætte dem), eller også kan vi lade dem være i (de fortjener en chance). I dette forsøg vælges det sidste udvalg.
Du bemærket også at Wage (Løn) og Value (Værdi) er object typer. Hvis vi undersøger disse kolonner nærmere ser vi det skyldes dels, at vi regner i € (Euro) og at dataindsamlerne har været så venlig - at erstatte antallet af 0'er med hhv. K for 000 og M 000000.
Step11: Fedt! nu kan du konstatere at de eneste object-type variable er
Step12: Sådan! Det eneste vi mangler nu, er at lave de egentlige datasæt vi skal bruge til træningen af vores Machine Learning algoritme.
Vi laver 3 sæt hhv. | Python Code:
#PURE PYTHON!!!!
from IPython.display import display, Markdown
import numpy as np
import pandas as pd
import os
import re
# path = %pwd
# path += '/fifa-18-demo-player-dataset/CompleteDataset.csv'
# Til Windows
path = '.\\Downloads\\Fifa2018-master\\Fifa2018-master'
path += '\\fifa-18-demo-player-dataset\\CompleteDataset.csv'
Explanation: Teknisk Tirsdag: Data Cleaning
<img src="https://imgs.xkcd.com/comics/data.png"
align="center"
width="20%">
Congratulations!
You have just been hired as a Data Scientist for a Norwegian company that advises international football clubs on which players to buy.
Today you start investigating the Danish player market for potential candidates for the very biggest clubs in Europe, and as the newly appointed data scientist it is your job to find the hidden talents in Denmark.
You have been handed a dataset of football players for 2018 and you have to run some analyses...
End of explanation
input_data_frame = pd.DataFrame().from_csv(path=path, encoding='utf-8')
input_data_frame
Explanation: What does our original dataset look like?
NB! A warning appears when this cell is run, and that is perhaps not a bad thing.
End of explanation
def clean_raw_data(data_frame, *args):
    This method removes unwanted columns and fills 0 for the goalkeepers' non-goalkeeper attributes
    @input: data_frame: The dataset we want to remove unwanted columns from.
    @input: *args: The unwanted columns are passed as string arguments, e.g. 'col_x', 'col_y', '...', etc.
    @output: A dataframe with only the wanted columns left.
false_cols = [i for i in args if i not in data_frame.columns]
if len(false_cols) != 0:
        print('The following column(s) are not in the Dataframe: '+', '.join(false_cols))
return data_frame[[i for i in data_frame.columns if i not in args]]
Explanation: Cleaning the data
You quickly realise that your dataset is a real mess!
Therefore, the first thing you have to do is clean the data in order to get anything meaningful out of your analyses.
For that we have written the following methods, which load the players from a csv file.
End of explanation
df = clean_raw_data(input_data_frame,'**INSERT COLUMN NAMES HERE AS STRING ARGUMENTS!**')
df
Explanation: Exercise 0: Clean your data!
As the very first task, your manager wants you to remove unwanted columns from your dataset, as they are irrelevant. Luckily, some of your colleagues have written a method for removing unwanted columns, so what you have to do is identify the columns that are irrelevant for this analysis.
HINT: Read the method documentation to find out how to pass column names to the method.
End of explanation
g = df.columns.to_series().groupby(df.dtypes).groups
d = {key.name: list(val) for key, val in g.items()}
for navn, antal, dtype in list(zip(df.columns,df.count().tolist(), df.dtypes.tolist())):
    print('Column name: {:20s} number of filled fields: {:<9.0f} datatype: {}'.format(navn, antal, dtype))
Explanation: Which datatypes do we have in our dataset?
The next important aspect of a cleaning process is to investigate which datatypes the dataset contains, and whether some fields are blank, i.e. None, Null or NaN.
For that you can use the following simple commands in Pandas.
End of explanation
def is_number_or_string(x):
try:
float(x)
return 'number'
except ValueError as va:
return 'string'
df_string_float = df.applymap(is_number_or_string)
dict_types = []
for name in d['object']:
test_df = df_string_float.groupby(name)[name].count()
dict_types = dict_types + list(zip([name]*2,test_df.keys(),test_df.values))
data_frame_types = pd.DataFrame(dict_types,columns=['name','dtypes','count'])
list_of_types = (data_frame_types
.pivot(index='name', columns='dtypes', values='count')
.reset_index()
.fillna(0)
.sort_values(['number','string'],ascending=False))
list_of_types
Explanation: Some important things to take away from this analysis are:
* We can say with great certainty that there are {{len(df)}} players in our dataset, but some columns only have {{df.CAM.count()}}.
* We have a lot of fields of type float64 and int64, which is good, but the following columns belong to the object class: {{', '.join(d['object'])}}
This is not so good, since we may want to use many of these columns. We have to fix that. The first step is to identify all the types present in each of the object columns. A simple but effective way is to check whether each element in a column is a number or a string.
End of explanation
def contains_not_number(x):
matched = re.findall(r'[^\d\. ]+',str(x),re.IGNORECASE)
if len(matched) != 0:
return x
mixed_type_cols = list(list_of_types.loc[list_of_types['number'] != 0.0].name)
for col in mixed_type_cols:
print((col, list(filter(lambda x: contains_not_number(x) ,df[col].unique()))))
Explanation: OK - quite a few of the columns that represent the physical attributes contain junk. We have to look into that a bit closer
End of explanation
def convert_columns(data_frame, mixed_cols):
for col in mixed_cols:
try:
data_frame[col] = data_frame[col].str[:2].astype(np.float64)
except Exception as e:
print(e)
return data_frame
df = convert_columns(df,mixed_type_cols[:-1])
df.dtypes
Explanation: OK, our strange values are actually caused by Fifa adding bonus values to some of the attributes. For simplicity we simply remove these extra elements. Another approach could be to drop the rows containing the "bad" data. This only applies to the physical attributes, though.
End of explanation
df['Club'].sort_values(ascending=False, na_position='first')
Explanation: Fantastic! You have now converted the physical attributes. You noticed earlier that the clubs were also flagged as having mixed types. This is easy to investigate:
End of explanation
df[['Wage','Value']]
def parse_of_wage(val):
val = re.sub('€', '', val)
valdict = {'M': 1000000, 'K': 1000, '': 0}
try:
splitter = re.findall('(\d*\.?\d)([MK]?)', val)[0]
return float(splitter[0])*valdict[splitter[1]]
except IndexError as e:
print(splitter)
df['Value'] = df['Value'].apply(lambda x: parse_of_wage(x))
df['Wage'] = df['Wage'].apply(lambda x: parse_of_wage(x))
df = df.fillna(value=0.0)
g = df.columns.to_series().groupby(df.dtypes).groups
d = {key.name: list(val) for key, val in g.items()}
for key, val in df.dtypes.items():
    print('Variable name: {:20s} Variable type: {}'.format(key,val))
Explanation: Your gut feeling was right! Some of the players have no club, i.e. they are unemployed. This is no catastrophe. We can either remove them (they must simply be so bad that nobody wants to hire them), or we can leave them in (they deserve a chance). In this experiment we go with the latter option.
You also noticed that Wage and Value are object types. If we look at these columns more closely, we see that this is partly because the amounts are in € (Euro) and partly because the data collectors were kind enough to replace the trailing zeros with K for 000 and M for 000000.
End of explanation
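# Quick sanity check of the parser defined above (the sample strings are made up but
# follow the €/K/M format described in the text).
print(parse_of_wage('€110.5M'), parse_of_wage('€565K'), parse_of_wage('€0'))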
import matplotlib.pyplot as plt
import seaborn as sb
antal_bins = 2  ### CHANGE ME TO SOMETHING SENSIBLE! ###
overall_performance = (df
.groupby('Club',as_index=False)['Overall'].mean()
.sort_values(by='Overall',ascending = False))
ax = sb.distplot(overall_performance['Overall'], bins= antal_bins ,kde=False)
ax.set_xlabel('Average overall performance')
ax.set_ylabel('Number of clubs')
plt.show()
top_klub_ratio = None  ### Remove None and FILL ME IN ###
top_clubs = overall_performance[overall_performance['Overall'] >= top_klub_ratio]['Club']
Explanation: Great! You can now see that the only object-type variables are: {{', '.join(d['object'][:-1])}} and {{d['object'][-1]}}
Adding labels to the dataset
To be able to tell which players could be candidates, we need to assign every player a label. Since our clientele are top clubs in Europe, we take those clubs' players as the starting point; they make up label 1. For the model to be able to tell these players apart from other players, it also needs players that have nothing to do with those clubs. Those players get the label value 0.
Your customers are:
Barcelona, Real Madrid, Juventus, AC Milan, Bayern München, Arsenal and Manchester City.
Exercise 1: Choose the top clubs
As a nerdy data scientist, however, you want to make sure you get the best result, so you decide that we need to find a more general population of clubs that can be classified as top clubs.
We start by looking at how each club's overall performance, given by Overall, is distributed.
Your task is
* Play around with the plot below and choose a suitable number of bins, so that we get a fair picture of which clubs are the cream of the crop in Europe.
* Based on your analysis, choose the value that separates the top clubs from all the others.
End of explanation
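# A hedged hint, not the official answer: the distribution of the club averages can
# guide the cut-off, e.g. the 90th percentile of the mean Overall per club.
print(overall_performance['Overall'].describe())
print(overall_performance['Overall'].quantile(0.9))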
from sklearn.model_selection import train_test_split
dansker_set = df[df['Nationality'] == 'Denmark']
topklub_set = df[df['Club'].isin(top_clubs)]
ikke_topklub_set = df[(~df['Club'].isin(top_clubs)) & (df['Nationality'] != 'Denmark')].sample(len(topklub_set))
overall_set = pd.concat([topklub_set, ikke_topklub_set])
print('Training set size: {}'.format(len(overall_set)))
Explanation: There we go! The only thing left now is to build the actual datasets we need for training our Machine Learning algorithm.
We create 3 sets, namely:
* The Danish set: a dataset with all Danish players, which we need at the end in order to make our recommendations to the customer.
* topklub_set: a dataset with only the top clubs.
* overall_set: a dataset combining the selected top clubs and a sample of non-top clubs into one. This is our training set.
End of explanation |
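# A possible next step (a sketch, not from the original notebook): attach the 0/1 label
# described above and split the training frame with the already imported
# train_test_split. Column and variable names below are illustrative assumptions.
overall_set = overall_set.copy()
overall_set['label'] = overall_set['Club'].isin(top_clubs).astype(int)
feature_cols = overall_set.select_dtypes(include='number').columns.drop('label')
X_train, X_test, y_train, y_test = train_test_split(
    overall_set[feature_cols], overall_set['label'], test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)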
14,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:00
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
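# Illustrative only: BOOLEAN properties take an unquoted True or False, e.g.
#     DOC.set_value(True)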
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
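# Illustrative only (hypothetical text): free-text STRING properties take a
# quoted description, e.g.
#     DOC.set_value("Heat, freshwater and momentum fluxes are received from the coupler at every ocean time step.")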
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
14,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Backscattering Efficiency Validation
Scott Prahl
Apr 2021
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
Step1: Wiscombe tests
Since the backscattering efficiency is $|2S_1(-180^\circ)/x|^2$, it is easy to see that backscattering
should be the best comparison. For example, the asymmetry factor for this test case only has three significant
digits and the scattering efficiency only has two!
A typical test result looks like this
Step2: Spheres with a smaller refractive index than their environment
Step3: Non-absorbing spheres
Step4: Water droplets
Step5: Moderately absorbing spheres
Step6: Spheres with really big index of refraction
Step7: Backscattering Efficiency for Large Absorbing Spheres
For large spheres with absorption, backscattering efficiency should just be equal to the reflection for perpendicular light on a planar surface. | Python Code:
#!pip install --user miepython
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
Explanation: Backscattering Efficiency Validation
Scott Prahl
Apr 2021
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
End of explanation
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.55, 0.0)
x = 2*3.1415926535*0.525/0.6328
ref = 2.92534
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
m=complex(0.0, -1000.0)
x=0.099
ref = (4.77373E-07*4.77373E-07 + 1.45416E-03*1.45416E-03)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.101
ref = (5.37209E-07*5.37209E-07 + 1.54399E-03*1.54399E-03)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = (4.35251E+01*4.35251E+01 + 2.45587E+01*2.45587E+01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(2.91013E+03-4.06585E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
Explanation: Wiscombe tests
Since the backscattering efficiency is $|2S_1(-180^\circ)/x|^2$, it is easy to see that backscattering
should be the best comparison. For example, the asymmetry factor for this test case only has three significant
digits and the scattering efficiency only has two!
A typical test result looks like this:
```
MIEV0 Test Case 12: Refractive index: real 1.500 imag -1.000E+00, Mie size parameter = 0.055
NUMANG = 7 angles symmetric about 90 degrees
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 7.67526E-05 8.34388E-05 7.67526E-05 8.34388E-05 1.28530E-08 0.0000
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
30.00 0.866025 7.67433E-05 8.34349E-05 6.64695E-05 7.22517E-05 1.12447E-08 -0.1428
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
60.00 0.500000 7.67179E-05 8.34245E-05 3.83825E-05 4.16969E-05 8.02857E-09 -0.5999
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
90.00 0.000000 7.66833E-05 8.34101E-05 3.13207E-08 -2.03740E-08 6.41879E-09 -1.0000
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
120.00 -0.500000 7.66486E-05 8.33958E-05 -3.83008E-05 -4.17132E-05 8.01841E-09 -0.6001
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
150.00 -0.866025 7.66233E-05 8.33853E-05 -6.63499E-05 -7.22189E-05 1.12210E-08 -0.1429
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
180.00 -1.000000 7.66140E-05 8.33814E-05 -7.66140E-05 -8.33814E-05 1.28222E-08 0.0000
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
Angle S-sub-1 T-sub-1 T-sub-2
0.00 7.67526E-05 8.34388E-05 3.13207E-08 -2.03740E-08 7.67213E-05 8.34592E-05
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
180.00 7.66140E-05 8.33814E-05 3.13207E-08 -2.03740E-08 7.66453E-05 8.33611E-05
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
Efficiency factors for Asymmetry
Extinction Scattering Absorption Factor
0.101491 0.000011 0.101480 0.000491
( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000)
```
Perfectly conducting spheres
End of explanation
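The reference values in the cells below are reconstructed from Wiscombe's tabulated S1(180°) amplitudes via $Q_{back}=4|S_1(180^\circ)|^2/x^2$; the cells simply inline that arithmetic. A small helper of the same form (a sketch, not part of the original notebook) would be:
def qback_from_wiscombe(s1_real, s1_imag, x):
    # backscattering efficiency from a tabulated S1(180 deg) amplitude
    return 4 * (s1_real**2 + s1_imag**2) / x**2
# e.g. the x = 0.099 perfectly conducting case below:
# qback_from_wiscombe(4.77373E-07, 1.45416E-03, 0.099)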
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(0.75, 0.0)
x=0.099
ref = (1.81756E-08*1.81756E-08 + 1.64810E-04*1.64810E-04)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.101
ref = (2.04875E-08*2.04875E-08 + 1.74965E-04*1.74965E-04)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10.0
ref = (1.07857E+00*1.07857E+00 + 3.60881E-02*3.60881E-02)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1000.0
ref = (1.70578E+01*1.70578E+01 + 4.84251E+02* 4.84251E+02)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
Explanation: Spheres with a smaller refractive index than their environment
End of explanation
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.5, 0)
x=10
ref = abs(4.322E+00 + 4.868E+00*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = abs(4.077E+01 + 5.175E+01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1000
ref = abs(5.652E+02 + 1.502E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
Explanation: Non-absorbing spheres
End of explanation
print(" old")
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.33, -0.00001)
x=1
ref = (2.24362E-02*2.24362E-02 + 1.43711E-01*1.43711E-01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = (5.65921E+01*5.65921E+01 + 4.65097E+01*4.65097E+01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(-1.82119E+02 -9.51912E+02*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
Explanation: Water droplets
End of explanation
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.5, -1.0)
x=0.055
ref = abs(7.66140E-05 + 8.33814E-05*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.056
ref = (8.08721E-05*8.08721E-05 + 8.80098E-05*8.80098E-05)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1.0
ref = (3.48844E-01*3.48844E-01 + 1.46829E-01*1.46829E-01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100.0
ref = (2.02936E+01*2.02936E+01 + 4.38444E+00*4.38444E+00)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(-2.18472E+02 -2.06461E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
Explanation: Moderately absorbing spheres
End of explanation
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(10, -10.0)
x=1
ref = abs(4.48546E-01 + 7.91237E-01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = abs(-4.14538E+01 -1.82181E+01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(2.25248E+03 -3.92447E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
Explanation: Spheres with really big index of refraction
End of explanation
x = np.logspace(1, 5, 20) # also in microns
kappa=1
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)
qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')
kappa=0.001
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)
qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')
plt.ylim(0,0.2)
plt.title("Backscattering Efficiency for m=1.5 - i $\kappa$")
plt.xlabel("Size Parameter")
plt.ylabel("$Q_{back}$")
plt.grid()
Explanation: Backscattering Efficiency for Large Absorbing Spheres
For large spheres with absorption, backscattering efficiency should just be equal to the reflection for perpendicular light on a planar surface.
End of explanation |
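As a quick numerical check of that statement (a sketch, not part of the original notebook), the normal-incidence Fresnel reflectance for m = 1.5 - 1j is:
m = 1.5 - 1.0j
R = abs(m - 1)**2 / abs(m + 1)**2
print(R)   # about 0.172, the level the kappa=1 curve approaches at large size parameters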
14,471 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to groupby counts of dates per month and year in a specific output. I can do it per day but can't get the same output per month/year. | Problem:
import pandas as pd
d = ({'Date': ['1/1/18','1/1/18','1/1/18','2/1/18','3/1/18','1/2/18','1/3/18','2/1/19','3/1/19'],
'Val': ['A','A','B','C','D','A','B','C','D']})
df = pd.DataFrame(data=d)
def g(df):
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%y')
y = df['Date'].dt.year
m = df['Date'].dt.month
w = df['Date'].dt.weekday
df['Count_d'] = df.groupby('Date')['Date'].transform('size')
df['Count_m'] = df.groupby([y, m])['Date'].transform('size')
df['Count_y'] = df.groupby(y)['Date'].transform('size')
df['Count_w'] = df.groupby(w)['Date'].transform('size')
df['Count_Val'] = df.groupby(['Date','Val'])['Val'].transform('size')
return df
df = g(df.copy()) |
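A quick sanity check on the sample data above (not part of the original answer; five of the nine rows fall in January 2018 and seven fall in 2018 overall):
assert df.loc[0, 'Count_m'] == 5   # rows in January 2018
assert df.loc[0, 'Count_y'] == 7   # rows in 2018
print(df)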
14,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports
Step1: Read the data
This is a breast cancer diagnostic dataset
Step2: Train/test split
Step3: Modelling with standard train/test split
Step4: Modelling with k-fold cross validation | Python Code:
# Import pandas and numpy
import pandas as pd
import numpy as np
# Import the classifiers we will be using
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# Import train/test split function
from sklearn.model_selection import train_test_split
# Import cross validation scorer
from sklearn.model_selection import cross_val_score
# Import ROC AUC scoring function
from sklearn.metrics import roc_auc_score
Explanation: Imports
End of explanation
# Read in our dataset, using the parameter 'index_col' to select the index
# Let's see the header
# And the shape
# Assign the features and the target
Explanation: Read the data
This is a breast cancer diagnostic dataset: these features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass.
"diagnosis" is our target: 0 for benign, 1 for malignant.
End of explanation
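One possible way to fill in the skeleton above (a sketch; the file name breast_cancer.csv and the index column are assumptions, not given in the original):
df = pd.read_csv('breast_cancer.csv', index_col=0)
df.head()
df.shape
X = df.drop('diagnosis', axis=1)
y = df['diagnosis']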
# Create the train/test split
Explanation: Train/test split
End of explanation
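A sketch of the split step (the test size and random seed are arbitrary choices, not specified in the original):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)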
# Choose the Decision Tree model
# Fit the model
# Make the predictions
# Score the predictions
# Print the score
# Choose the K-Nearest Neighbors model
# Fit the model
# Make the predictions
# Score the predictions
# Print the score
# Choose the Naive Bayes model
# Fit the model
# Make the predictions
# Score the predictions
# Print the score
# Choose the Random Forest model
# Fit the model
# Make the predictions
# Score the predictions
# Print the score
Explanation: Modelling with standard train/test split
End of explanation
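The four blocks above all follow the same fit/predict/score pattern; a sketch for the decision tree (the other three classifiers are handled identically):
tree = DecisionTreeClassifier()
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
score = roc_auc_score(y_test, y_pred)
print(score)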
# Choose the Decision Tree model
# Fit, predict and score in one step, using cross_val_score()
# Print the scores
# Print the mean score
# Choose the K-Nearest Neighbors model
# Fit, predict and score in one step, using cross_val_score()
# Print the scores
# Print the mean score
# Choose the Naive Bayes model
# Fit, predict and score in one step, using cross_val_score()
# Print the scores
# Print the mean score
# Choose the Random Forest model
# Fit, predict and score in one step, using cross_val_score()
# Print the scores
# Print the mean score
Explanation: Modelling with k-fold cross validation
End of explanation |
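A sketch of the cross-validated version for one model (cv=5 and roc_auc scoring are assumptions, not specified in the original):
tree = DecisionTreeClassifier()
scores = cross_val_score(tree, X, y, cv=5, scoring='roc_auc')
print(scores)
print(scores.mean())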
14,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the shellOneLiner module
When you handle large amounts of data through Python, there are cases where calling out to Unix commands lets you get the processing done quickly.
The shellOneLiner module invokes a shell one-liner from Python code, and can pass data from Python to Unix commands, read data from files, process the data with Unix commands, and hand the results back to Python.
Used together with the usp Tukubai commands (hereafter, Tukubai commands), the shellOneLiner module makes it possible to process large amounts of data efficiently. With the usp Tukubai commands, the file system can even be used much like an SQL database.
This article gives a brief tour of the shellOneLiner module and the usp Tukubai commands through usage examples.
Overview of the shellOneLiner module
The shellOneLiner module works as follows. When an instance of a shellOneLiner object is created, a group of Unix commands is launched to process the data and, if necessary, a thread is started to hand data over from the Python interpreter. The shellOneLiner object behaves as an iterator, so the results of the processing can be read back with a for statement and the like.
+--------------------+ +----------------+
| ==[Input Data]==> |
| Python Interpreter | | Unix Processes |
| <=[Output Data]=> |
|+------------------+| +---^------------+
||shellOneLiner || /
||module ---(Dispatch)------/
|+------------------+|
+--------------------+
Basic usage of the shellOneLiner module
When an instance of the module's ShellOneLiner class is created, it takes an arbitrary shell script as a string and executes it [*1]. The output of the shell script is returned as an iterator-type object.
<output object> = shellOneLiner.ShellOneLiner(<shell script>)
[*1] In other words, the shellOneLiner module launches shell commands directly. This can raise security issues, so great care is needed when using it.
Step1: By giving an iterator-type object to the input option when the instance is created, you can specify what is fed to the shell script's standard input. Both the shellOneLiner instance and the object given to the input option are, by default, iterators over lists [*2].
<output object> = shellOneLiner.ShellOneLiner(<shell script>, input=<input object>)
[*2] How input to the shell script, and output from the shell script, are interpreted as Python objects can be changed by supplying a conversion function: the reader option for the instance, and the writer option for the object given to the input option.
Step2: The data
The examples below operate on time-series data recorded in files, of the following form.
20150101000004 A 1
20150101000007 A 10
20150101000008 A 70
20150101000009 A 85
20150101000010 A 69
20150101000012 A 2
...
The data has one record per line, and each record consists of several space-separated fields. The contents of the fields are as follows.
First field
Step3: Here the Tukubai command dmerge is used.
The dmerge command merges its inputs, treating the specified field of each input as an already-sorted key [*1]. The first argument 'key=1' specifies that the merge is keyed on the first field. The following arguments set_A, set_B and set_C are the files that hold the input data. The data read back is interpreted by the shellOneLiner module as line-oriented, space-separated records and converted into a Python list object for each line.
[*1] The input data must already be sorted by the key.
Extracting data (1)
Even if the data arrives already merged, it is just as easy to pull out the data for one particular site. The next example extracts only the records for site A from the merged file above.
Step4: Here the Tukubai command selr is used.
The selr command extracts and outputs the records whose specified field equals the specified string. The first argument `2' and the second argument `A' specify that rows whose second field is `A' are selected. The input data is taken from standard input or from the file given as the third argument.
Extracting data (2)
More complex extractions are also possible. The next example extracts only the records that arrived on 2 January 2015.
Step5: What is used here is the standard Unix command grep.
By picking out the lines that begin with "20150102", the records that arrived on a particular day can be extracted.
Next is an example that extracts, from the arriving records, only the data that arrived during the midnight hour.
First, here is an example that appends an hour-of-day field to the original data.
Step6: Here the Tukubai command self is used.
The name of the self command is short for "select field"; it selects fields from the input data and extracts substrings. The third argument of self is the input file. The first argument "0" means that the whole input record is output. The second argument "1.9.2" means that a substring of the first field, two characters starting at the ninth character, is output. Applying self reshapes the original data into the format below, with a fourth field appended.
Fourth field
Step7: Here the Tukubai command delf is used.
The delf command deletes the fourth field.
The name of the delf command is short for "delete field"; it removes the specified fields from the input data.
Here "4" is given as the first argument, so the fourth field of the data read from standard input is removed [*1].
[*1] The earlier grep-based extraction of a particular day's records could also be written in this style, and conversely a grep regular expression could be crafted to extract the records that arrived at a particular time. Complex regular expressions, however, tend to be hard to read and also hurt performance, so for fixed-string matching the combination of self and selr is preferable.
Extracting data (3)
As a more involved example, the following extracts the records whose observed value is greater than 80.
Step8: Here the standard Unix command awk is used.
By writing a pattern that compares the third field, the records whose observed value is greater than 80 are extracted.
Extracting data (4)
By combining these extraction techniques, complex selection conditions can be written concisely.
The next example extracts, from the records that arrived at site A during the midnight hour, those whose observed value is below 40.
Step9: The first selr command extracts only the records that arrived at site A. The second through fourth commands of the pipeline then pick out the records that arrived during the midnight hour, and the fifth command finally extracts the records whose observed value is below 40.
Even a complicated selection condition like this can be written concisely by combining the extraction techniques.
Adding information to the data
There will be times when you want to attach some extra information to the time series. The next example reads the input data with information appended according to the time of day.
Before processing, prepare a file named weight with the following contents.
00 MidNight 1
01 MidNight 1
02 MidNight 1
03 MidNight 1
04 EarlyMorning 2
05 EarlyMorning 2
06 EarlyMorning 2
07 Morning 3
08 Morning 3
09 Morning 3
10 MidDay 5
11 MidDay 5
12 MidDay 5
13 MidDay 5
14 MidDay 5
15 MidDay 5
16 Evening 3
17 Evening 3
18 Evening 3
19 Night 2
20 Night 2
21 Night 2
22 MidNight 1
23 MidNight 1
The contents of each field are as follows.
First field
Step10: Here a new Tukubai command, cjoin2, is used.
The cjoin2 command joins the input data against the specified master file on the specified key field [*1]. With the first argument ``key=4'', the input data is matched against the file weight given as the second argument, using the fourth field of the input as the key.
[*1] The input data must already be sorted by the key.
Aggregating the data
Handling a large volume of data directly in Python is often difficult. By performing basic aggregation with commands before reading the data in, the analysis stays light and responsive.
Counting arrivals
The following example counts the number of records that arrived on each day.
Step11: Here a new Tukubai command, count, is used.
The self command first reshapes the original data into the following format.
First field
Step12: Here the new Tukubai commands psort and getlast are used.
The self command first reshapes the original data into the following format.
First field
Step13: Here a new Tukubai command, sm2, is used.
The sm2 command sums the specified fields of the input data, using the specified fields as an already-sorted key. Here fields one and two form the key and the values in the third field are summed.
Adding information and aggregating
The shellOneLiner module and the Tukubai commands can also be combined for more elaborate processing. The following example computes, for each day, the total of the parameter weighted by the time-of-day weights.
For the per-hour weights, the file weight from the ``Adding information to the data'' section is used.
Step14: Here a new Tukubai command, lcalc, is used.
The lcalc command applies the specified calculation to the input data. Here the first and second fields are passed through as the first and second output fields, and the product of the third and sixth fields becomes the third output field.
By totalling the output of lcalc with the sm2 command used above, the weighted total for each day is obtained.
Processing that uses data held in Python
So far we have seen examples that read files with Tukubai commands, process them, and use the results as Python data. The shellOneLiner module can also be used the other way round: iterator-type data held in Python can be fed as input and processed with Unix commands.
The next example extracts the records whose parameter falls within a particular range, where the range is specified by data held in Python. First, the data that specifies the parameter range is created.
Step15: The shellOneLiner module accepts iterator-type Python data as its input.
ol = shellOneLiner.ShellOneLiner('echo Hello; LANG=C date; cat datafile')
head(ol,5)
Explanation: Introduction to the shellOneLiner module
When you handle large amounts of data through Python, there are cases where calling out to Unix commands lets you get the processing done quickly.
The shellOneLiner module invokes a shell one-liner from Python code, and can pass data from Python to Unix commands, read data from files, process the data with Unix commands, and hand the results back to Python.
Used together with the usp Tukubai commands (hereafter, Tukubai commands), the shellOneLiner module makes it possible to process large amounts of data efficiently. With the usp Tukubai commands, the file system can even be used much like an SQL database.
This article gives a brief tour of the shellOneLiner module and the usp Tukubai commands through usage examples.
Overview of the shellOneLiner module
The shellOneLiner module works as follows. When an instance of a shellOneLiner object is created, a group of Unix commands is launched to process the data and, if necessary, a thread is started to hand data over from the Python interpreter. The shellOneLiner object behaves as an iterator, so the results of the processing can be read back with a for statement and the like.
+--------------------+ +----------------+
| ==[Input Data]==> |
| Python Interpreter | | Unix Processes |
| <=[Output Data]=> |
|+------------------+| +---^------------+
||shellOneLiner || /
||module ---(Dispatch)------/
|+------------------+|
+--------------------+
Basic usage of the shellOneLiner module
When an instance of the module's ShellOneLiner class is created, it takes an arbitrary shell script as a string and executes it [*1]. The output of the shell script is returned as an iterator-type object.
<output object> = shellOneLiner.ShellOneLiner(<shell script>)
[*1] In other words, the shellOneLiner module launches shell commands directly. This can raise security issues, so great care is needed when using it.
End of explanation
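The head helper used throughout these snippets is not defined in the excerpt; a minimal sketch of what such a helper might look like, assuming the output object is simply an iterator of row lists, is shown below.
import itertools
def head(iterable, n):
    # Print the first n rows yielded by a ShellOneLiner output object.
    for row in itertools.islice(iterable, n):
        print row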
l = map((lambda n: ['%s' % str(n)]),range(80,100))
print l
di = list2iter(l)
ol = shellOneLiner.ShellOneLiner('echo Hello; LANG=C date; head', input=di)
head(ol,5)
Explanation: By passing an iterator-type object to the input option when the instance is created, you can supply data to the shell script's standard input. By default, both the ShellOneLiner instance and the object given to the input option are iterators over lists [*2].
<output object> = shellOneLiner.ShellOneLiner(<shell script>, input=<input object>)
[*2] How the shell script's input is produced and how its output is interpreted as Python objects can be changed by supplying conversion functions: the reader option for the instance, and the writer option for the object given to input.
End of explanation
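The list2iter helper used in the cell above is likewise not shown; assuming it only needs to wrap an in-memory list of rows in an iterator, a minimal sketch could be:
def list2iter(rows):
    # Return an iterator over a list of row lists (hypothetical helper).
    return iter(rows)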
ol = shellOneLiner.ShellOneLiner('dmerge key=1 set_A set_B set_C')
head(ol, 3)
Explanation: The data
The examples below operate on time-series data recorded in files, in the following form.
20150101000004 A 1
20150101000007 A 10
20150101000008 A 70
20150101000009 A 85
20150101000010 A 69
20150101000012 A 2
...
The data has one record per line, and each record consists of several space-separated fields. The fields are as follows.
First field: the arrival time of the record, in YYYYMMDDHHMMSS format.
Second field: the observation site, a string.
Third field: the data value, an integer from 0 to 100.
Data of this form is recorded for observation sites A, B, and C in the three files set_A, set_B, and set_C respectively.
In addition, assume that the merged observations from the three sites are recorded in the file set.
Records are sorted by the first field (arrival time) and then by the second field (observation site).
Combining and extracting data
When working with data from several sources, consolidating it into a single dataset first often makes the subsequent processing easier to reason about. Extracting only the data you need also reduces the amount of computation downstream.
The following sections show examples of combining datasets and of extracting records that satisfy particular conditions.
Merging data
First, an example of merging the time-series files set_A, set_B, and set_C in time order.
End of explanation
ol = shellOneLiner.ShellOneLiner('selr 2 A set')
head(ol, 3)
Explanation: Here the Tukubai command dmerge is used.
The dmerge command merges its inputs using the specified field as a pre-sorted key [*1]. The first argument 'key=1' specifies that the merge is keyed on the first field, and the following arguments set_A, set_B, and set_C are the files holding the input data. The data read back is interpreted by the shellOneLiner module as line-oriented, space-delimited records and converted into one Python list per line.
[*1] The input data must be sorted on the key.
Extracting data (1)
If the data arrives already merged, it is just as easy to pull out the records for a single site. The following example extracts only the records for site A from the merged file above.
End of explanation
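For readers without the Tukubai tools installed, a similar merge of pre-sorted files can be sketched with standard Unix sort -m; this only illustrates what dmerge does and is not a drop-in replacement.
ol = shellOneLiner.ShellOneLiner('sort -m -k1,1 set_A set_B set_C')
head(ol, 3)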
ol = shellOneLiner.ShellOneLiner('grep \'^20150102\' set')
head(ol, 3)
Explanation: Here the Tukubai command selr is used.
The selr command extracts and outputs the records whose specified field equals a specified string. The first argument 2 and the second argument A specify that the rows whose second field is A are to be extracted. The input is read from standard input or from the file given as the third argument.
Extracting data (2)
More elaborate extractions are also possible. The following example extracts only the records that arrived on 2 January 2015.
End of explanation
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set')
head(ol, 3)
Explanation: Here the standard Unix command grep is used.
Selecting the lines that start with "20150102" extracts the records that arrived on that particular day.
Next, an example of extracting only the records that arrived at midnight (hour 00).
First, we add an hour-of-day field to the original data.
End of explanation
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set | selr 4 00 | delf 4')
head(ol, 3)
Explanation: Here the Tukubai command self is used.
The name self is short for "select field"; the command selects fields from the input and can also extract substrings. Its third argument here is the input file. The first argument "0" means "output the whole record", and the second argument "1.9.2" means "output two characters of the first field starting from its ninth character". Applying self therefore reshapes the data into the original record plus the following fourth field.
Fourth field: the arrival hour of the record, in HH format.
Applying the selr command described earlier to this output extracts the records that arrived at midnight.
End of explanation
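As a small illustration of the "1.9.2" substring specification (field 1, two characters starting at the ninth character), the equivalent slice in plain Python would be:
timestamp = '20150101000004'
print timestamp[8:10]    # -> '00', the HH part of YYYYMMDDHHMMSS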
ol = shellOneLiner.ShellOneLiner('awk \'$3>80\' set')
head(ol, 3)
Explanation: Here the Tukubai command delf is used.
The delf command removes the fourth field.
The name delf is short for "delete field"; the command deletes the specified fields from the input data.
Here the first argument is "4", so the fourth field of the data read from standard input is removed [*1].
[*1] The same style of pipeline can also express the earlier grep-based extraction of records for a particular day, and conversely, with sufficiently elaborate grep regular expressions one could extract records that arrived at a particular time. However, complex regular expressions tend to be hard to read and can hurt performance; for fixed-string matching, the combination of self and selr is preferable.
Extracting data (3)
As a more complex example, the following extracts the records whose data value is greater than 80.
End of explanation
ol = shellOneLiner.ShellOneLiner('selr 2 A set | self 0 1.9.2 | selr 4 00 | delf 4 | awk \'$3<40\' ')
head(ol, 3)
Explanation: Here the standard Unix command awk is used.
By writing a pattern that compares the third field, the records whose data value is greater than 80 are extracted.
Extracting data (4)
By combining these extraction techniques, complex selection conditions can be written concisely.
The following example extracts, from the records that arrived at site A at midnight (hour 00), those whose data value is less than 40.
End of explanation
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set_A | cjoin2 key=4 weight')
head(ol, 3)
Explanation: The first selr command extracts only the records that arrived at site A. The second through fourth commands then pick out the records that arrived at midnight. Finally, the fifth command extracts the records whose data value is less than 40.
Even complex selection conditions like this can be written concisely by combining extraction techniques.
Adding information to the data
You will sometimes want to attach extra information to time-series data. The following example reads the input data while appending information that depends on the time of day.
Before processing, prepare a file named weight with the following contents.
00 MidNight 1
01 MidNight 1
02 MidNight 1
03 MidNight 1
04 EarlyMorning 2
05 EarlyMorning 2
06 EarlyMorning 2
07 Morning 3
08 Morning 3
09 Morning 3
10 MidDay 5
11 MidDay 5
12 MidDay 5
13 MidDay 5
14 MidDay 5
15 MidDay 5
16 Evening 3
17 Evening 3
18 Evening 3
19 Night 2
20 Night 2
21 Night 2
22 MidNight 1
23 MidNight 1
The fields are as follows.
First field: the hour of day, in HH format.
Second field: the name of the time band, a string.
Third field: the weight, an integer from 0 to 9.
We attach the contents of this file to the data in set_A.
First, as before, we use self's substring feature to reshape the original data into a format with the following fourth field appended.
Fourth field: the arrival hour of the record, in HH format.
This fourth field is matched against the first field of the weight file shown above to attach the extra information.
End of explanation
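To make the idea of the master-file join more concrete, here is a hypothetical plain-Python sketch of matching each record's hour against the weight file; the exact column order produced by cjoin2 may differ.
weight_master = {}
with open('weight') as f:
    for line in f:
        hour, band, w = line.split()
        weight_master[hour] = [band, w]

def add_weight(record):
    # record is [timestamp, site, value, hour]; append the matching master fields.
    return record + weight_master[record[3]]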
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 set_A | count key=1@2')
head(ol, 3)
Explanation: Here a new Tukubai command, cjoin2, is used.
The cjoin2 command matches the input data against a specified master file, using the specified field as the key [*1]. The first argument "key=4" makes the fourth field of the input the key, and the input is matched against the file weight given as the second argument.
[*1] The input data must be sorted on the key.
Aggregating data
Handling a large volume of data directly in Python is often difficult. Performing basic aggregation with commands before reading the data in keeps the analysis light and responsive.
Counting arriving records
The following example counts the number of records that arrived on each day.
End of explanation
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 3 set_A | psort ref=1@2 key=3n | getlast key=1@2')
head(ol, 3)
Explanation: Here a new Tukubai command, count, is used.
The self command first reshapes the original data into the following format.
First field: the arrival date of the record, in YYYYMMDD format.
Second field: the observation site, a string.
The count command counts records using the specified fields of the input as the key [*1]. In this example the output of self is the input, and the first argument makes the first and second fields the key for counting.
[*1] The input data must be sorted on the key.
Finding the maximum
The following example selects, for each day, the record with the largest parameter value.
End of explanation
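A hedged plain-Python equivalent of this per-(date, site) counting, useful only as a cross-check on small files, could be written with collections.Counter.
from collections import Counter

counts = Counter()
with open('set_A') as f:
    for line in f:
        ts, site, value = line.split()
        counts[(ts[:8], site)] += 1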
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 3 set_A | sm2 key=1/2 val=3')
head(ol, 3)
Explanation: Here the new Tukubai commands psort and getlast are used.
The self command first reshapes the original data into the following format.
First field: the arrival date of the record, in YYYYMMDD format.
Second field: the observation site, a string.
Third field: the data value, an integer from 0 to 100.
The psort command sorts the input, treating the specified fields as a pre-sorted key and sorting on another specified field within each key. Here the first and second fields are the pre-sorted key and the third field is sorted numerically in ascending order.
The getlast command takes the specified fields as the key and extracts the last record for each key. Here the first and second fields are the key, so the last record per key (the one with the largest value after the sort) is returned.
Summing the data
The following example computes the daily total of the arriving data.
End of explanation
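The psort | getlast pattern amounts to taking the maximum-value record within each key group; a hedged illustration in plain Python, assuming the rows are already sorted by key, is:
from itertools import groupby

def daily_max(rows):
    # rows: iterable of [timestamp, site, value] lists, already sorted by (date, site)
    for key, grp in groupby(rows, lambda r: (r[0][:8], r[1])):
        yield max(grp, key=lambda r: int(r[2]))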
ol = shellOneLiner.ShellOneLiner(
'self 1.1.8 2 3 1.9.2 set_A | cjoin2 key=4 weight | lcalc \'$1, $2, $3 * $6\' | sm2 key=1/2 val=3')
head(ol, 3)
Explanation: Here a new Tukubai command, sm2, is used.
The sm2 command treats the specified fields of the input as a pre-sorted key and sums the values of the specified field. Here the first and second fields are the key and the values of the third field are summed.
Adding information and aggregating
The shellOneLiner module and the Tukubai commands can also be combined for more complex processing. The following example computes a daily total of the parameter, weighted by time of day.
The per-time-band weights come from the file weight used in the "Adding information to the data" section.
End of explanation
l = map((lambda n: ['%s' % str(n)]),range(80,100))
print l
Explanation: Here a new Tukubai command, lcalc, is used.
The lcalc command applies the specified calculation to the input data. Here the first and second input fields are passed through as the first and second output fields, and the product of the third and sixth fields becomes the third output field.
Summing the output of lcalc with the sm2 command used earlier yields the weighted total for each day.
Processing data held in Python
So far we have read files with Tukubai commands, processed them, and used the result as Python data. The shellOneLiner module can also be used the other way around: iterator-type data held in Python can be fed as input and processed with Unix commands.
The following example extracts the records whose parameter falls within a particular range, where the range is specified by data on the Python side. First, we create the data that specifies the parameter range.
End of explanation
di = list2iter(l)
ol = shellOneLiner.ShellOneLiner(
'cjoin0 key=3 - set_A',
input=di )
head(ol,3)
Explanation: The shellOneLiner module accepts iterator-type Python data as input.
End of explanation |
14,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
At the end of this lesson, you will be able to use transfer learning to build highly accurate computer vision models for your custom purposes, even when you have relatively little data.
Lesson
Step1: Sample Code
Specify Model
Step2: Compile Model
Step3: Fit Model | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('mPFq5KMxKVw', width=800, height=450)
Explanation: Intro
At the end of this lesson, you will be able to use transfer learning to build highly accurate computer vision models for your custom purposes, even when you have relatively little data.
Lesson
End of explanation
from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D
num_classes = 2
resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
# Indicate that the first layer (the pre-trained ResNet) should not be trained; its weights are already trained
my_new_model.layers[0].trainable = False
Explanation: Sample Code
Specify Model
End of explanation
my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
Explanation: Compile Model
End of explanation
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
image_size = 224
data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = data_generator.flow_from_directory(
'../input/urban-and-rural-photos/train',
target_size=(image_size, image_size),
batch_size=24,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
'../input/urban-and-rural-photos/val',
target_size=(image_size, image_size),
class_mode='categorical')
my_new_model.fit_generator(
train_generator,
steps_per_epoch=3,
validation_data=validation_generator,
validation_steps=1)
Explanation: Fit Model
End of explanation |
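A hedged sketch of how one might generate predictions with the fitted model, reusing the validation generator purely for illustration:
preds = my_new_model.predict_generator(validation_generator, steps=1)
print(preds[:5])  # softmax class probabilities for the first few validation images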
14,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Topics
Step1: Frequency tables
Ibis provides the value_counts API, just like pandas, for computing a frequency table for a table column or array expression. You might have seen it used already earlier in the tutorial.
Step2: This can be customized, of course
Step3: Binning and histograms
Numeric array expressions (columns with numeric type and other array expressions) have bucket and histogram methods which produce different kinds of binning. These produce category values (the computed bins) that can be used in grouping and other analytics.
Some backends implement the .summary() method, which can be used to see the general distribution of a column.
Let's have a look at a few examples.
Alright then, now suppose we want to split the countries up into some buckets of our choosing for their population
Step4: The bucket function creates a bucketed category from the prices
Step5: Let's have a look at the value counts
Step6: The buckets we wrote down define 4 buckets numbered 0 through 3. The NaN is a pandas NULL value (since that's how pandas represents nulls in numeric arrays), so don't worry too much about that. Since the bucketing ends at 100000, we see there are 4122 values that are over 100000. These can be included in the bucketing with include_over
Step7: The bucketed object here is a special category type
Step8: Category values can either have a known or unknown cardinality. In this case, there's either 4 or 5 buckets based on how we used the bucket function.
Labels can be assigned to the buckets at any time using the label function | Python Code:
import os
import ibis
ibis.options.interactive = True
connection = ibis.sqlite.connect(os.path.join('data', 'geography.db'))
Explanation: Advanced Topics: Analytics Tools
Setup
End of explanation
countries = connection.table('countries')
countries.continent.value_counts()
Explanation: Frequency tables
Ibis provides the value_counts API, just like pandas, for computing a frequency table for a table column or array expression. You might have seen it used already earlier in the tutorial.
End of explanation
freq = (countries.group_by(countries.continent)
.aggregate([countries.count().name('# countries'),
countries.population.sum().name('total population')]))
freq
Explanation: This can be customized, of course:
End of explanation
buckets = [0, 1e6, 1e7, 1e8, 1e9]
Explanation: Binning and histograms
Numeric array expressions (columns with numeric type and other array expressions) have bucket and histogram methods which produce different kinds of binning. These produce category values (the computed bins) that can be used in grouping and other analytics.
Some backends implement the .summary() method, which can be used to see the general distribution of a column.
Let's have a look at a few examples.
Alright then, now suppose we want to split the countries up into some buckets of our choosing for their population:
End of explanation
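As noted above, some backends implement a summary() method; on a backend that supports it, a quick look at the distribution of the population column might be obtained as follows (a hedged example that will fail on backends without summary()).
countries.population.summary()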
bucketed = countries.population.bucket(buckets).name('bucket')
Explanation: The bucket function creates a bucketed category from the population values:
End of explanation
bucketed.value_counts()
Explanation: Let's have a look at the value counts:
End of explanation
bucketed = (countries.population
.bucket(buckets, include_over=True)
.name('bucket'))
bucketed.value_counts()
Explanation: The buckets we wrote down define 4 buckets numbered 0 through 3. The NaN is a pandas NULL value (since that's how pandas represents nulls in numeric arrays), so don't worry too much about that. Since the explicit buckets end at 1e9, countries with populations above that upper bound are not assigned a bucket. These can be included in the bucketing with include_over:
End of explanation
bucketed.type()
Explanation: The bucketed object here is a special category type
End of explanation
bucket_counts = bucketed.value_counts()
labeled_bucket = (bucket_counts.bucket
.label(['< 1M', '> 1M', '> 10M', '> 100M', '> 1B'])
.name('bucket_name'))
expr = (bucket_counts[labeled_bucket, bucket_counts]
.sort_by('bucket'))
expr
Explanation: Category values can either have a known or unknown cardinality. In this case, there's either 4 or 5 buckets based on how we used the bucket function.
Labels can be assigned to the buckets at any time using the label function:
End of explanation |
14,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load, filter, export the NSQD Dataset
The cell below imports the libaries we need and defines some function that help up clean up the NSQD
Step1: Create a raw data set, then compute season and apply basic filters
(also export to CSV file)
Step2: Show the sample counts for each parameter
Step3: Export TSS to a CSV file | Python Code:
import numpy
import wqio
import pynsqd
import pycvc
def get_cvc_parameter(nsqdparam):
try:
cvcparam = list(filter(
lambda p: p['nsqdname'] == nsqdparam, pycvc.info.POC_dicts
))[0]['cvcname']
except IndexError:
cvcparam = numpy.nan
return cvcparam
def fix_nsqd_bacteria_units(df, unitscol='units'):
df[unitscol] = df[unitscol].replace(to_replace='MPN/100 mL', value='CFU/100 mL')
return df
nsqd_params = [
p['nsqdname']
for p in pycvc.info.POC_dicts
]
Explanation: Load, filter, export the NSQD Dataset
The cell below imports the libraries we need and defines some functions that help us clean up the NSQD data
End of explanation
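A hedged usage example of the helper defined above; whether a given string matches depends entirely on the 'nsqdname' entries in pycvc.info.POC_dicts, and unmatched names return NaN.
print(get_cvc_parameter('Total Suspended Solids'))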
raw_data = pynsqd.NSQData().data
clean_data = (
raw_data
.query("primary_landuse != 'Unknown'")
.query("parameter in @nsqd_params")
.query("fraction == 'Total'")
.query("epa_rain_zone == 1")
.assign(station='outflow')
.assign(cvcparam=lambda df: df['parameter'].apply(get_cvc_parameter))
.assign(season=lambda df: df['start_date'].apply(wqio.utils.getSeason))
.drop('parameter', axis=1)
.rename(columns={'cvcparam': 'parameter'})
.pipe(fix_nsqd_bacteria_units)
.query("primary_landuse == 'Residential'")
)
Explanation: Create a raw data set, then compute season and apply basic filters
(also export to CSV file)
End of explanation
clean_data.groupby(by=['parameter', 'season']).size().unstack(level='season')
Explanation: Show the sample counts for each parameter
End of explanation
(
clean_data
.query("parameter == 'Total Suspended Solids'")
.to_csv('NSQD_Res_TSS.csv', index=False)
)
Explanation: Export TSS to a CSV file
End of explanation |
14,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCMA
Source ID: CANESM5
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
14,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Dreams (with Caffe)
This notebook demonstrates how to use the Caffe neural network framework to
produce "dream" visuals shown in the
Google Research blog post. #deepdream
Dependencies
Standard Python scientific stack
Step1: Loading DNN model
In this notebook we are going to use a GoogLeNet model trained on ImageNet dataset.
Step2: Producing dreams
Making the "dream" images is very simple. Essentially it is just a gradient ascent process that tries to maximize the L2 norm of activations of a particular DNN layer. Here are a few simple tricks that we found useful for getting good images
Step3: Next we implement an ascent through different scales. We call these scales "octaves".
Step4: Now we are ready to let the neural network reveal its dreams! Let's take a cloud image as a starting point
Step5: Running the next code cell starts the detail generation process. You may see how new patterns start to form, iteration by iteration, octave by octave.
Step6: The complexity of the details generated depends on which layer's activations we try to maximize. Higher layers produce complex features, while lower ones enhance edges and textures, giving the image an impressionist feeling
Step7: We encourage readers to experiment with layer selection to see how it affects the results. Execute the next code cell to see the list of different layers. You can modify the make_step function to make it follow some different objective, say to select a subset of activations to maximize, or to maximize multiple layers at once. There is a huge design space to explore!
Step8: What if we feed the deepdream function its own output, after applying a little zoom to it? It turns out that this leads to an endless stream of impressions of the things that the network saw during training. Some patterns fire more often than others, suggestive of basins of attraction.
We will start the process from the same sky image as above, but after some iteration the original image becomes irrelevant; even random noise can be used as the starting point.
Step9: Be careful running the code above, it can bring you into very strange realms!
Step10: Controlling dreams
The image detail generation method described above tends to produce some patterns more often than others. One easy way to improve the generated image diversity is to tweak the optimization objective. Here we show just one of many ways to do that. Let's use one more input image. We'd call it a "guide".
Step11: Note that the neural network we use was trained on images downscaled to 224x224 size. So high resolution images might have to be downscaled, so that the network could pick up their features. The image we use here is already small enough.
Now we pick some target layer and extract guide image features.
Step12: Instead of maximizing the L2-norm of current image activations, we try to maximize the dot-products between activations of current image, and their best matching correspondences from the guide image. | Python Code:
# imports and basic notebook setup
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format
import caffe
# If your GPU supports CUDA and Caffe was built with CUDA support,
# uncomment the following to run Caffe operations on the GPU.
# caffe.set_mode_gpu()
# caffe.set_device(0) # select GPU device if multiple devices exist
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
Explanation: Deep Dreams (with Caffe)
This notebook demonstrates how to use the Caffe neural network framework to
produce "dream" visuals shown in the
Google Research blog post. #deepdream
Dependencies
Standard Python scientific stack: NumPy, SciPy, PIL, IPython. Those libraries can also be installed as a part of one of the scientific packages for Python, such as Anaconda or Canopy.
Caffe deep learning framework (installation instructions).
Google protobuf library that is used for Caffe model manipulation.
End of explanation
model_path = '../caffe/models/bvlc_googlenet/' # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'
# Patching model to be able to compute gradients.
# Note that you can also manually add "force_backward: true" line to "deploy.prototxt".
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('tmp.prototxt', 'w').write(str(model))
net = caffe.Classifier('tmp.prototxt', param_fn,
mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB
# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']
def deprocess(net, img):
return np.dstack((img + net.transformer.mean['data'])[::-1])
Explanation: Loading DNN model
In this notebook we are going to use a GoogLeNet model trained on ImageNet dataset.
End of explanation
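# A small optional sanity check (our own illustration, not part of the original
# notebook): deprocess should undo preprocess, up to floating point error, for
# any HxWx3 array, since it only re-adds the mean and restores the layout.
check = np.float32(np.random.rand(8, 8, 3) * 255)
assert np.abs(deprocess(net, preprocess(net, check)) - check).max() < 1e-3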
def objective_L2(dst):
dst.diff[:] = dst.data
def make_step(net, step_size=1.5, end='inception_4c/output',
jitter=32, clip=True, objective=objective_L2):
'''Basic gradient ascent step.'''
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter+1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
objective(dst) # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size/np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
Explanation: Producing dreams
Making the "dream" images is very simple. Essentially it is just a gradient ascent process that tries to maximize the L2 norm of activations of a particular DNN layer. Here are a few simple tricks that we found useful for getting good images:
* offset image by a random jitter
* normalize the magnitude of gradient ascent steps
* apply ascent across multiple scales (octaves)
First we implement a basic gradient ascent step function, applying the first two tricks:
End of explanation
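# A minimal, Caffe-free sketch (ours, not from the original notebook) of the two
# tricks used in make_step above -- random jitter and a normalized ascent step --
# applied to a toy objective 0.5*||x||^2, whose gradient is simply x.
def toy_step(img, grad_fn, step_size=1.5, jitter=4):
    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    img = np.roll(np.roll(img, ox, -1), oy, -2)      # apply jitter shift
    g = grad_fn(img)                                 # gradient of the chosen objective
    img = img + step_size/np.abs(g).mean() * g       # normalized ascent step
    return np.roll(np.roll(img, -ox, -1), -oy, -2)   # unshift
toy_img = toy_step(np.random.rand(3, 32, 32), grad_fn=lambda x: x)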
def deepdream(net, base_img, iter_n=10, octave_n=4, octave_scale=1.4,
end='inception_4c/output', clip=True, **step_params):
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in xrange(octave_n-1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
for i in xrange(iter_n):
make_step(net, end=end, clip=clip, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis*(255.0/np.percentile(vis, 99.98))
showarray(vis)
print octave, i, end, vis.shape
clear_output(wait=True)
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
Explanation: Next we implement an ascent through different scales. We call these scales "octaves".
End of explanation
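# A standalone illustration (ours, not from the original notebook) of the scales
# the octave loop above works on: each octave shrinks the previous one by
# 1/octave_scale, and detail generation then runs from the smallest scale upwards.
base = np.zeros((3, 224, 224), dtype=np.float32)     # stand-in for a preprocessed image
scales = [base]
for i in range(3):                                   # octave_n=4 gives 3 extra scales
    scales.append(nd.zoom(scales[-1], (1, 1.0/1.4, 1.0/1.4), order=1))
print([s.shape[-2:] for s in scales[::-1]])          # smallest first, as processed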
img = np.float32(PIL.Image.open('sky1024px.jpg'))
showarray(img)
Explanation: Now we are ready to let the neural network reveal its dreams! Let's take a cloud image as a starting point:
End of explanation
_=deepdream(net, img)
Explanation: Running the next code cell starts the detail generation process. You may see how new patterns start to form, iteration by iteration, octave by octave.
End of explanation
_=deepdream(net, img, end='inception_3b/5x5_reduce')
Explanation: The complexity of the details generated depends on which layer's activations we try to maximize. Higher layers produce complex features, while lower ones enhance edges and textures, giving the image an impressionist feeling:
End of explanation
net.blobs.keys()
Explanation: We encourage readers to experiment with layer selection to see how it affects the results. Execute the next code cell to see the list of different layers. You can modify the make_step function to make it follow some different objective, say to select a subset of activations to maximize, or to maximize multiple layers at once. There is a huge design space to explore!
End of explanation
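# One possible alternative objective, sketched here as an illustration (it is not
# part of the original notebook): maximize only the first few channels of the
# chosen layer. It plugs into make_step through its `objective` argument.
def objective_first_channels(dst, n_channels=8):
    d = np.zeros_like(dst.data)
    d[:, :n_channels] = dst.data[:, :n_channels]   # keep a subset of activations
    dst.diff[:] = d                                # gradient of the partial L2 objective
# e.g. _=deepdream(net, img, objective=objective_first_channels)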
!mkdir frames
frame = img
frame_i = 0
h, w = frame.shape[:2]
s = 0.05 # scale coefficient
for i in xrange(100):
frame = deepdream(net, frame)
PIL.Image.fromarray(np.uint8(frame)).save("frames/%04d.jpg"%frame_i)
frame = nd.affine_transform(frame, [1-s,1-s,1], [h*s/2,w*s/2,0], order=1)
frame_i += 1
Explanation: What if we feed the deepdream function its own output, after applying a little zoom to it? It turns out that this leads to an endless stream of impressions of the things that the network saw during training. Some patterns fire more often than others, suggestive of basins of attraction.
We will start the process from the same sky image as above, but after some iteration the original image becomes irrelevant; even random noise can be used as the starting point.
End of explanation
Image(filename='frames/0029.jpg')
Explanation: Be careful running the code above, it can bring you into very strange realms!
End of explanation
guide = np.float32(PIL.Image.open('flowers.jpg'))
showarray(guide)
Explanation: Controlling dreams
The image detail generation method described above tends to produce some patterns more often than others. One easy way to improve the generated image diversity is to tweak the optimization objective. Here we show just one of many ways to do that. Let's use one more input image. We'd call it a "guide".
End of explanation
end = 'inception_3b/output'
h, w = guide.shape[:2]
src, dst = net.blobs['data'], net.blobs[end]
src.reshape(1,3,h,w)
src.data[0] = preprocess(net, guide)
net.forward(end=end)
guide_features = dst.data[0].copy()
Explanation: Note that the neural network we use was trained on images downscaled to 224x224 size. So high resolution images might have to be downscaled, so that the network could pick up their features. The image we use here is already small enough.
Now we pick some target layer and extract guide image features.
End of explanation
def objective_guide(dst):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
_=deepdream(net, img, end=end, objective=objective_guide)
Explanation: Instead of maximizing the L2-norm of current image activations, we try to maximize the dot-products between activations of current image, and their best matching correspondences from the guide image.
End of explanation |
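# A self-contained NumPy check (illustrative only) of the matching step inside
# objective_guide above: for each activation vector of the current image we pick
# the guide vector with the largest dot product.
ch, n, m = 4, 6, 5               # channels, current positions, guide positions
x = np.random.rand(ch, n)        # current-image activations, flattened
y = np.random.rand(ch, m)        # guide-image activations, flattened
A = x.T.dot(y)                   # (n, m) matrix of dot products
matched = y[:, A.argmax(1)]      # best-matching guide vector per position
print(matched.shape)             # (4, 6), same layout as x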
14,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exam
Problem 2. Interpolation
Linear Spline Interpolation visualization, courtesy of codecogs
Interpolation is a method of curve fitting.
In this problem, spline interpolation is considered
Practical applications
Step1: Linear spline functions are calculated with the following | Python Code:
from IPython.display import display
import pandas as pd
import matplotlib.pyplot
%matplotlib inline
index = ['f(x)']
columns = [-2, 0, 2, 3]
data = [[-3, -5, 9, 22]]
df = pd.DataFrame(data, index=index, columns=columns)
print(df)
# for brevity, we will write it like this
index = [' x', 'f(x)']
columns = [1, 2, 3, 4] #['x1', 'x2', 'x3', 'x4']
data = [[-2, 0, 2, 3], [-3, -5, 9, 22]]
df = pd.DataFrame(data, index=index, columns=columns)
display(df)
matplotlib.pyplot.plot(data[0], data[1], ls='dashed', color='#a23636')
matplotlib.pyplot.scatter(data[0], data[1])
matplotlib.pyplot.show()
Explanation: Exam
Problem 2. Interpolation
Linear Spline Interpolation visualization, courtesy of codecogs
Interpolation is a method of curve fitting.
In this problem, spline interpolation is considered
Practical applications:
+ estimating function values based on some sample of known data points
Problem
Given the inputs and function values below, approximate f(-1) and f(1) by linear spline functions.
End of explanation
print('x1 = %i' % data[0][0])
print('y1 = %i' % data[1][0])
print('---')
# linear spline function aproximation
print('no values: %i' % len(columns))
spline = {}
for i in range(len(columns)-1):
print('\nP[' + str(i+1) + ']')
# we calculate the numerator
num_1s = str(data[1][i+1]) + ' * x - ' + str(data[1][i]) + ' * x'
print('num_1s: %s' % num_1s)
num_2 = data[1][i] * data[0][i+1] - data[1][i+1] * data[0][i]
print('num_2: %i' % num_2)
# we calculate the denominator
den = data[0][i+1] - data[0][i]
print('den: %i' % den)
# constructing the function
    func = 'lambda x: (' + num_1s + ' + ' + str(num_2) + ') / ' + str(den)
print('func: %s' % func)
spline[i] = eval(func)
print('---')
# sanity checks
# P1(x) = -x - 5
assert (spline[0](-5) == 0),"For this example, the value should be 0, but the value returned is " + str(spline[0](-5))
# P2(x) = 7x - 5
assert (spline[1](0) == -5),"For this example, the value should be -5, but the value returned is " + str(spline[1](0))
# P3(x) = 13x - 17
assert (spline[2](1) == -4),"For this example, the value should be -4, but the value returned is " + str(spline[2](1))
print('Approximating values of S\n---')
aproximation_queue = [-1, 1]
results = {}
def approximate(spline, val):
    for i in range(len(spline)):
        if data[0][i] <= val <= data[0][i+1]:
            print('Approximation using P[%i] is: %i' % (i+1, spline[i](val)))
            results[val] = spline[i](val)
            break
for i in range(len(aproximation_queue)):
approximate(spline, aproximation_queue[i])
# sanity checks
# S(-1) = P1(-1) = -4
assert (spline[0](-1) == -4),"For this example, the value should be -4, but the value returned is " + str(spline[0](-1))
# S(1) = P2(1) = 2
assert (spline[1](1) == 2),"For this example, the value should be 2, but the value returned is " + str(spline[1](1))
#x.extend(results.keys())
#y.extend(results.values())
x2 = list(results.keys())
y2 = list(results.values())
matplotlib.pyplot.plot(data[0], data[1], ls='dashed', color='#a23636')
matplotlib.pyplot.scatter(data[0], data[1])
matplotlib.pyplot.scatter(x2, y2, color='#ff0000')
matplotlib.pyplot.show()
Explanation: Linear spline functions are calculated with the following:
$$i \in [1,\ \left\vert{X}\right\vert - 1],\ i \in \mathbb{N}: $$
$$P_i = \frac{x-x_i}{x_{i+1}-x_i} * y_{i+1} + \frac{x_{i+1}-x}{x_{i+1}-x_i} * y_i$$
By simplification, we can reduce to the following:
$$P_i = \frac{y_{i+1} (x-x_i) + y_i (x_{i+1}-x)}{x_{i+1}-x_i} = \frac{(y_{i+1}x - y_ix) - y_{i+1}x_i + y_ix_{i+1}}{x_{i+1}-x_i}$$
The final form used will be:
$$P_i = \frac{(y_{i+1}x - y_ix) + (y_ix_{i+1} - y_{i+1}x_i)}{(x_{i+1}-x_i)}$$
As can be seen, the main trick is to emulate the x in the first term (num_1s below), the other terms being plain numbers (num_2, den). Parentheses are used to isolate the formula for each of the 3 variables.
As such, we can write the x-dependent term as a string, while the other terms are simply calculated. After this, the final string is evaluated as a lambda function.
End of explanation |
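# As a cross-check of the formula above (our own sketch, not part of the original
# solution), here is a direct numeric evaluation of P_i that avoids building
# lambdas from strings. It reuses the `data` lists defined earlier.
def linear_spline(xs, ys, x):
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            num = (ys[i + 1] * x - ys[i] * x) + (ys[i] * xs[i + 1] - ys[i + 1] * xs[i])
            return num / float(xs[i + 1] - xs[i])
    raise ValueError('x is outside the tabulated range')
print(linear_spline(data[0], data[1], -1))   # expected -4.0, matching S(-1) above
print(linear_spline(data[0], data[1], 1))    # expected 2.0, matching S(1) above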
14,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the turbulence in the atmospheric chemistry transport scheme coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
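# Illustrative only (not part of the official template): a 0.N ENUM property such as this
# one is typically recorded by calling DOC.set_value once per selected choice, e.g.
# DOC.set_value("Sulphate")
# DOC.set_value("NAT (Nitric acid trihydrate)")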
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
14,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recreating Ling IMMI (2017)
In this notebook, we will recreate some key results from Ling et al. IMMI (2017). We will show that the errors produced from the Random Forest implemented in lolo are well-calibrated and that the uncertainties can be used with Sequential Learning to quickly find optimal materials within a search space.
Note
Step1: Set the random seed
Step2: Get the Datasets
The Ling Paper used 4 different datasets to test the uncertainty estimates
Step4: Convert the composition and class variable from strings
Step5: Compute Features
Every dataset except the steel fatigue dataset uses the composition-based features of Ward et al..
Step6: Get the Residuals and RF Uncertainty
As described in the Ling paper, ideally-calibrated uncertainty estimaes should have a particular relationship with the errors of a machine learning model. Specifically, the distribution of $r(x)/\sigma(x)$ where $r(x)$ is the residual of the prediction and $\sigma(x)$ is the uncertainty of the prediction for x should have a Gaussian distribution with zero mean and unit standard deviation.
Step7: Get the errors from 8-fold cross-validation
Step8: Plot the normalized residuals ($r(x)/\sigma(x)$) against the normal distribution
Step9: Here, we compare the error distribution using the Lolo uncertainty estimates (left) and the assumption that all entries have the same error (right). The normalized residuals for the uncertainty estimates have a distribution closer to the unit normal distribution, which means - as expected - that it better captures which predictions will have a higher error.
Sequential Learning
One important use of model uncertainties is to employ them to guide which experiments to pick to find optimal materials with minimal experiments/computations. As described in the Ling paper (and other nice articles), it is not always best to pick the experiment that the model predicts to have the best properties if you can perform more than one experiment sequentially. Rather, it can be better to pick entries with large uncertainities that, when tested and added to the training set, can improve the models predictions for the next experiments.
Here, we demonstrate one approach for picking experiments
Step10: Step 2
Step11: For MEI, we pick the highest predicted value. For MLI, we pick the material that has the highest probability of being better than any material in the training set. As we assume the predictions to be normally distributed, the probability of materials can be computed from the Z-score $Z = (y - y^)/\sigma$ where $y^$ is the maximum of the $y$ of the training set. Formally, the probability can be computed from the Z-score using the cumulative distribution function of the normal distribution. For our purposes, we can use the Z-score becuase the probability is a monotonic function of the Z-score (stated simply
Step12: For this particular iteration, the MEI and MLI strategies pick the same material. Depending on the random seed of this notebook and that used by lolo, you may see that the material picked by MLI has a lower predicted $ZT$ but a higher variance. According to the logic behind MLI, picking that entry will (1) yield a higher liklihood of finding a well-performing material and (2) lead to an improved model.
Step 3
Step13: Random Selection
Just pick an entry at random, no need to train a model
Step14: Maximum Expected Improvement
Pick the entry with the largest predicted value
Step15: Maximum Likelihood of Improvement
Pick the entry with the largest probability of improvement
Step16: Plot the results | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from matminer.data_retrieval.retrieve_Citrine import CitrineDataRetrieval
from matminer.featurizers.base import MultipleFeaturizer
from matminer.featurizers import composition as cf
from lolopy.learners import RandomForestRegressor
from sklearn.model_selection import KFold
from pymatgen import Composition
from scipy.stats import norm
import pandas as pd
import numpy as np
import os
Explanation: Recreating Ling IMMI (2017)
In this notebook, we will recreate some key results from Ling et al. IMMI (2017). We will show that the errors produced from the Random Forest implemented in lolo are well-calibrated and that the uncertainties can be used with Sequential Learning to quickly find optimal materials within a search space.
Note: This notebook will require you to install matminer and establish an account with Citrination to get an API key (see Quickstart), and set it as an environment variable named CITRINE_KEY.
End of explanation
np.random.seed(8)
Explanation: Set the random seed
End of explanation
cdr = CitrineDataRetrieval()
data = cdr.get_dataframe(criteria={'data_set_id': 150888}, print_properties_options=False)
Explanation: Get the Datasets
The Ling Paper used 4 different datasets to test the uncertainty estimates
End of explanation
def get_compostion(c):
Attempt to parse composition, return None if failed
try:
return Composition(c)
except:
return None
data['composition'] = data['chemicalFormula'].apply(get_compostion)
data['ZT'] = pd.to_numeric(data['ZT'], errors='coerce')
data.reset_index(drop=True, inplace=True)
Explanation: Convert the composition and class variable from strings
End of explanation
f = MultipleFeaturizer([cf.Stoichiometry(), cf.ElementProperty.from_preset("magpie"),
cf.ValenceOrbital(props=['avg']), cf.IonProperty(fast=True)])
X = np.array(f.featurize_many(data['composition']))
Explanation: Compute Features
Every dataset except the steel fatigue dataset uses the composition-based features of Ward et al..
End of explanation
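# Quick check (illustrative): report how many composition-based features were generated.
print('Computed {} features for {} entries'.format(X.shape[1], X.shape[0]))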
model = RandomForestRegressor()
Explanation: Get the Residuals and RF Uncertainty
As described in the Ling paper, ideally-calibrated uncertainty estimates should have a particular relationship with the errors of a machine learning model. Specifically, the distribution of $r(x)/\sigma(x)$ where $r(x)$ is the residual of the prediction and $\sigma(x)$ is the uncertainty of the prediction for x should have a Gaussian distribution with zero mean and unit standard deviation.
End of explanation
y = data['ZT'].values
y_resid = []
y_uncer = []
for train_id, test_id in KFold(8, shuffle=True).split(X):
model.fit(X[train_id], y[train_id])
yf_pred, yf_std = model.predict(X[test_id], return_std=True)
y_resid.extend(yf_pred - y[test_id])
y_uncer.extend(yf_std)
Explanation: Get the errors from 8-fold cross-validation
End of explanation
fig, axs = plt.subplots(1, 2, sharey=True)
x = np.linspace(-8, 8, 50)
# Plot the RF uncertainty
resid = np.divide(y_resid, y_uncer)
axs[0].hist(resid, x, density=True)
axs[0].set_title('With Lolo Uncertainty Estimates')
# Plot assuming constant errors
resid = np.divide(y_resid, np.sqrt(np.power(y_resid, 2).mean()))
axs[1].hist(resid, x, density=True)
axs[1].set_title('Assuming Constant Error')
for ax in axs:
ax.plot(x, norm.pdf(x), 'k--', lw=0.75)
ax.set_xlabel('Normalized Residual')
axs[0].set_ylabel('Probability Density')
fig.set_size_inches(6.5, 2)
fig.tight_layout()
Explanation: Plot the normalized residuals ($r(x)/\sigma(x)$) against the normal distribution
End of explanation
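# Quick numeric check (illustrative): well-calibrated uncertainties should give
# standardized residuals with roughly zero mean and unit standard deviation.
std_resid = np.divide(y_resid, y_uncer)
print('Mean: {:.2f}, Std Dev: {:.2f}'.format(np.mean(std_resid), np.std(std_resid)))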
in_train = np.zeros(len(data), dtype=bool)
in_train[np.random.choice(len(data), 10, replace=False)] = True
print('Picked {} training entries'.format(in_train.sum()))
assert not np.isclose(max(y), max(y[in_train]))
Explanation: Here, we compare the error distribution using the Lolo uncertainty estimates (left) and the assumption that all entries have the same error (right). The normalized residuals for the uncertainty estimates have a distribution closer to the unit normal distribution, which means - as expected - that it better captures which predictions will have a higher error.
Sequential Learning
One important use of model uncertainties is to employ them to guide which experiments to pick to find optimal materials with minimal experiments/computations. As described in the Ling paper (and other nice articles), it is not always best to pick the experiment that the model predicts to have the best properties if you can perform more than one experiment sequentially. Rather, it can be better to pick entries with large uncertainties that, when tested and added to the training set, can improve the model's predictions for the next experiments.
Here, we demonstrate one approach for picking experiments: Maximum Likelihood of Improvement (MLI). In contrast to picking the material with the best predicted properties (an approach we refer to as Maximum Expected Improvement (MEI)), the MLI approach picks the material with the highest likelihood of being better than the best material in the training set - a measure that uses both the predicted value and the uncertainty.
Step 1: Pick an initial training set
We'll start with a small set of entries from the training set
End of explanation
model.fit(X[in_train], y[in_train])
y_pred, y_std = model.predict(X[~in_train], return_std=True)
Explanation: Step 2: Demonstrate picking the entries based on MLI and MEI
Just to give a visual of how the selection process works
Make the predictions
End of explanation
mei_selection = np.argmax(y_pred)
mli_selection = np.argmax(np.divide(y_pred - np.max(y[in_train]), y_std))
print('Predicted ZT of material #{} selected based on MEI: {:.2f} +/- {:.2f}'.format(mei_selection, y_pred[mei_selection], y_std[mei_selection]))
print('Predicted ZT of material #{} selected based on MLI: {:.2f} +/- {:.2f}'.format(mli_selection, y_pred[mli_selection], y_std[mli_selection]))
Explanation: For MEI, we pick the highest predicted value. For MLI, we pick the material that has the highest probability of being better than any material in the training set. As we assume the predictions to be normally distributed, the probability of materials can be computed from the Z-score $Z = (y - y^*)/\sigma$ where $y^*$ is the maximum of the $y$ of the training set. Formally, the probability can be computed from the Z-score using the cumulative distribution function of the normal distribution. For our purposes, we can use the Z-score because the probability is a monotonic function of the Z-score (stated simply: the material with the highest probability will have the highest Z-score).
End of explanation
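# Optional check (illustrative): the Z-scores can be turned into explicit probabilities of
# improvement with the normal CDF; the ranking is the same as ranking by Z-score.
z_scores = np.divide(y_pred - np.max(y[in_train]), y_std)
prob_improvement = norm.cdf(z_scores)
print('Highest probability of improvement: {:.3f}'.format(prob_improvement.max()))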
n_steps = 32
all_inds = set(range(len(y)))
Explanation: For this particular iteration, the MEI and MLI strategies pick the same material. Depending on the random seed of this notebook and that used by lolo, you may see that the material picked by MLI has a lower predicted $ZT$ but a higher variance. According to the logic behind MLI, picking that entry will (1) yield a higher likelihood of finding a well-performing material and (2) lead to an improved model.
Step 3: Run an iterative search
Starting with the same initial training set, we will iteratively pick materials, add them to the training set, and retrain the model using 3 different strategies for picking entries: MEI, MLI, and randomly.
End of explanation
random_train = [set(np.where(in_train)[0].tolist())]
for i in range(n_steps):
# Get the current train set and search space
train_inds = set(random_train[-1]) # Last iteration
search_inds = sorted(all_inds.difference(train_inds))
# Pick an entry at random
train_inds.add(np.random.choice(search_inds))
# Add it to the list of training sets
random_train.append(train_inds)
Explanation: Random Selection
Just pick an entry at random, no need to train a model
End of explanation
mei_train = [set(np.where(in_train)[0].tolist())]
for i in range(n_steps):
# Get the current train set and search space
train_inds = sorted(set(mei_train[-1])) # Last iteration
search_inds = sorted(all_inds.difference(train_inds))
# Pick entry with the largest maximum value
model.fit(X[train_inds], y[train_inds])
y_pred = model.predict(X[search_inds])
train_inds.append(search_inds[np.argmax(y_pred)])
# Add it to the list of training sets
mei_train.append(set(train_inds))
Explanation: Maximum Expected Improvement
Pick the entry with the largest predicted value
End of explanation
mli_train = [set(np.where(in_train)[0].tolist())]
for i in range(n_steps):
# Get the current train set and search space
train_inds = sorted(set(mli_train[-1])) # Last iteration
search_inds = sorted(all_inds.difference(train_inds))
# Pick entry with the largest probability of improvement
model.fit(X[train_inds], y[train_inds])
y_pred, y_std = model.predict(X[search_inds], return_std=True)
train_inds.append(search_inds[np.argmax(np.divide(y_pred - np.max(y[train_inds]), y_std))])
# Add it to the list of training sets
mli_train.append(set(train_inds))
Explanation: Maximum Likelihood of Improvement
Pick the entry with the largest probability of improvement
End of explanation
fig, ax = plt.subplots()
for train_inds, label in zip([random_train, mei_train, mli_train], ['Random', 'MEI', 'MLI']):
ax.plot(np.arange(len(train_inds)), [max(y[list(t)]) for t in train_inds], label=label)
ax.set_xlabel('Number of New Experiments')
ax.set_ylabel('Best $ZT$ Found')
fig.set_size_inches(3.5, 2)
ax.legend()
fig.tight_layout()
Explanation: Plot the results
End of explanation |
14,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
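# Optional sanity check (illustrative): after normalization all values should fall in [0, 1].
# sample = np.random.randint(0, 256, size=(2, 32, 32, 3))
# assert normalize(sample).min() >= 0.0 and normalize(sample).max() <= 1.0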
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
rv = np.zeros((len(x), 10))
for i, val in enumerate(x):
rv[i][val] = 1
return rv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
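# Alternative sketch (illustrative, not required): the "don't reinvent the wheel" hint
# points at existing encoders, e.g. sklearn's LabelBinarizer produces the same
# 10-class one-hot encoding once fitted on the label range:
# from sklearn.preprocessing import LabelBinarizer
# lb = LabelBinarizer().fit(list(range(10)))
# one_hot = lb.transform([0, 9, 3])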
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, (None,) + tuple(image_shape), "x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, (None,) + (n_classes,), "y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, None, "keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
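# Usage note (illustrative): the explicit names are what let the trained model be reloaded
# later, e.g. the image input can then be fetched from the graph as the tensor "x:0".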
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
x_depth = x_tensor.get_shape().as_list()[-1]
W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_depth, conv_num_outputs], stddev=0.05))
# bias = tf.Variable(tf.random_normal([conv_num_outputs], stddev = 0.05))
bias = tf.Variable(tf.zeros([conv_num_outputs]))
net = tf.nn.conv2d(x_tensor, W, [1, conv_strides[0], conv_strides[1], 1], 'SAME')
net = tf.nn.bias_add(net, bias)
net = tf.nn.relu(net)
net = tf.nn.max_pool(net, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], 'SAME')
return net
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
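# Quick shape check (illustrative): with 'SAME' padding, a 2x2 pool with stride 2 halves
# the spatial dimensions, e.g. an input of shape (?, 32, 32, 3) with conv_num_outputs=16
# comes out as (?, 16, 16, 16).
# check_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
# print(conv2d_maxpool(check_x, 16, (3, 3), (1, 1), (2, 2), (2, 2)).get_shape())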
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
batch_size = x_tensor.get_shape().as_list()[0]
flattened = 1
for i, val in enumerate(x_tensor.get_shape().as_list()):
if i != 0:
flattened *= val
if batch_size == None:
batch_size = -1
return tf.reshape(x_tensor, [batch_size, flattened])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
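# Shortcut sketch (illustrative): the TF Layers (contrib) package mentioned above offers
# this operation directly, e.g. tf.contrib.layers.flatten(x_tensor).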
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
batch_size = x_tensor.get_shape().as_list()[0]
n = x_tensor.get_shape().as_list()[1]
W = tf.Variable(tf.truncated_normal([n, num_outputs], stddev = 0.05))
#bias = tf.Variable(tf.random_normal([num_outputs]))
bias = tf.Variable(tf.zeros([num_outputs]))
net = tf.matmul(x_tensor, W)
net = tf.add(net, bias)
net = tf.nn.relu(net)
return net
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
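# Shortcut sketch (illustrative): with the TF Layers option mentioned above, this layer
# could be written as tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu).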
def output(x_tensor, num_outputs):
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
batch_size = x_tensor.get_shape().as_list()[0]
n = x_tensor.get_shape().as_list()[1]
W = tf.Variable(tf.truncated_normal([n, num_outputs], stddev = 0.05))
# bias = tf.Variable(tf.random_normal([num_outputs]))
bias = tf.Variable(tf.zeros([num_outputs]))
net = tf.matmul(x_tensor, W)
net = tf.add(net, bias)
return net
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
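# Shortcut sketch (illustrative): the TF Layers equivalent here is
# tf.layers.dense(x_tensor, num_outputs) with no activation, matching the note above.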
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv = x
conv = conv2d_maxpool(conv, 32, (3, 3), (1, 1), (2, 2), (2, 2))
# conv = conv2d_maxpool(conv, 128, (3, 3), (1, 1), (2, 2), (2, 2))
conv = conv2d_maxpool(conv, 256, (3, 3), (1, 1), (2, 2), (2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
full = flat
full = fully_conn(full, 1024)
full = tf.nn.dropout(full, keep_prob)
# full = fully_conn(full, 1024)
# full = tf.nn.dropout(full, keep_prob)
# full = fully_conn(full, 1024)
# full = tf.nn.dropout(full, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(full, 10)
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict = { x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
costval = session.run(cost, feed_dict = {x: feature_batch, y: label_batch, keep_prob: 1.0})
accval = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0})
print("loss: " + str(costval) + "; accuracy: " + str(accval))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 40
batch_size = 256
keep_probability = 0.75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
14,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas
Fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python.
pandas is well suited for many different kinds of data
Step1: Series
Creating a Series by passing a list of values, letting pandas create a default integer index
Step2: DataFrame
Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns
Step3: Creating a DataFrame by passing a dict of objects that can be converted to series-like.
Step4: Panel
items
Step5: Viewing data
Step6: See NumPy data
Step7: Statistic Summary
Step8: Transposing data
Step9: Sorting
Step10: Selection
Get a column
Step11: Get rows
Step12: Selection by Label
Step13: Adding data
Step14: Boolean indexing
Step15: Stats operations
Step16: Optimized pandas data access
It is recommended to use the optimized pandas data access methods .at, .iat, .loc, .iloc and .ix. | Python Code:
import pandas as pd
import numpy as np
Explanation: Pandas
Fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python.
pandas is well suited for many different kinds of data:
Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure
In this section we will be looking at some of the basics functions that Pandas can perform
Panda data structures
Series - 1D array
DataFrame - 2D array
Panel - 3D array
Data types - dtypes
Types in pandas objects:
- float
- int
- bool
- datetime64[ns] and datetime64[ns, tz] (in >= 0.17.0)
- timedelta[ns]
- category (in >= 0.15.0)
- object
dtypes have item sizes, e.g. int64 and int32
Standard libraries
End of explanation
s = pd.Series([1,3,5,np.nan,6,8])
s
Explanation: Series
Creating a Series by passing a list of values, letting pandas create a default integer index:
End of explanation
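# A Series can also be given an explicit index (illustrative):
pd.Series([10, 20, 30], index=['a', 'b', 'c'])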
dates = pd.date_range('20130101', periods=6)
dates
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
Explanation: DataFrame
Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns:
End of explanation
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2
df2.dtypes
Explanation: Creating a DataFrame by passing a dict of objects that can be converted to series-like.
End of explanation
wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
major_axis=pd.date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
wp
Explanation: Panel
items: axis 0, each item corresponds to a DataFrame contained inside
major_axis: axis 1, it is the index (rows) of each of the DataFrames
minor_axis: axis 2, it is the columns of each of the DataFrames
End of explanation
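# Selecting one item returns the DataFrame stored under it (illustrative):
wp['Item1']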
df.head()
df.tail(3)
Explanation: Viewing data
End of explanation
df.index
df.columns
df.values
Explanation: See NumPy data
End of explanation
df.describe()
Explanation: Statistic Summary
End of explanation
df.T
Explanation: Transposing data
End of explanation
df.sort_index(axis=1, ascending=False)
Explanation: Sorting
End of explanation
df['A']
Explanation: Selection
Get a column
End of explanation
# By index
df[0:3]
#By Value
df['20130102':'20130104']
Explanation: Get rows
End of explanation
df.loc[dates[0]]
# Limit columns
df.loc[:,['A','B']]
df_stock = pd.DataFrame({'Stocks': ["AAPL","CA","CTXS","FIS","MA"],
'Values': [126.17,31.85,65.38,64.08,88.72]})
df_stock
Explanation: Selection by Label
End of explanation
df_stock = df_stock.append({"Stocks":"GOOG", "Values":523.53}, ignore_index=True)
df_stock
Explanation: Adding data
End of explanation
df_stock[df_stock["Values"]>65]
Explanation: Boolean indexing
End of explanation
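# Combined conditions sketch (illustrative): boolean masks can be combined with & and |.
df_stock[(df_stock["Values"] > 60) & (df_stock["Values"] < 100)]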
df_stock.mean()
# Per column
df.mean()
# Per row
df.mean(1)
Explanation: Stats operations
End of explanation
big_dates = pd.date_range('20130101', periods=60000)
big_dates
big_df = pd.DataFrame(np.random.randn(60000,4), index=big_dates, columns=list('ABCD'))
big_df
big_df['20200102':'20200104']
big_df.loc['20200102':'20200104']
%timeit big_df['20200102':'20200104']
%timeit big_df.loc['20200102':'20200104']
big_df[30000:30003]
big_df.iloc[30000:30003]
%timeit big_df[30000:30003]
%timeit big_df.iloc[30000:30003]
Explanation: Optimized pandas data access
It is recommended to use the optimized pandas data access methods .at, .iat, .loc and .iloc; the older .ix accessor is deprecated and should be avoided.
End of explanation |
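# Scalar access sketch (illustrative): .at and .iat are the fast label- and position-based
# scalar accessors referred to above.
big_df.at[big_dates[0], 'A']
big_df.iat[0, 0]
%timeit big_df.at[big_dates[0], 'A']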
14,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Tuples
"Though tuples may seem similar to lists, they are often used in different situations and for different purposes. Tuples are immutable, and usually contain an heterogeneous sequence of elements that are accessed via unpacking (or indexing (or even by attribute in the case of namedtuples). Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list." - Python Documentation
"Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences." - StackOverflow
Parentheses are optional, but useful.
Step2: Dictionaries
"A dictionary is like an address-book where you can find the address or contact details of a person by knowing only his/her name i.e. we associate keys (name) with values (details). Note that the key must be unique just like you cannot find out the correct information if you have two persons with the exact same name." - A Byte Of Python
Step3: Sets
Sets are unordered collections of simple objects. | Python Code:
# Create a list of countries, then print the results
allies = ['USA','UK','France','New Zealand',
'Australia','Canada','Poland']; allies
# Print the length of the list
len(allies)
# Add an item to the list, then print the results
allies.append('China'); allies
# Sort list, then print the results
allies.sort(); allies
# Reverse sort list, then print the results
allies.reverse(); allies
# View the first item of the list
allies[0]
# View the last item of the list
allies[-1]
# Delete the item in the list
del allies[0]; allies
# Add a numeric value to a list of strings
allies.append(3442); allies
Explanation: Title: Data Structure Basics
Slug: data_structure_basics
Summary: Data Structure Basics
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Lists
"A list is a data structure that holds an ordered collection of items i.e. you can store a sequence of items in a list." - A Byte Of Python
Lists are mutable.
End of explanation
# Create a tuple of state names
usa = ('Texas', 'California', 'Maryland'); usa
# Create a tuple of countries
# (notice the USA has a state names in the nested tuple)
countries = ('canada', 'mexico', usa); countries
# View the third item of the top tuple
countries[2]
# View the third item of the third tuple
countries[2][2]
Explanation: Tuples
"Though tuples may seem similar to lists, they are often used in different situations and for different purposes. Tuples are immutable, and usually contain an heterogeneous sequence of elements that are accessed via unpacking (or indexing (or even by attribute in the case of namedtuples). Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list." - Python Documentation
"Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences." - StackOverflow
Parentheses are optional, but useful.
End of explanation
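# Tuples are immutable (illustrative): trying to modify one raises an error.
# usa[0] = 'Virginia'  # TypeError: 'tuple' object does not support item assignment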
# Create a dictionary with key:value combos
staff = {'Chris' : '[email protected]',
'Jake' : '[email protected]',
'Ashley' : '[email protected]',
'Shelly' : '[email protected]'
}
# Print the value using the key
staff['Chris']
# Delete a dictionary entry based on the key
del staff['Chris']; staff
# Add an item to the dictionary
staff['Guido'] = '[email protected]'; staff
Explanation: Dictionaries
"A dictionary is like an address-book where you can find the address or contact details of a person by knowing only his/her name i.e. we associate keys (name) with values (details). Note that the key must be unique just like you cannot find out the correct information if you have two persons with the exact same name." - A Byte Of Python
End of explanation
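# Looking up a missing key with .get avoids a KeyError (illustrative; 'Chris' was removed above):
staff.get('Chris', 'not found')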
# Create a set of BRI countries
BRI = set(['brazil', 'russia', 'india'])
# Is India in the set BRI?
'india' in BRI
# Is the US in the set BRI?
'usa' in BRI
# Create a copy of BRI called BRIC
BRIC = BRI.copy()
# Add China to BRIC
BRIC.add('china')
# Is BRIC a super-set of BRI?
BRIC.issuperset(BRI)
# Remove Russia from BRI
BRI.remove('russia')
# What items are in the intersection of BRI and BRIC?
BRI & BRIC
Explanation: Sets
Sets are unordered collections of simple objects.
End of explanation |
14,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecast to Power Tutorial
This tutorial will walk through the process of going from Unidata forecast model data to AC power using the SAPM.
Table of contents
Step1: Load Forecast data
pvlib forecast module only includes several models. To see the full list of forecast models visit the Unidata website
Step2: Define some PV system parameters.
Step3: Let's look at the downloaded version of the forecast data.
Step4: This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
Step5: Plot the GHI data. Most pvlib forecast models derive this data from the weather models' cloud cover data.
Step6: Calculate modeling intermediates
Before we can calculate power for all the forecast times, we will need to calculate
Step7: The funny looking jump in the azimuth is just due to the coarse time sampling in the forecast data.
DNI ET
Calculate extra terrestrial radiation. This is needed for many plane of array diffuse irradiance models.
Step8: Airmass
Calculate airmass. Lots of model options here, see the atmosphere module tutorial for more details.
Step9: The funny appearance is due to aliasing and setting invalid numbers equal to NaN. Replot just a day or two and you'll see that the numbers are right.
POA sky diffuse
Use the Hay Davies model to calculate the plane of array diffuse sky radiation. See the irradiance module tutorial for comparisons of different models.
Step10: POA ground diffuse
Calculate ground diffuse. We specified the albedo above. You could have also provided a string to the surface_type keyword argument.
Step11: AOI
Calculate AOI
Step12: Note that AOI has values greater than 90 deg. This is ok.
POA total
Calculate POA irradiance
Step13: Cell and module temperature
Calculate pv cell and module temperature
Step14: DC power using SAPM
Get module data from the web.
Step15: Choose a particular module
Step16: Run the SAPM using the parameters we calculated above.
Step17: AC power using SAPM
Get the inverter database from the web
Step18: Choose a particular inverter
Step19: Plot just a few days.
Step20: Some statistics on the AC power | Python Code:
# built-in python modules
import datetime
import inspect
import os
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
# seaborn makes your plots look better
try:
import seaborn as sns
sns.set(rc={"figure.figsize": (12, 6)})
sns.set_color_codes()
except ImportError:
print('We suggest you install seaborn using conda or pip and rerun this cell')
# finally, we import the pvlib library
from pvlib import solarposition,irradiance,atmosphere,pvsystem
from pvlib.forecast import GFS, NAM, NDFD, RAP, HRRR
Explanation: Forecast to Power Tutorial
This tutorial will walk through the process of going from Unidata forecast model data to AC power using the SAPM.
Table of contents:
1. Setup
2. Load Forecast data
2. Calculate modeling intermediates
2. DC power using SAPM
2. AC power using SAPM
This tutorial has been tested against the following package versions:
* Python 3.5.2
* IPython 5.0.0
* pandas 0.18.0
* matplotlib 1.5.1
* netcdf4 1.2.1
* siphon 0.4.0
It should work with other Python and Pandas versions. It requires pvlib >= 0.3.0 and IPython >= 3.0.
Authors:
* Derek Groenendyk (@moonraker), University of Arizona, November 2015
* Will Holmgren (@wholmgren), University of Arizona, November 2015, January 2016, April 2016, July 2016
Setup
These are just your standard interactive scientific python imports that you'll get very used to using.
End of explanation
# Choose a location.
# Tucson, AZ
latitude = 32.2
longitude = -110.9
tz = 'US/Mountain'
Explanation: Load Forecast data
pvlib forecast module only includes several models. To see the full list of forecast models visit the Unidata website:
http://www.unidata.ucar.edu/data/#tds
End of explanation
surface_tilt = 30
surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention
albedo = 0.2
start = pd.Timestamp(datetime.date.today(), tz=tz) # today's date
end = start + pd.Timedelta(days=7) # 7 days from today
# Define forecast model
fm = GFS()
#fm = NAM()
#fm = NDFD()
#fm = RAP()
#fm = HRRR()
# Retrieve data
forecast_data = fm.get_processed_data(latitude, longitude, start, end)
Explanation: Define some PV system parameters.
End of explanation
forecast_data.head()
Explanation: Let's look at the downloaded version of the forecast data.
End of explanation
forecast_data['temp_air'].plot()
Explanation: This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
End of explanation
ghi = forecast_data['ghi']
ghi.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)')
Explanation: Plot the GHI data. Most pvlib forecast models derive this data from the weather models' cloud cover data.
End of explanation
# retrieve time and location parameters
time = forecast_data.index
a_point = fm.location
solpos = a_point.get_solarposition(time)
#solpos.plot()
Explanation: Calculate modeling intermediates
Before we can calculate power for all the forecast times, we will need to calculate:
* solar position
* extra terrestrial radiation
* airmass
* angle of incidence
* POA sky and ground diffuse radiation
* cell and module temperatures
The approach here follows that of the pvlib tmy_to_power notebook. You will find more details regarding this approach and the values being calculated in that notebook.
Solar position
Calculate the solar position for all times in the forecast data.
The default solar position algorithm is based on Reda and Andreas (2004). Our implementation is pretty fast, but you can make it even faster if you install numba and add method='nrel_numba' to the function call below.
End of explanation
dni_extra = irradiance.extraradiation(fm.time)
#dni_extra.plot()
#plt.ylabel('Extra terrestrial radiation ($W/m^{-2}$)')
Explanation: The funny looking jump in the azimuth is just due to the coarse time sampling in the forecast data.
DNI ET
Calculate extra terrestrial radiation. This is needed for many plane of array diffuse irradiance models.
End of explanation
airmass = atmosphere.relativeairmass(solpos['apparent_zenith'])
#airmass.plot()
#plt.ylabel('Airmass')
Explanation: Airmass
Calculate airmass. Lots of model options here, see the atmosphere module tutorial for more details.
End of explanation
poa_sky_diffuse = irradiance.haydavies(surface_tilt, surface_azimuth,
forecast_data['dhi'], forecast_data['dni'], dni_extra,
solpos['apparent_zenith'], solpos['azimuth'])
#poa_sky_diffuse.plot()
#plt.ylabel('Irradiance ($W/m^{-2}$)')
Explanation: The funny appearance is due to aliasing and setting invalid numbers equal to NaN. Replot just a day or two and you'll see that the numbers are right.
POA sky diffuse
Use the Hay Davies model to calculate the plane of array diffuse sky radiation. See the irradiance module tutorial for comparisons of different models.
End of explanation
poa_ground_diffuse = irradiance.grounddiffuse(surface_tilt, ghi, albedo=albedo)
#poa_ground_diffuse.plot()
#plt.ylabel('Irradiance ($W/m^{-2}$)')
Explanation: POA ground diffuse
Calculate ground diffuse. We specified the albedo above. You could have also provided a string to the surface_type keyword argument.
End of explanation
aoi = irradiance.aoi(surface_tilt, surface_azimuth, solpos['apparent_zenith'], solpos['azimuth'])
#aoi.plot()
#plt.ylabel('Angle of incidence (deg)')
Explanation: AOI
Calculate AOI
End of explanation
poa_irrad = irradiance.globalinplane(aoi, forecast_data['dni'], poa_sky_diffuse, poa_ground_diffuse)
poa_irrad.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)')
plt.title('POA Irradiance')
Explanation: Note that AOI has values greater than 90 deg. This is ok.
POA total
Calculate POA irradiance
End of explanation
temperature = forecast_data['temp_air']
wnd_spd = forecast_data['wind_speed']
pvtemps = pvsystem.sapm_celltemp(poa_irrad['poa_global'], wnd_spd, temperature)
pvtemps.plot()
plt.ylabel('Temperature (C)')
Explanation: Cell and module temperature
Calculate pv cell and module temperature
End of explanation
sandia_modules = pvsystem.retrieve_sam('SandiaMod')
Explanation: DC power using SAPM
Get module data from the web.
End of explanation
sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_
sandia_module
Explanation: Choose a particular module
End of explanation
effective_irradiance = pvsystem.sapm_effective_irradiance(poa_irrad.poa_direct, poa_irrad.poa_diffuse,
airmass, aoi, sandia_module)
sapm_out = pvsystem.sapm(effective_irradiance, pvtemps['temp_cell'], sandia_module)
#print(sapm_out.head())
sapm_out[['p_mp']].plot()
plt.ylabel('DC Power (W)')
Explanation: Run the SAPM using the parameters we calculated above.
End of explanation
sapm_inverters = pvsystem.retrieve_sam('sandiainverter')
Explanation: AC power using SAPM
Get the inverter database from the web
End of explanation
sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_']
sapm_inverter
p_ac = pvsystem.snlinverter(sapm_out.v_mp, sapm_out.p_mp, sapm_inverter)
p_ac.plot()
plt.ylabel('AC Power (W)')
plt.ylim(0, None)
Explanation: Choose a particular inverter
End of explanation
p_ac[start:start+pd.Timedelta(days=2)].plot()
Explanation: Plot just a few days.
End of explanation
p_ac.describe()
p_ac.index.freq
# integrate power to find energy yield over the forecast period
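# (assuming the GFS forecast is sampled every 3 hours, multiplying each power value by 3 approximates watt-hours)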
p_ac.sum() * 3
Explanation: Some statistics on the AC power
End of explanation |
14,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The objective of NMF is to find two non-negative matrices (W, H) whose product approximates the non-negative matrix X.
This factorization can be used for example for dimensionality reduction, source separation or topic extraction.
Using Scikit-learn
http
Step1: Defining the model
Step2: A test Array
Step3: Now let's see how close WxH is to X, and let's call the product crossValue
Step4: Using an independent Implementation from
http | Python Code:
import numpy as np
from sklearn.decomposition import NMF
Explanation: The objective of NMF is to find two non-negative matrices (W, H) whose product approximates the non-negative matrix X.
This factorization can be used for example for dimensionality reduction, source separation or topic extraction.
Using Scikit-learn
http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html
End of explanation
K = 4
model = NMF(n_components=K)
model
Explanation: Defining the model
End of explanation
Original = [
[5,3,0,1],
[4,0,0,1],
[3,2,0,0],
[7,0,4,1],
[0,2,5,0]
]
Original = np.array(Original)
W = model.fit_transform(Original)
H = model.components_
print("W\n",np.round(W,2))
print("H\n",np.round(H,2))
Explanation: A test Array
End of explanation
crossValue = np.dot(W,H)
print("crossValue \n",crossValue)
print("rounded Values\n",np.round(crossValue,2))
print("Original\n",Original)
import matplotlib.pyplot as plt
def plotCompare(Original,prediction):
N = Original.shape[0]
last = Original.shape[1]-1
ind = np.arange(N) # the x locations for the groups
width = 0.17 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, Original[:,last], width, color='r')
rects2 = ax.bar(ind + width, prediction[:,last], width, color='b')
# rects3 = ax.bar(ind + width+width, np.round(prediction[:,last],2), width, color='g')
# add some text for labels, title and axes ticks
ax.set_ylabel('Last Value')
ax.set_title('Row Values')
ax.set_xticks(ind+ width / last)
ax.set_xticklabels(('G1', 'G2', 'G3', 'G4','G5','G6'))
ax.legend((rects1[0], rects2[0]), ('Original', 'Cross Value'))
plt.show()
plotCompare(Original,crossValue)
Explanation: Now let's see how close WxH is to X, and let's call the product crossValue
End of explanation
def matrix_factorization(R, K = 2, steps=5000, alpha=0.0002, beta=0.02,error = 0.001):
W = np.random.rand(len(R),K)
H = np.random.rand(K,len(R[0]))
for step in range(steps):
for i in range(len(R)):
for j in range(len(R[i])):
if R[i][j] > 0:
eij = R[i][j] - np.dot(W[i,:],H[:,j])
for k in range(K):
W[i][k] = W[i][k] + alpha * (2 * eij * H[k][j] - beta * W[i][k])
H[k][j] = H[k][j] + alpha * (2 * eij * W[i][k] - beta * H[k][j])
# eR = np.dot(W,H)
e = 0
for i in range(len(R)):
for j in range(len(R[i])):
if R[i][j] > 0:
e = e + pow(R[i][j] - np.dot(W[i,:],H[:,j]), 2)
for k in range(K):
e = e + (beta/2) * ( pow(W[i][k],2) + pow(H[k][j],2) )
if e < error:
break
return W,H
W, H = matrix_factorization(Original,K)
W
H
prediction = np.dot(W,H)
print(prediction)
np.around(prediction,2)
Original
plotCompare(Original,prediction)
Explanation: Using an independent Implementation from
http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/
Modified by DavidGutierrez
<br><b>R :</b> A matrix to be factorized, dimension N x M
<br><b>K :</b> The number of latent features
<br><b>Steps :</b> The maximum number of steps to perform the optimisation
<br><b>Alpha :</b> The learning rate
<br><b>Beta :</b> The regularization parameter
<br>The final matrices W and H
End of explanation |
14,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About
This notebook demonstrates a stacking machine learning algorithm - folding - which physicists use in their analyses.
Step1: Loading data
Step2: Training variables
Step3: Folding strategy - stacking algorithm
It implements the same interface as all classifiers, but with some differences
Step4: Define folding model
Step5: Default prediction (predict i_th_ fold by i_th_ classifier)
Step6: Voting prediction (predict the i-th fold by all classifiers and take the value calculated by vote_function)
Step7: Comparison of folds
Again use ClassificationReport class to compare different results. For folding classifier this report uses only default prediction.
Report training dataset
Step8: Signal distribution for each fold
Use mask parameter to plot distribution for the specific fold
Step9: Background distribution for each fold
Step10: ROCs (each fold used as test dataset)
Step11: Report for test dataset
NOTE | Python Code:
%pylab inline
Explanation: About
This notebook demonstrates a stacking machine learning algorithm - folding - which physicists use in their analyses.
End of explanation
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score
sig_data = pandas.read_csv('toy_datasets/toyMC_sig_mass.csv', sep='\t')
bck_data = pandas.read_csv('toy_datasets/toyMC_bck_mass.csv', sep='\t')
labels = numpy.array([1] * len(sig_data) + [0] * len(bck_data))
data = pandas.concat([sig_data, bck_data])
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.7)
Explanation: Loading data
End of explanation
variables = ["FlightDistance", "FlightDistanceError", "IP", "VertexChi2", "pt", "p0_pt", "p1_pt", "p2_pt", 'LifeTime', 'dira']
data = data[variables]
Explanation: Training variables
End of explanation
from rep.estimators import SklearnClassifier
from sklearn.ensemble import GradientBoostingClassifier
Explanation: Folding strategy - stacking algorithm
It implements the same interface as all classifiers, but with some difference:
all prediction methods have additional parameter "vote_function" (example folder.predict(X, vote_function=None)), which is used to combine all classifiers' predictions. By default "mean" is used as "vote_function"
End of explanation
from rep.metaml import FoldingClassifier
n_folds = 4
folder = FoldingClassifier(GradientBoostingClassifier(), n_folds=n_folds, features=variables)
folder.fit(train_data, train_labels)
Explanation: Define folding model
End of explanation
folder.predict_proba(train_data)
Explanation: Default prediction (predict i_th_ fold by i_th_ classifier)
End of explanation
# definition of mean function, which combines all predictions
def mean_vote(x):
return numpy.mean(x, axis=0)
folder.predict_proba(test_data, vote_function=mean_vote)
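# Any function that reduces the per-fold predictions can serve as a vote_function;
# for example, a median-based vote (a sketch using numpy, which is imported above)
def median_vote(x):
    return numpy.median(x, axis=0)
folder.predict_proba(test_data, vote_function=median_vote)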
Explanation: Voting prediction (predict the i-th fold by all classifiers and take the value calculated by vote_function)
End of explanation
from rep.data.storage import LabeledDataStorage
from rep.report import ClassificationReport
# add folds_column to dataset to use mask
train_data["FOLDS"] = folder._get_folds_column(len(train_data))
lds = LabeledDataStorage(train_data, train_labels)
report = ClassificationReport({'folding': folder}, lds)
Explanation: Comparison of folds
Again use ClassificationReport class to compare different results. For folding classifier this report uses only default prediction.
Report training dataset
End of explanation
for fold_num in range(n_folds):
report.prediction_pdf(mask="FOLDS == %d" % fold_num, labels_dict={1: 'sig fold %d' % fold_num}).plot()
Explanation: Signal distribution for each fold
Use mask parameter to plot distribution for the specific fold
End of explanation
for fold_num in range(n_folds):
report.prediction_pdf(mask="FOLDS == %d" % fold_num, labels_dict={0: 'bck fold %d' % fold_num}).plot()
Explanation: Background distribution for each fold
End of explanation
for fold_num in range(n_folds):
report.roc(mask="FOLDS == %d" % fold_num).plot()
Explanation: ROCs (each fold used as test dataset)
End of explanation
lds = LabeledDataStorage(test_data, test_labels)
report = ClassificationReport({'folding': folder}, lds)
report.prediction_pdf().plot(new_plot=True, figsize = (9, 4))
report.roc().plot(xlim=(0.5, 1))
Explanation: Report for test dataset
NOTE: Here vote function is None, so default prediction is used
End of explanation |
14,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
REINFORCE in Sonnet
This notebook implements a basic reinforce algorithm a.k.a. policy gradient for CartPole env.
It has been deliberately written to be as simple and human-readable as possible.
Authors
Step1: Building the network for REINFORCE
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
Step2: Loss function and updates
We now need to define objective and update over policy gradient.
The objective function can be defined thusly
Step5: Computing cumulative rewards
Step7: Playing the game
Step9: Results & video | Python Code:
import gym
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0")
#gym compatibility: unwrap TimeLimit
if hasattr(env,'env'):
env=env.env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
Explanation: REINFORCE in Sonnet
This notebook implements a basic reinforce algorithm a.k.a. policy gradient for CartPole env.
It has been deliberately written to be as simple and human-readable as possible.
Authors: Practical_RL course team
The notebook assumes that you have openai gym installed.
In case you're running on a server, use xvfb
End of explanation
import tensorflow as tf
import sonnet as snt
#create input variables. We only need <s,a,R> for REINFORCE
states = tf.placeholder('float32',(None,)+state_dim,name="states")
actions = tf.placeholder('int32',name="action_ids")
cumulative_rewards = tf.placeholder('float32', name="cumulative_returns")
def make_network(inputs):
lin1 = snt.Linear(output_size=100)(inputs)
elu1 = tf.nn.elu(lin1)
logits = snt.Linear(output_size=n_actions)(elu1)
policy = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
return logits, policy, log_policy
net = snt.Module(make_network,name="policy_network")
logits,policy,log_policy = net(states)
#utility function to pick action in one given state
get_action_proba = lambda s: policy.eval({states:[s]})[0]
Explanation: Building the network for REINFORCE
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
End of explanation
#REINFORCE objective function
actions_1hot = tf.one_hot(actions,n_actions)
log_pi_a = -tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=actions_1hot)
J = tf.reduce_mean(log_pi_a * cumulative_rewards)
#regularize with entropy
entropy = -tf.reduce_mean(policy*log_policy)
#all network weights
all_weights = net.get_variables()
#weight updates. maximizing J is same as minimizing -J
loss = -J -0.1*entropy
update = tf.train.AdamOptimizer().minimize(loss,var_list=all_weights)
Explanation: Loss function and updates
We now need to define objective and update over policy gradient.
The objective function can be defined thusly:
$$ J \approx \sum_i \log \pi_\theta (a_i | s_i) \cdot R(s_i,a_i) $$
When you compute gradient of that function over network weights $ \theta $, it will become exactly the policy gradient.
End of explanation
def get_cumulative_rewards(rewards, #rewards at each step
gamma = 0.99 #discount for reward
):
    """
    take a list of immediate rewards r(s,a) for the whole session
    compute cumulative rewards R(s,a) (a.k.a. G(s,a) in Sutton '16)
    R_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
    The simple way to compute cumulative rewards is to iterate from last to first time tick
    and compute R_t = r_t + gamma*R_{t+1} recurrently
    You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
    """
cumulative_rewards = []
R = 0
for r in rewards[::-1]:
R = r + gamma*R
cumulative_rewards.insert(0,R)
return cumulative_rewards
assert len(get_cumulative_rewards(range(100))) == 100
assert np.allclose(get_cumulative_rewards([0,0,1,0,0,1,0],gamma=0.9),[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards([0,0,1,-2,3,-4,0],gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards([0,0,1,2,3,4,0],gamma=0), [0, 0, 1, 2, 3, 4, 0])
print("looks good!")
def train_step(_states,_actions,_rewards):
    """given full session, trains agent with policy gradient"""
_cumulative_rewards = get_cumulative_rewards(_rewards)
update.run({states:_states,actions:_actions,cumulative_rewards:_cumulative_rewards})
Explanation: Computing cumulative rewards
End of explanation
def generate_session(t_max=1000):
    """play env with REINFORCE agent and train at the session end"""
#arrays to record session
states,actions,rewards = [],[],[]
s = env.reset()
for t in range(t_max):
#action probabilities array aka pi(a|s)
action_probas = get_action_proba(s)
a = np.random.choice(n_actions,p=action_probas)
new_s,r,done,info = env.step(a)
#record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done: break
train_step(states,actions,rewards)
return sum(rewards)
s = tf.InteractiveSession()
s.run(tf.global_variables_initializer())
for i in range(100):
rewards = [generate_session() for _ in range(100)] #generate new sessions
print ("mean reward:%.3f"%(np.mean(rewards)))
if np.mean(rewards) > 300:
print ("You Win!")
break
Explanation: Playing the game
End of explanation
#record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
  <source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be the _last_ video. Try other indices
#That's all, thank you for your attention!
Explanation: Results & video
End of explanation |
14,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Checkerboard Microstructure
Introduction - What are 2-Point Spatial Correlations (also called 2-Point Statistics)?
The purpose of this example is to introduce 2-point spatial correlations and how they are computed, using PyMKS.
The example starts with some introductory information about spatial correlations. PyMKS is used to compute both the periodic and non-periodic 2-point spatial correlations (also referred to as 2-point statistics or autocorrelations and crosscorrelations) for a checkerboard microstructure. This is a relatively simple example that allows an easy discussion of how the spatial correlations capture the main features seen in the original microstructure.
n-Point Spatial Correlations
1-Point Spatial Correlations (or 1-point statistics)
n-point spatial correlations provide a way rigorously quantify material structure, using statistics. As an introduction n-point spatial correlations, let's first discuss 1-point statistics. 1-point statistics are the probabilities that a specified local state will be found in any randomly selected spatial bin in a microstructure [1][2][3]. In this checkerboard example discussed here, there are two possible local states, one is colored white and the other is colored black. 1-point statistics compute the volume fractions of the local states in the microstructure. 1-point statistics are computed as
$$ f[l] = \frac{1}{S} \sum_s m[s,l] $$
In this equation, $f[l]$ is the probability of finding the local state $l$ in any randomly selected spatial bin in the microstructure, $m[s, l]$ is the microstructure function (the digital representation of the microstructure), $S$ is the total number of spatial bins in the microstructure and $s$ refers to a specific spatial bin.
While 1-point statistics provide information on the relative amounts of the different local states, it does not provide any information about how those local states are spatially arranged in the microstructure. Therefore, 1-point statistics are a limited set of metrics to describe the structure of materials.
2-Point Spatial Correlations
2-point spatial correlations (also known as 2-point statistics) contain information about the fractions of local states as well as the first order information on how the different local states are distributed in the microstructure.
2-point statistics can be thought of as the probability of having a vector placed randomly in the microstructure and having one end of the vector be on one specified local state and the other end on another specified local state. This vector could have any length or orientation that the discrete microstructure allows. The equation for 2-point statistics can found below.
$$ f[r \vert l, l'] = \frac{1}{S} \sum_s m[s, l] m[s + r, l'] $$
In this equation $ f[r \vert l, l']$ is the conditional probability of finding the local states $l$ and $l'$ at a distance and orientation away from each other defined by the vector $r$. All other variables are the same as those in the 1-point statistics equation. In the case that we have an eigen microstructure function (it only contains values of 0 or 1) and we are using an indicator basis, the $r=0$ vector will recover the 1-point statistics.
When the 2 local states are the same $l = l'$, it is referred to as an autocorrelation. If the 2 local states are not the same, it is referred to as a crosscorrelation.
Higher Order Spatial Statistics
Higher order spatial statistics are similar to 2-point statistics, in that they can be thought of in terms of conditional probabilities of finding specified local states separated by a prescribed set of vectors. 3-point statistics are the probabilities of finding three specified local states at the ends of a triangle (defined by 2 vectors) placed randomly in the material structure. 4-point statistics describe the probabilities of finding 4 local states at 4 locations (defined using 3 vectors) and so on.
While higher order statistics are a better metric to quantify the material structure, the 2-point statistics can be computed much faster than higher order spatial statistics, and still provide information about how the local states are distributed. For this reason, only 2-point statistics are implemented into PyMKS. Let us look at an example of computing the 2-point statistics for a checkerboard microstructure.
Step1: 2-Point Statistics for Checkerboard Microstructure
Let's first start with making a microstructure that looks like a 8 x 8 checkerboard. Although this type of microstructure may not resemble a physical system, it provides solutions that give some intuitive understanding of 2-point statistics.
We can create a checkerboard microstructure using make_checkerboard_microstructure function from pymks.datasets.
Step2: Now let's take a look at how the microstructure looks.
Step3: Compute Periodic 2-Point Statistics
Now that we have created a microstructure to work with, we can start computing the 2-point statistics. Let's start by looking at the periodic autocorrelations of the microstructure and then compute the periodic crosscorrelation. This can be done using the autocorrelate and crosscorrelate functions from pymks.states, and using the keyword argument periodic_axes to specify the axes that are periodic.
In order to compute 2-pont statistics, we need to select a basis to generate the microstructure function X_ from the microstructure X. Because we only have values of 0 or 1 in our microstructure we will using the PrimitiveBasis with n_states equal to 2.
Step4: We have now computed the autocorrelations.
Let's take a look at them using draw_autocorrelations from pymks.tools.
Step5: Notice that for this checkerboard microstructure, the autocorrelation for these 2 local states in the exact same. We have just computed the periodic autocorrelations for a perfectly periodic microstructure with equal volume fractions. In general this is not the case and the autocorrelations will be different, as we will see later in this example.
As mentioned in the introduction, because we using an indicator basis and the we have eigen microstructure functions (values are either 0 or 1), the (0, 0) vector equals the volume fraction.
Let's double check that both the phases have a volume fraction of 0.5.
Step6: We can compute the cross-correlation of the microstructure function, using the crosscorrelate function from pymks.stats
Step7: Let's take a look at the cross correlation using draw_crosscorrelations from pymks.tools.
Step8: Notice that the crosscorrelation is the exact opposite of the 2 autocorrelations. The (0, 0) vector has a value of 0. This statistic reflects the probablity of 2 phases having the same location. In our microstructure, this probability is zero, as we have not allowed the two phases (colored black and white) to co-exist in the same spatial voxel.
Let's check that it is zero.
Step9: Compute Non-Periodic 2-Point Statistics
We will now compute the non-periodic 2-point statistics for our microstructure. This time, rather than using the autocorrelate and crosscorrelate functions, we will use the correlate function from pymks.stats. The correlate function computes all of the autocorrelations and crosscorrelations at the same time. We will compute the non-periodic statistics by omitting the keyword argument periodic_axes.
Step10: All or some of the correlations can be viewed, using the draw_correlations function from pymks.tools. In this example we will look at all of them.
Step11: Notice that the maximum values for the autocorrelations are higher than 0.5. We can still show that the centers or the (0, 0) vectors are still equal to the volume fractions. | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
Explanation: Checkerboard Microstructure
Introduction - What are 2-Point Spatial Correlations (also called 2-Point Statistics)?
The purpose of this example is to introduce 2-point spatial correlations and how they are computed, using PyMKS.
The example starts with some introductory information about spatial correlations. PyMKS is used to compute both the periodic and non-periodic 2-point spatial correlations (also referred to as 2-point statistics or autocorrelations and crosscorrelations) for a checkerboard microstructure. This is a relatively simple example that allows an easy discussion of how the spatial correlations capture the main features seen in the original microstructure.
n-Point Spatial Correlations
1-Point Spatial Correlations (or 1-point statistics)
n-point spatial correlations provide a way to rigorously quantify material structure using statistics. As an introduction to n-point spatial correlations, let's first discuss 1-point statistics. 1-point statistics are the probabilities that a specified local state will be found in any randomly selected spatial bin in a microstructure [1][2][3]. In this checkerboard example discussed here, there are two possible local states, one is colored white and the other is colored black. 1-point statistics compute the volume fractions of the local states in the microstructure. 1-point statistics are computed as
$$ f[l] = \frac{1}{S} \sum_s m[s,l] $$
In this equation, $f[l]$ is the probability of finding the local state $l$ in any randomly selected spatial bin in the microstructure, $m[s, l]$ is the microstructure function (the digital representation of the microstructure), $S$ is the total number of spatial bins in the microstructure and $s$ refers to a specific spatial bin.
While 1-point statistics provide information on the relative amounts of the different local states, it does not provide any information about how those local states are spatially arranged in the microstructure. Therefore, 1-point statistics are a limited set of metrics to describe the structure of materials.
2-Point Spatial Correlations
2-point spatial correlations (also known as 2-point statistics) contain information about the fractions of local states as well as the first order information on how the different local states are distributed in the microstructure.
2-point statistics can be thought of as the probability of having a vector placed randomly in the microstructure and having one end of the vector be on one specified local state and the other end on another specified local state. This vector could have any length or orientation that the discrete microstructure allows. The equation for 2-point statistics can be found below.
$$ f[r \vert l, l'] = \frac{1}{S} \sum_s m[s, l] m[s + r, l'] $$
In this equation $ f[r \vert l, l']$ is the conditional probability of finding the local states $l$ and $l'$ at a distance and orientation away from each other defined by the vector $r$. All other variables are the same as those in the 1-point statistics equation. In the case that we have an eigen microstructure function (it only contains values of 0 or 1) and we are using an indicator basis, the $r=0$ vector will recover the 1-point statistics.
When the 2 local states are the same $l = l'$, it is referred to as an autocorrelation. If the 2 local states are not the same, it is referred to as a crosscorrelation.
Higher Order Spatial Statistics
Higher order spatial statistics are similar to 2-point statistics, in that they can be thought of in terms of conditional probabilities of finding specified local states separated by a prescribed set of vectors. 3-point statistics are the probabilities of finding three specified local states at the ends of a triangle (defined by 2 vectors) placed randomly in the material structure. 4-point statistics describe the probabilities of finding 4 local states at 4 locations (defined using 3 vectors) and so on.
While higher order statistics are a better metric to quantify the material structure, the 2-point statistics can be computed much faster than higher order spatial statistics, and still provide information about how the local states are distributed. For this reason, only 2-point statistics are implemented into PyMKS. Let us look at an example of computing the 2-point statistics for a checkerboard microstructure.
End of explanation
from pymks.datasets import make_checkerboard_microstructure
X = make_checkerboard_microstructure(square_size=21, n_squares=8)
Explanation: 2-Point Statistics for Checkerboard Microstructure
Let's first start with making a microstructure that looks like a 8 x 8 checkerboard. Although this type of microstructure may not resemble a physical system, it provides solutions that give some intuitive understanding of 2-point statistics.
We can create a checkerboard microstructure using make_checkerboard_microstructure function from pymks.datasets.
End of explanation
from pymks.tools import draw_microstructures
draw_microstructures(X)
print X.shape
Explanation: Now let's take a look at how the microstructure looks.
End of explanation
from pymks.stats import autocorrelate
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2)
X_ = prim_basis.discretize(X)
X_auto = autocorrelate(X_, periodic_axes=(0, 1))
Explanation: Compute Periodic 2-Point Statistics
Now that we have created a microstructure to work with, we can start computing the 2-point statistics. Let's start by looking at the periodic autocorrelations of the microstructure and then compute the periodic crosscorrelation. This can be done using the autocorrelate and crosscorrelate functions from pymks.states, and using the keyword argument periodic_axes to specify the axes that are periodic.
In order to compute 2-point statistics, we need to select a basis to generate the microstructure function X_ from the microstructure X. Because we only have values of 0 or 1 in our microstructure, we will use the PrimitiveBasis with n_states equal to 2.
End of explanation
from pymks.tools import draw_autocorrelations
correlations = [('black', 'black'), ('white', 'white')]
draw_autocorrelations(X_auto[0], autocorrelations=correlations)
Explanation: We have now computed the autocorrelations.
Let's take a look at them using draw_autocorrelations from pymks.tools.
End of explanation
center = (X_auto.shape[1] + 1) / 2
print 'Volume fraction of black phase', X_auto[0, center, center, 0]
print 'Volume fraction of white phase', X_auto[0, center, center, 1]
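# Direct check of the volume fractions from the microstructure itself
# (assuming the two phases in X are labeled 0 and 1)
print 'Volume fractions computed directly from X:', np.mean(X == 0), np.mean(X == 1)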
Explanation: Notice that for this checkerboard microstructure, the autocorrelations for these 2 local states are exactly the same. We have just computed the periodic autocorrelations for a perfectly periodic microstructure with equal volume fractions. In general this is not the case and the autocorrelations will be different, as we will see later in this example.
As mentioned in the introduction, because we are using an indicator basis and we have eigen microstructure functions (values are either 0 or 1), the (0, 0) vector equals the volume fraction.
Let's double check that both phases have a volume fraction of 0.5.
End of explanation
from pymks.stats import crosscorrelate
X_cross = crosscorrelate(X_, periodic_axes=(0, 1))
Explanation: We can compute the cross-correlation of the microstructure function, using the crosscorrelate function from pymks.stats
End of explanation
from pymks.tools import draw_crosscorrelations
correlations = [('black', 'white')]
draw_crosscorrelations(X_cross[0], crosscorrelations=correlations)
Explanation: Let's take a look at the cross correlation using draw_crosscorrelations from pymks.tools.
End of explanation
print 'Center value', X_cross[0, center, center, 0]
Explanation: Notice that the crosscorrelation is the exact opposite of the 2 autocorrelations. The (0, 0) vector has a value of 0. This statistic reflects the probability of 2 phases having the same location. In our microstructure, this probability is zero, as we have not allowed the two phases (colored black and white) to co-exist in the same spatial voxel.
Let's check that it is zero.
End of explanation
from pymks.stats import correlate
X_corr = correlate(X_)
Explanation: Compute Non-Periodic 2-Point Statistics
We will now compute the non-periodic 2-point statistics for our microstructure. This time, rather than using the autocorrelate and crosscorrelate functions, we will use the correlate function from pymks.stats. The correlate function computes all of the autocorrelations and crosscorrelations at the same time. We will compute the non-periodic statistics by omitting the keyword argument periodic_axes.
End of explanation
from pymks.tools import draw_correlations
correlations = [('black', 'black'), ('white', 'white'), ('black', 'white')]
draw_correlations(X_corr[0].real, correlations=correlations)
Explanation: All or some of the correlations can be viewed, using the draw_correlations function from pymks.tools. In this example we will look at all of them.
End of explanation
print 'Volume fraction of black phase', X_corr[0, center, center, 0]
print 'Volume fraction of white phase', X_corr[0, center, center, 1]
Explanation: Notice that the maximum values for the autocorrelations are higher than 0.5. We can still show that the centers or the (0, 0) vectors are still equal to the volume fractions.
End of explanation |
14,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Neural Networks
Step1: Part 01a -- Simple neural network as XOR gate using sigmoid activation function
Step3: Create sigmoid function
Step4: Plotting sigmoid activation function
Step5: Create an input data matrix as numpy array
Step6: Define the output data matrix as numpy array
Step7: Create a random number seed
Step8: Create a synapse matrix
Step9: Implement a single forward pass of the XOR input table
Create the input layer
Step10: Create the activation function for layer 1
Step11: Create an activation function for layer 2
Step12: Part 01b -- Backpropagation
This backpropagation example implments the logistic function
Step13: Updating the weights/synapses of the neural network
Step14: Training the simple XOR gate neural network
Step15: Part 01b -- Neural network based XOR gate using rectified linear units activation function
Step16: Plotting the rectified linear units (ReLU) activation function
Step17: Create input and output data
Step18: N is batch size(sample size); D_in is input dimension; H is hidden dimension; D_out is output dimension
Step19: Randomly initialize weights
Step20: ReLu as the activation function and squared error as the loss function
Step21: Part 02 -- Build a more complex neural network classifier using numpy
Step22: Plotting hyperbolic tan (tanh) activation function
Step23: Display plots inline and change default figure size
Step24: Generate a dataset and create a plot
Step25: Train the logistic regression classifier
Step26: Visualize the logistic regression classifier output
Step27: Plotting the decision boundary
Step28: Create a neural network
Step29: Gradient descent parameters
Step30: Compute loss function on the dataset
Step31: Function that predicts the output of either 0 or 1
Step32: This function learns parameters for the neural network and returns the model
Step33: Build a model with 50-dimensional hidden layer
Step34: Plot the decision boundary
Step35: Visualizing the hidden layers with varying sizes
Step36: Part 03 -- Example illustrating the importance of learning rate in hyper-parameter tuning
Step37: Plotting output of the model that failed to learn, given a set of hyper-parameters
Step38: Adjusting the learning rate such that the neural network re-starts learning
Step39: Plotting the decision boundary layer generated by an improved neural network model
Step40: Part 04 -- Building a neural network using tensorflow
Step41: Create a synthetic dataset for training and generating predictions
Step42: Variable objects store tensors in tensorflow.
Tensorflow considers all input data tensors.
Tensors are 3 dimensional matrices.
Constructing a linear model
Step43: Gradient descent optimizer
Step44: Training function
Step45: Initialize the variables for the computational graph
Step46: Launching the tensorflow computational graph
Step47: Training the model
Step48: Part 05 -- Neural net XOR gate solver using Tensorflow and Keras
Step49: Create input and output data
Step50: Create a neural network using Keras Sequential API
Step51: Select optimizer
Step52: Compile keras model
Step53: Load model weights
Step54: Summarize keras model
Step55: Visualize model architecture
Step56: Train model
Step57: Save model weights
Step58: Extra credit -- Activation functions in numpy | Python Code:
if input_1 == input_2:
output = 0
else:
output = 1
Explanation: Introduction to Neural Networks:
Author:
Dr. Rahul Remanan
Dr. Jesse Kanter
CEO and Chief Imagination Officer
Moad Computer
Launch this notebook in Google Colab
This is a hands-on workshop notebook on deep-learning using python 3. In this notebook, we will learn how to implement a neural network from scratch using numpy. Once we have implemented this network, we will visualize the predictions generated by the neural network and compare it with a logistic regression model, in the form of classification boundaries. This workshop aims to provide an intuitive understanding of neural networks.
In practical code development, there is seldom a use case for building a neural network from scratch. Neural networks in the real world are typically implemented using a deep-learning framework such as tensorflow. But building a neural network with very minimal dependencies helps one gain an understanding of how neural networks work. This understanding is essential to designing effective neural network models. Also, towards the end of the session, we will use the tensorflow deep-learning library to build a neural network, to illustrate the importance of building a neural network using a deep-learning framework.
Architecture of the basic XOR gate neural network:
XOR gate problem and neural networks -- Background:
The XOR gate is an interesting problem in neural networks. Marvin Minsky and Samuel Papert, in their book 'Perceptrons' (1969), argued that a single-layer perceptron cannot learn the XOR function.
Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. Examples of this kind of work are called "connectionism". After the publication of 'Perceptrons', interest in connectionism declined significantly, until the renewed interest following the works of John Hopfield and David Rumelhart.
The assertions in the book 'Perceptrons' were made in spite of Minsky's thorough knowledge that powerful perceptrons have multiple layers and that Rosenblatt's basic feed-forward perceptrons have three layers. In the book, to deceive unsuspecting readers, Minsky defined a perceptron as a two-layer machine that can handle only linearly separable problems and, for example, cannot solve the exclusive-OR problem. The Minsky-Papert collaboration is now believed by some knowledgeable scientists to have been a political maneuver and a hatchet job for contract funding.
Part 1 of this notebook explains how to build a very basic neural network in numpy. This perceptron like neural network is trained to predict the output of a XOR gate.
XOR gate table:
Image below shows an example of a linearly separable dataset:
Image below shows the XOR gate problem and no linear separation:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
Explanation: Part 01a -- Simple neural network as XOR gate using sigmoid activation function:
The XOR gate neural network implementation uses a two-layer perceptron with a sigmoid activation function. This portion of the notebook is a modified fork of the neural network implementation in numpy by Milo Harper.
Import the dependent libraries -- numpy and matplotlib:
End of explanation
import math
x = -1.2
y = 1/(1+math.exp(-x))
print (y)
import numpy as np
y = 1/(1+np.exp(-x))
print (y)
def sigmoid(x, derivative=False):
    """
    Parameters:
    x: input
    derivative: boolean to specify if the derivative of the function should be computed
    """
if derivative:
return (x*(1-x))
return (1/(1+np.exp(-x)))
sigmoid(-1.2, derivative=False)
x = -1.2
y_d = (1/(1+np.exp(x)))*(1-(1/(1+np.exp(x))))
y_d
sigmoid(0.23147521650098238, derivative=True)
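# Quick sanity check: compare the analytic derivative with a central finite-difference estimate
# (h here is just a small step size used for the numerical approximation)
h = 1e-6
numerical_derivative = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
analytic_derivative = sigmoid(sigmoid(x), derivative=True)
print (numerical_derivative, analytic_derivative)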
Explanation: Create sigmoid function:
The sigmoid function takes two input arguments: x and a boolean argument called 'derivative'.
When the boolean argument is set as true, the sigmoid function calculates the derivative of x.
The derivative of x is required when calculating error or performing back-propagation.
The sigmoid function runs in every single neuron.
The sigmoid function feeds forward the data by converting the numeric matrices to probabilities.
To implement the logistic sigmoid function using numpy, we use the mathematical formula: $\sigma(x) = \frac{1}{1+e^{-x}}$
Backpropagation:
Method to make the network better.
Mathematically, we need to compute the derivative of the activation function.
If the sigmoid function is expressed as $\sigma(x) = \frac{1}{1+e^{-x}}$,
then its first derivative can be expressed as $\sigma'(x) = \sigma(x)\,(1-\sigma(x))$.
Forwardpropagation and backpropagation functions using sigmoid activation:
Implementing sigmoid function using math library in python:
End of explanation
xmin= -10
xmax = 10
ymin = -0.1
ymax = 1.1
step_size = 0.01
x = list(np.arange(xmin, xmax, step_size))
y = []
for i in x:
y_i = sigmoid(i)
y.append(y_i)
axis = [xmin, xmax, ymin, ymax]
plt.axhline(y=0.5, color='C2', alpha=0.5)
plt.axvline(x=0, color='C2', alpha=0.5)
plt.axis(axis)
plt.plot(x, y, linewidth=2.0)
Explanation: Plotting sigmoid activation function:
End of explanation
import numpy as np
x = np.asarray([[0,0],
[1,1],
[1,0],
[0,1]])
print (x.shape)
x.shape[1]
x.shape[0]
x_ = (1 , 2, 3, 4)
len(x_)
for i in range(len(x_)):
print ("This is the {} element in the tuple".format(i))
print ("The value is: {}".format(x_[i]))
Explanation: Create an input data matrix as numpy array:
Matrix with n number of dimensions.
End of explanation
y = np.asarray([[0],
[0],
[1],
[1]])
y.shape
Explanation: Define the output data matrix as numpy array:
End of explanation
seed = 1
np.random.seed(seed)
Explanation: Create a random number seed:
Random number seeding is useful for producing reproducible results.
End of explanation
bias_val = 1
output_dim = 1
input_shape_1 = x.shape[1]
input_shape_2 = x.shape[0]
hidden_layer_size = 5
synapse_0 = 2*np.random.random((input_shape_1, hidden_layer_size)) - bias_val
synapse_1 = 2*np.random.random((hidden_layer_size, output_dim)) - bias_val
loss_col = []
print (synapse_0.shape)
synapse_0
print (synapse_1.shape)
Explanation: Create a synapse matrix:
A function applied to the synapses.
For the first synapse, a weights matrix of shape input_shape_1 x hidden_layer_size is created.
For the second synapse, a weights matrix of shape hidden_layer_size x output_dim is created.
This function also introduces the first hyper-parameter in neural network tuning called 'bias_val', which is the bias value for the synaptic function.
End of explanation
layer_0 = x
Explanation: Implement a single forward pass of the XOR input table
Create the input layer
End of explanation
bias_val = 1
layer_1 = sigmoid(np.dot(layer_0, synapse_0) - bias_val)
layer_1.shape
layer_1
Explanation: Create the activation function for layer 1
End of explanation
layer_2 = sigmoid(np.dot(layer_1, synapse_1) - bias_val)
layer_2.shape
layer_2
Explanation: Create an activation function for layer 2
End of explanation
outputLoss_derivative = (layer_2 - y)
outputLoss_derivative
layer_2_delta = (outputLoss_derivative*sigmoid(layer_2,derivative=True))
layer_2_delta
layer_1_error = (layer_2_delta.dot(synapse_1.T))
layer_1_error
layer_1_delta=layer_1_error*sigmoid(layer_1,derivative=True)
layer_1_delta
Explanation: Part 01b -- Backpropagation
This backpropagation example implements the logistic (sigmoid) activation with a squared-error loss:
$ \frac{\partial E}{\partial o_j} = \frac{\partial }{\partial o_j} \frac{1}{2}(t-y)^2 = y-t$
and computes layer delta using:
$\Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}} $
In this implementation, learning rate ($\eta$) = 1
Read more by following the backpropagation link above.
Implement a single backprop pass
End of explanation
synapse_1 += layer_1.T.dot(layer_2_delta)
synapse_1.shape
synapse_0 += layer_0.T.dot(layer_1_delta)
synapse_0
Explanation: Updating the weights/synapses of the neural network
End of explanation
training_steps = 50000
update_freq = 10
input_data = x
output_data = y
bias_val_1 = 1e-2
bias_val_2 = 10
learning_rate = 0.1
for t in range(training_steps):
# Creating the layers of the neural network:
layer_0 = input_data
layer_1 = sigmoid(np.dot(layer_0, synapse_0)+bias_val_1)
layer_2 = sigmoid(np.dot(layer_1, synapse_1)+bias_val_2)
# Backpropagation:
outputLoss_derivative = output_data - layer_2
loss_col.append(np.mean(np.abs(outputLoss_derivative)))
if ((t*update_freq) % training_steps == 0):
print ('Training step :' + str(t))
print ('Prediction error during training :' + str(np.mean(np.abs(outputLoss_derivative))))
    # Layer-wise delta functions:
layer_2_delta = (learning_rate*outputLoss_derivative*sigmoid(layer_2, derivative = True))
    layer_1_error = layer_2_delta.dot(synapse_1.T) # Matrix multiplication of the layer 2 delta with the transpose of the second synapse (synapse_1).
layer_1_delta = (layer_1_error*learning_rate)*(sigmoid(layer_1, derivative = True))
# Updating synapses or weights:
synapse_1 += layer_1.T.dot(layer_2_delta)
synapse_0 += layer_0.T.dot(layer_1_delta)
del layer_0
del layer_1
print ('Training completed ...')
print ('Predictions :' + str (layer_2))
plt.plot(loss_col)
plt.show()
delete_model = True
if delete_model:
try:
del loss_col
except:
pass
try:
del input_data
except:
pass
try:
del output_data
except:
pass
try:
del x
except:
pass
try:
del y
except:
pass
try:
del layer_2
except:
pass
try:
del output_data
except:
pass
try:
del synapse_0
except:
pass
try:
del synapse_1
except:
pass
import gc
gc.collect()
Explanation: Training the simple XOR gate neural network:
Note: There is no function that defines a neuron! In practice neuron is just an abstract concept to understand the probability function.
Continuously feeding the data throught the neural network.
Updating the weights of the network through backpropagation.
During the training the model becomes better and better in predicting the output values.
The layers are just matrix multiplication functions that apply the sigmoid function to the synapse matrix and the corresponding layer.
Backpropagation portion of the training is the machine learning portion of this code.
Backpropagation function reduces the prediction errors during each training step.
Synapses and weights are synonymous.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
Explanation: Part 01b -- Neural network based XOR gate using rectified linear units activation function:
End of explanation
def ReLU(x, derivative=False):
    if derivative:
        # Derivative of ReLU: 1 for positive inputs, 0 otherwise
        return np.where(x > 0, 1.0, 0.0)
    return np.maximum(x, 0)
x = list(np.arange(-6.0, 6.0, 0.1))
y = []
for i in x:
y_i = ReLU(i)
y.append(y_i)
xmin= -6
xmax = 6
ymin = 0
ymax = 1
axis = [xmin, xmax, ymin, ymax]
plt.axhline(y=0.5, color='C2', alpha=0.5)
plt.axvline(x=0, color='C2', alpha=0.5)
plt.axis(axis)
plt.plot(x, y, linewidth=2.0)
Explanation: Plotting the rectified linear units (ReLU) activation function:
End of explanation
x = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([[0],
[1],
[1],
[0]])
Explanation: Create input and output data
End of explanation
N, D_in, H, D_out = x.shape[0], x.shape[1], 30, 1
Explanation: N is batch size(sample size); D_in is input dimension; H is hidden dimension; D_out is output dimension:
End of explanation
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
learning_rate = 0.002
update_freq = 10
training_steps = 200
loss_col = []
Explanation: Randomly initialize weights:
End of explanation
for t in range(training_steps):
# Forward pass: compute predicted y
h = x.dot(w1)
h_relu = np.maximum(h, 0) # using ReLU as activate function
y_pred = h_relu.dot(w2)
# Compute and print loss
loss = np.square(y_pred - y).sum() # squared error as the loss function
loss_col.append(loss)
if ((t*update_freq) % training_steps ==0):
print ('Training step :' + str(t))
print ('Loss function during training :' + str(loss))
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y) # the last layer's error
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T) # the second layer's error
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0 # the derivate of ReLU
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
print ('Training completed ...')
print ('Predictions :' + str (y_pred))
plt.plot(loss_col)
plt.show()
delete_model = True
if delete_model:
try:
del loss_col
except:
pass
try:
del input_data
except:
pass
try:
del output_data
except:
pass
try:
del x
except:
pass
try:
del y
except:
pass
try:
del output_data
except:
pass
import gc
gc.collect()
Explanation: ReLu as the activation function and squared error as the loss function:
End of explanation
import matplotlib.pyplot as plt # pip3 install matplotlib
import numpy as np # pip3 install numpy
import sklearn # pip3 install scikit-learn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
Explanation: Part 02 -- Build a more complex neural network classifier using numpy:
Importing dependent libraries:
End of explanation
def tanh(x, derivative=False):
if (derivative == True):
return (1 - (x ** 2))
return np.tanh(x)
x = list(np.arange(-6.0, 6.0, 0.1))
y = []
for i in x:
y_i = tanh(i)
y.append(y_i)
xmin=-6
xmax = 6
ymin = -1.1
ymax = 1.1
axis = [xmin, xmax, ymin, ymax]
plt.axhline(y=0, color='C2', alpha=0.5)
plt.axvline(x=0, color='C2', alpha=0.5)
plt.axis(axis)
plt.plot(x, y, linewidth=2.0)
Explanation: Plotting hyperbolic tan (tanh) activation function:
End of explanation
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
Explanation: Display plots inline and change default figure size:
End of explanation
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
Explanation: Generate a dataset and create a plot:
End of explanation
linear_classifier = sklearn.linear_model.LogisticRegressionCV()
linear_classifier.fit(X, y)
Explanation: Train the logistic regression classifier:
The classification problem can be summarized as creating a boundary between the red and the blue dots.
End of explanation
def plot_decision_boundary(prediction_function):
# Setting minimum and maximum values for giving the plot function some padding
x_min, x_max = X[:, 0].min() - .5, \
X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, \
X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), \
np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = prediction_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plotting the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.get_cmap("Spectral"))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.get_cmap("Spectral"))
Explanation: Visualize the logistic regression classifier output:
End of explanation
plot_decision_boundary(lambda x: linear_classifier.predict(x))
plt.title("Logistic Regression")
Explanation: Plotting the decision boundary:
End of explanation
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
Explanation: Create a neural network:
End of explanation
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
Explanation: Gradient descent parameters:
End of explanation
def loss_function(model):
W1, b1, W2, b2 = model['W1'], \
model['b1'], \
model['W2'], \
model['b2']
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probabilities = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss function:
    correct_logprobs = -np.log(probabilities[range(num_examples), y])
    data_loss = np.sum(correct_logprobs)
# Adding the regulatization term to the loss function
data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
return 1./num_examples * data_loss
Explanation: Compute loss function on the dataset:
Calculating predictions using forward propagation
End of explanation
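A small optional note (my addition, not from the original tutorial): np.exp(z2) in the forward pass can overflow for large scores. A numerically safer softmax subtracts the row-wise maximum first and gives exactly the same probabilities:
def stable_softmax(z):
    # subtracting the row max leaves the softmax unchanged but avoids overflow in np.exp
    z_shifted = z - np.max(z, axis=1, keepdims=True)
    exp_scores = np.exp(z_shifted)
    return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)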
def predict(model, x):
W1, b1, W2, b2 = model['W1'], \
model['b1'], \
model['W2'], \
model['b2']
# Design a network with forward propagation
z1 = x.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
Explanation: Function that predicts the output of either 0 or 1:
End of explanation
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = {}
# Gradient descent. For each batch...
for i in range(0, num_passes):
# Forward propagation
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Backpropagation
delta3 = probs
delta3[range(num_examples), y] -= 1
dW2 = (a1.T).dot(delta3)
db2 = np.sum(delta3, axis=0, keepdims=True)
delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
dW1 = np.dot(X.T, delta2)
db1 = np.sum(delta2, axis=0)
# Add regularization terms (b1 and b2 don't have regularization terms)
dW2 += reg_lambda * W2
dW1 += reg_lambda * W1
# Gradient descent parameter update
W1 += -epsilon * dW1
b1 += -epsilon * db1
W2 += -epsilon * dW2
b2 += -epsilon * db2
# Assign new parameters to the model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print("Loss after iteration %i: %f" %(i, loss_function(model)))
return model
Explanation: This function learns parameters for the neural network and returns the model:
nn_hdim: Number of nodes in the hidden layer
num_passes: Number of passes through the training data for gradient descent
print_loss: If True, print the loss every 1000 iterations
End of explanation
model = build_model(50, print_loss=True)
Explanation: Build a model with 50-dimensional hidden layer:
End of explanation
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 50")
Explanation: Plot the decision boundary:
End of explanation
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
Explanation: Visualizing the hidden layers with varying sizes:
End of explanation
np.random.seed(0)
X, y = sklearn.datasets.make_moons(20000, noise=0.5)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
linear_classifier = sklearn.linear_model.LogisticRegressionCV()
linear_classifier.fit(X, y)
plot_decision_boundary(lambda x: linear_classifier.predict(x))
plt.title("Logistic Regression")
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
model = build_model(50, print_loss=True)
Explanation: Part 03 -- Example illustrating the importance of learning rate in hyper-parameter tuning:
In many training schemes the learning rate is taken to be a decreasing function of time.
Two forms that are commonly used are:
1) a linear function of time
2) a function that is inversely proportional to the time t
Create a noisier, more complex dataset:
End of explanation
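For reference, here is a minimal sketch of the two decay schedules mentioned above (my own illustration; this notebook itself keeps epsilon fixed). The function names and the decay_rate value are illustrative choices:
def linear_decay(initial_lr, step, total_steps):
    # learning rate falls linearly from initial_lr to 0 over the run
    return initial_lr * (1.0 - step / float(total_steps))

def inverse_time_decay(initial_lr, step, decay_rate=0.001):
    # learning rate inversely proportional to (1 + decay_rate * t)
    return initial_lr / (1.0 + decay_rate * step)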
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 50")
Explanation: Plotting output of the model that failed to learn, given a set of hyper-parameters:
End of explanation
epsilon = 1e-6 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
model = build_model(50, print_loss=True)
Explanation: Adjusting the learning rate such that the neural network re-starts learning:
End of explanation
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 50")
Explanation: Plotting the decision boundary layer generated by an improved neural network model:
End of explanation
import tensorflow as tf
import numpy as np
Explanation: Part 04 -- Building a neural network using tensorflow:
A neural network that predicts the y value given an x value.
Implemented using tensorflow, an open-source deep-learning library.
Import dependent libraries:
End of explanation
x_data = np.float32(np.random.rand(2,500))
y_data = np.dot([0.5, 0.7], x_data) + 0.6
Explanation: Create a synthetic dataset for training and generating predictions:
End of explanation
bias = tf.Variable(tf.zeros([1]))
synapses = tf.Variable(tf.random_uniform([1, 2], -1, 1))
y = tf.matmul(synapses, x_data) + bias
Explanation: Variable objects store tensors in tensorflow.
Tensorflow considers all input data tensors.
Tensors are n-dimensional arrays; scalars, vectors, and matrices are the 0-, 1- and 2-dimensional special cases.
Constructing a linear model:
End of explanation
lr = 0.01
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(lr)
Explanation: Gradient descent optimizer:
Imagine the valley with a ball.
The goal of the optimizer is to localize the ball to the lowest point in the valley.
Loss function will be reduced over the training.
Mean squared error as the loss function.
End of explanation
train = optimizer.minimize(loss)
Explanation: Training function:
In tensorflow the computation is wrapped inside a graph.
Tensorflow makes it easier to visualize the training sessions.
End of explanation
init = tf.global_variables_initializer()
Explanation: Initialize the variables for the computational graph:
End of explanation
sess = tf.Session()
sess.run(init)
Explanation: Launching the tensorflow computational graph:
End of explanation
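Optionally (an extra step I am adding, not in the original notebook), the TF1-style summary writer can dump the graph for TensorBoard; the log directory name here is arbitrary:
writer = tf.summary.FileWriter('./tf_logs', sess.graph)  # inspect with: tensorboard --logdir ./tf_logs
writer.close()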
training_steps = 60000
for step in range (0, training_steps):
sess.run(train)
if step % 1000 == 0:
print ('Current training session: ' + str(step) + str(sess.run(synapses))+ str(sess.run(bias)))
Explanation: Training the model:
End of explanation
import keras
import numpy as np
import os
from keras import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
MODEL_PATH = './XOR_gate_keras_network.h5'
! wget https://github.com/rahulremanan/python_tutorial/raw/master/Fundamentals_of_deep-learning/weights/XOR_gate_keras_network.h5 -O XOR_gate_keras_network.h5
Explanation: Part 05 -- Neural net XOR gate solver using Tensorflow and Keras
End of explanation
x = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([[0],
[1],
[1],
[0]])
Explanation: Create input and output data
End of explanation
model = Sequential()
model.add(Dense(5, activation="relu",
input_shape=(2,)))
model.add(Dense(5, activation="relu"))
model.add(Dense(1, activation="relu"))
Explanation: Create a neural network using Keras Sequential API
End of explanation
optimizer = keras.optimizers.SGD(lr=1e-4)
Explanation: Select optimizer
End of explanation
model.compile(optimizer=optimizer,
loss="binary_crossentropy",
metrics=['accuracy'])
Explanation: Compile keras model
End of explanation
if os.path.exists(MODEL_PATH):
model.load_weights(MODEL_PATH)
Explanation: Load model weights
End of explanation
model.summary()
Explanation: Summarize keras model
End of explanation
! apt-get install -y graphviz libgraphviz-dev && pip3 install pydot graphviz
from keras.utils import plot_model
import pydot
import graphviz # apt-get install -y graphviz libgraphviz-dev && pip3 install pydot graphviz
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
output_dir = './'
plot_model(model, to_file= output_dir + '/model_summary_plot.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Visualize model architecture
End of explanation
model.fit(x, y, batch_size=4,epochs=1000)
Explanation: Train model
End of explanation
model.save_weights(MODEL_PATH)
model.predict(x)
from google.colab import files
files.download(MODEL_PATH)
Explanation: Save model weights
End of explanation
import numpy as np
def sigmoid(x, derivative=False):
if (derivative == True):
return x * (1 - x)
return 1 / (1 + np.exp(-x))
def tanh(x, derivative=False):
if (derivative == True):
return (1 - (x ** 2))
return np.tanh(x)
def relu(x, derivative=False):
if (derivative == True):
for i in range(0, len(x)):
for k in range(len(x[i])):
if x[i][k] > 0:
x[i][k] = 1
else:
x[i][k] = 0
return x
for i in range(0, len(x)):
for k in range(0, len(x[i])):
if x[i][k] > 0:
pass # do nothing since it would be effectively replacing x with x
else:
x[i][k] = 0
return x
def arctan(x, derivative=False):
if (derivative == True):
return (np.cos(x) ** 2)
return np.arctan(x)
def step(x, derivative=False):
if (derivative == True):
for i in range(0, len(x)):
for k in range(len(x[i])):
if x[i][k] > 0:
x[i][k] = 0
return x
for i in range(0, len(x)):
for k in range(0, len(x[i])):
if x[i][k] > 0:
x[i][k] = 1
else:
x[i][k] = 0
return x
def squash(x, derivative=False):
if (derivative == True):
for i in range(0, len(x)):
for k in range(0, len(x[i])):
if x[i][k] > 0:
x[i][k] = (x[i][k]) / (1 + x[i][k])
else:
x[i][k] = (x[i][k]) / (1 - x[i][k])
return x
for i in range(0, len(x)):
for k in range(0, len(x[i])):
x[i][k] = (x[i][k]) / (1 + abs(x[i][k]))
return x
def gaussian(x, derivative=False):
    if (derivative == True):
        for i in range(0, len(x)):
            for k in range(0, len(x[i])):
                x[i][k] = -2 * x[i][k] * np.exp(-x[i][k] ** 2)
        return x  # return here, otherwise the loop below would overwrite the derivative
    for i in range(0, len(x)):
        for k in range(0, len(x[i])):
            x[i][k] = np.exp(-x[i][k] ** 2)
    return x
Explanation: Extra credit -- Activation functions in numpy:
End of explanation |
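As an optional follow-up (my own sketch, not part of the original cell), the element-wise loops above can be replaced with vectorised numpy operations; for example relu and step can be written with boolean masks:
def relu_vec(x, derivative=False):
    x = np.asarray(x, dtype=float)
    if derivative:
        return (x > 0).astype(float)      # 1 where the input is positive, 0 elsewhere
    return np.maximum(x, 0)

def step_vec(x, derivative=False):
    x = np.asarray(x, dtype=float)
    if derivative:
        return np.zeros_like(x)           # the step function has zero gradient almost everywhere
    return (x > 0).astype(float)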
14,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ipytest Summary
ipytest aims to make testing code in IPython notebooks easy. At its core, it offers a way to run pytest tests inside the notebook environment. It is also designed to make the transfer of the tests into proper python modules easy by supporting to use standard pytest features.
To get started install ipytest via
Step1: Execute tests
To execute test, just decorate the cells containing the tests with the %%ipytest magic
Step2: Using pytest fixtures
Common pytest features, such as fixtures and parametrize, are supported out of the box | Python Code:
import ipytest
ipytest.autoconfig()
Explanation: ipytest Summary
ipytest aims to make testing code in IPython notebooks easy. At its core, it offers a way to run pytest tests inside the notebook environment. It is also designed to make the transfer of the tests into proper python modules easy by supporting to use standard pytest features.
To get started install ipytest via:
bash
pip install -U ipytest
To use ipytest, import it and configure the notebook. In most cases, running ipytest.autoconfig() will result in reasonable defaults:
Tests can be executed with the %%ipytest magic
The pytest assert rewriting system, which gives nice assert messages, will be integrated into the notebook
If no notebook name is given, a workaround using temporary files will be used
For more control, pass the relevant arguments to ipytest.autoconfig(). For details, see the documentation in the readme.
End of explanation
%%ipytest
# define the tests
def test_my_func():
assert my_func(0) == 0
assert my_func(1) == 0
assert my_func(2) == 2
assert my_func(3) == 2
def my_func(x):
return x // 2 * 2
Explanation: Execute tests
To execute test, just decorate the cells containing the tests with the %%ipytest magic:
End of explanation
%%ipytest
import pytest
@pytest.mark.parametrize('input,expected', [
(0, 0),
(1, 0),
(2, 2),
(3, 2),
])
def test_parametrized(input, expected):
assert my_func(input) == expected
@pytest.fixture
def my_fixture():
return 42
def test_fixture(my_fixture):
assert my_fixture == 42
Explanation: Using pytest fixtures
Common pytest features, such as fixtures and parametrize, are supported out of the box:
End of explanation |
14,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Data
Step2: Split Data For Cross Validation
Step3: Standardize Feature Data | Python Code:
from sklearn import datasets
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
Explanation: Title: Preprocessing Iris Data
Slug: preprocessing_iris_data
Summary: Preprocessing iris data using scikit learn.
Date: 2016-09-21 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
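A hedged compatibility note (my addition): sklearn.cross_validation was deprecated and later removed, so on scikit-learn 0.20+ the import above will fail. The modern equivalent lives in model_selection:
from sklearn.model_selection import train_test_split  # drop-in replacement for sklearn.cross_validation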
# Load the iris data
iris = datasets.load_iris()
# Create a variable for the feature data
X = iris.data
# Create a variable for the target data
y = iris.target
Explanation: Load Data
End of explanation
# Random split the data into four new datasets, training features, training outcome, test features,
# and test outcome. Set the size of the test data to be 30% of the full dataset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
Explanation: Split Data For Cross Validation
End of explanation
# Load the standard scaler
sc = StandardScaler()
# Compute the mean and standard deviation based on the training data
sc.fit(X_train)
# Scale the training data to be of mean 0 and of unit variance
X_train_std = sc.transform(X_train)
# Scale the test data to be of mean 0 and of unit variance
X_test_std = sc.transform(X_test)
# Feature Test Data, non-standardized
X_test[0:5]
# Feature Test Data, standardized.
X_test_std[0:5]
Explanation: Standardize Feature Data
End of explanation |
14,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Hamiltonian
Step2: Software version | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
Explanation: QuTiP example: Dynamics of a Spin Chain
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
def integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver):
si = qeye(2)
sx = sigmax()
sy = sigmay()
sz = sigmaz()
sx_list = []
sy_list = []
sz_list = []
for n in range(N):
op_list = []
for m in range(N):
op_list.append(si)
op_list[n] = sx
sx_list.append(tensor(op_list))
op_list[n] = sy
sy_list.append(tensor(op_list))
op_list[n] = sz
sz_list.append(tensor(op_list))
# construct the hamiltonian
H = 0
# energy splitting terms
for n in range(N):
H += - 0.5 * h[n] * sz_list[n]
# interaction terms
for n in range(N-1):
H += - 0.5 * Jx[n] * sx_list[n] * sx_list[n+1]
H += - 0.5 * Jy[n] * sy_list[n] * sy_list[n+1]
H += - 0.5 * Jz[n] * sz_list[n] * sz_list[n+1]
# collapse operators
c_op_list = []
# spin dephasing
for n in range(N):
if gamma[n] > 0.0:
c_op_list.append(np.sqrt(gamma[n]) * sz_list[n])
# evolve and calculate expectation values
if solver == "me":
result = mesolve(H, psi0, tlist, c_op_list, sz_list)
elif solver == "mc":
ntraj = 250
result = mcsolve(H, psi0, tlist, c_op_list, sz_list, ntraj)
return result.expect
#
# set up the calculation
#
solver = "me" # use the ode solver
#solver = "mc" # use the monte-carlo solver
N = 10 # number of spins
# array of spin energy splittings and coupling strengths. here we use
# uniform parameters, but in general we don't have too
h = 1.0 * 2 * np.pi * np.ones(N)
Jz = 0.1 * 2 * np.pi * np.ones(N)
Jx = 0.1 * 2 * np.pi * np.ones(N)
Jy = 0.1 * 2 * np.pi * np.ones(N)
# dephasing rate
gamma = 0.01 * np.ones(N)
# intial state, first spin in state |1>, the rest in state |0>
psi_list = []
psi_list.append(basis(2,1))
for n in range(N-1):
psi_list.append(basis(2,0))
psi0 = tensor(psi_list)
tlist = np.linspace(0, 50, 200)
sz_expt = integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver)
fig, ax = plt.subplots(figsize=(10,6))
for n in range(N):
ax.plot(tlist, np.real(sz_expt[n]), label=r'$\langle\sigma_z^{(%d)}\rangle$'%n)
ax.legend(loc=0)
ax.set_xlabel(r'Time [ns]')
ax.set_ylabel(r'$\langle\sigma_z\rangle$')
ax.set_title(r'Dynamics of a Heisenberg spin chain');
Explanation: Hamiltonian:
$\displaystyle H = - \frac{1}{2}\sum_n^N h_n \sigma_z(n) - \frac{1}{2} \sum_n^{N-1} [ J_x^{(n)} \sigma_x(n) \sigma_x(n+1) + J_y^{(n)} \sigma_y(n) \sigma_y(n+1) +J_z^{(n)} \sigma_z(n) \sigma_z(n+1)]$
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Software version:
End of explanation |
14,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Theory and Practice of Optimal Portfolio Asset Allocation
References and Attribution
The material for this study is taken from week two of the course Introduction to Portfolio Construction and Analysis with Python.
Other sources
Step1: We build the covariance matrix.
Step2: To keep things simple, we pick only 4 industries for our portfolio.
Step3: Expected returns and covariance matrix for these assets
Step4: Let's write functions to compute the expected return and volatility of a portfolio with matrix operations, as explained above.
Step5: Suppose we allocate the 4 assets equally
Step6: We can compute the portfolio's expected return with this allocation
Step7: And its volatility as well
Step8: The Efficient Frontier with 2 Assets in Practice
To keep the portfolio allocation simple, we pick only 2 assets. We will plot risk vs return for several allocation (weight) combinations.
Step9: Now let's compute the returns of the allocation combinations above and collect them in a list.
Step10: We do the same for volatility.
Step11: Let's combine return and volatility into a dataframe so they are easy to plot.
Step12: Now let's plot volatility vs return.
Step16: Done! We have drawn the efficient frontier for 2 assets.
The Efficient Frontier for Several Assets in Practice
The outline of the steps is roughly as follows
Step19: Whoa! It works!
Capital Market Line (CML)
The Capital Market Line (CML) is the curve describing risk vs return for a portfolio that combines a risk-free asset with a risky asset. The CML shows the portfolio return rising as more of the risky asset is added to the portfolio, as in the plot below.
At the optimal allocation between the risk-free and risky assets, the portfolio attains the optimal Sharpe Ratio. This point is the tangency between the Capital Market Line and the Efficient Frontier, as shown in the figure below.
The points along the CML form the optimal risk-return curve for a portfolio that blends the risk-free asset with the risky asset.
Practice
Step20: Weaknesses of the Efficient Frontier
Although the Efficient Frontier looks very promising at first, it turns out to have a rather serious weakness that makes it barely feasible to use: the EF needs accurate expected-return values, and a small change in the expected returns causes a drastic change in the asset allocation.
Let's demonstrate this below. To keep it simple we use only two assets.
Step21: The expected returns of the two assets above are
Step22: The MSR for the two assets above is
Step23: So the optimal allocation is 75% Food and 25% Steel.
Now let's see how the allocation changes if we perturb the assets' expected returns slightly.
Step24: The allocation percentages have changed considerably, even though the change was less than 1%.
Now let's change the expected returns a bit more.
Step25: Wow! The MSR now allocates 100% to the Steel asset. Likewise, if we change it the other way round
Step26: The MSR allocates 100% to Food, even though the change was less than 2%.
In the real world, remember that the expected return is a forecast, made by an analyst, of an asset's return over the coming period. Even a 2% error would count as a very good forecast in practice, yet it produces a drastic change in allocation when we use the MSR.
That is why other allocation schemes are used instead, for example the GMV below.
Global Minimum Variance (GMV) Portfolio
The GMV portfolio is the portfolio with the lowest volatility that can be achieved by combining the available assets. On the EF plot, the GMV point is the tip of the "nose" of the EF curve, as seen below.
The advantage of GMV is that the computation only needs the covariance matrix and no expected returns, so it is free of the expected-return forecasting problem above.
Computing the GMV
Step28: Equal-Weight Allocation
To round things off, we plot the risk/return point obtained when we allocate the assets evenly. | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
# Load the returns data for the industry sectors
ind = pd.read_csv("ind30_m_vw_rets.csv", header=0, index_col=0)/100
# Convert the index to a monthly period
ind.index = pd.to_datetime(ind.index, format="%Y%m").to_period('M')
# Strip whitespace from the column names
ind.columns = ind.columns.str.strip()
# Restrict the range so it matches the MOOC
ind = ind["1996":"2000"]
# Convert the returns to annualised values. "er" is the expected return
compounded_growth = (1+ind).prod()
er = compounded_growth ** (12/ind.shape[0]) -1
print('Expected returns:')
print(er)
Explanation: Theory and Practice of Optimal Portfolio Asset Allocation
References and Attribution
The material for this study is taken from week two of the course Introduction to Portfolio Construction and Analysis with Python.
Other sources:
Efficient frontier - Investopedia
Return and Volatility of a Portfolio
Portfolio with Two Assets
Suppose we build a portfolio containing two assets, asset A and asset B. The return, volatility (here the standard deviation), and allocation (weight) of each asset are represented by R, σ, and w.
The portfolio return is then the weighted average of the individual returns:
$$ R(w_A, w_B) = w_A \times R_A + w_B \times R_B $$
For the volatility, the value depends on the correlation between the assets. If the assets are perfectly correlated, they essentially move in lockstep, so the portfolio volatility is a kind of average of the assets' volatilities. The lower the correlation, the lower the portfolio volatility, up to the point where the correlation between A and B is low enough that the portfolio volatility is even lower than the volatility of either A or B!
The volatility is computed as follows:
$$ \sigma^2(w_A, w_B) = \sigma_A^2 w_A^2 + \sigma_B^2 w_B^2 + 2 w_A w_B \sigma_A \sigma_B \rho_{A,B} $$
where $\rho_{A,B}$ is the correlation of A and B.
Portfolio with Several Assets
Generalising the formula from the previous section, if the portfolio contains more than two assets, its return is:
$$ R_p = \sum_{i=1}^{k} w_i R_i $$
And its volatility is:
$$ \sigma_p^2 = \sum_{i=1}^{k} \sum_{j=1}^{k} w_i w_j \sigma_i \sigma_j \rho_{i, j} $$
The volatility of asset i times the volatility of asset j times the correlation of assets i and j is the covariance of assets i and j, so the equation above simplifies to:
$$ \sigma_p^2 = \sum_{i=1}^{k} \sum_{j=1}^{k} w_i w_j \sigma_{i, j} $$
where $\sigma_{i,j}$ is the covariance of assets i and j. As an aside, $\sigma_{i,i}$ is the variance of asset i (because $\sigma_{i,i} = \sigma_i \sigma_i \rho_{i,i}$ and $\rho_{i,i}$ is of course 1).
Matrix Notation
The portfolio return in matrix notation:
$$ R_p = w^T R $$
And the portfolio volatility calculation simplifies to:
$$ \sigma_p^2 = w^T \Sigma w $$
where $\Sigma$ is not a summation sign but the covariance matrix.
Efficient Frontier
The efficient frontier, discovered by Nobel laureate Harry Markowitz in 1952, is the set of portfolios with the most "optimal" performance, i.e. the highest return for a given level of risk, or conversely the lowest risk for a given level of return.
The efficient frontier is often drawn as a curve like the one below.
The X axis is risk and the Y axis is return. For a given choice of assets, the combinations forming the efficient frontier are drawn as the red curve. Combinations of the assets that deviate from the efficient frontier are not good choices. For example, take point A, which represents a portfolio with a particular allocation of the assets. We would not want to choose portfolio A, because there is another portfolio (i.e. a different allocation of the same assets) that, for the same risk, gives a higher return (point B), or that, for the same return, gives a lower risk (point C).
Demo: Computing Portfolio Return and Volatility
This time we will demonstrate the calculation of portfolio return and volatility. The data used are the monthly returns of several industries/sectors from 1926 to 2018. I took this data from the course Introduction to Portfolio Construction and Analysis with Python by Vijay Vaidyanathan. The data itself originates from and is copyright of Kenneth French
End of explanation
# Covariance matrix
cov = ind.cov()
Explanation: We build the covariance matrix.
End of explanation
assets = ['Food', 'Beer', 'Smoke', 'Coal']
Explanation: To keep things simple, we pick only 4 industries for our portfolio.
End of explanation
er[assets]
cov.loc[assets, assets]
Explanation: Expected returns and covariance matrix for these assets:
End of explanation
def portfolio_return(weights, returns):
return weights.T @ returns
def portfolio_vol(weights, covmat):
return (weights.T @ covmat @ weights)**0.5
Explanation: Let's write functions to compute the expected return and volatility of a portfolio with matrix operations, as explained above.
End of explanation
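A quick numerical sanity check (my own addition, with made-up numbers) that the matrix form w^T Σ w used in portfolio_vol matches the explicit two-asset formula from the explanation above:
# hypothetical two-asset example: volatilities 10% and 20%, correlation 0.3, weights 60/40
sigma_a, sigma_b, rho = 0.10, 0.20, 0.3
w = np.array([0.6, 0.4])
cov_2 = np.array([[sigma_a**2,              rho * sigma_a * sigma_b],
                  [rho * sigma_a * sigma_b, sigma_b**2             ]])
explicit = np.sqrt((w[0]*sigma_a)**2 + (w[1]*sigma_b)**2 + 2*w[0]*w[1]*sigma_a*sigma_b*rho)
print(explicit, portfolio_vol(w, cov_2))   # the two numbers should be identical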
weights = np.repeat(1/4, 4)
weights
Explanation: Suppose we allocate the 4 assets equally:
End of explanation
portfolio_return(weights, er[assets])
Explanation: We can compute the portfolio's expected return with this allocation:
End of explanation
portfolio_vol(weights, cov.loc[assets, assets])
Explanation: And its volatility as well:
End of explanation
# Pick 2 assets
assets = ['Games', 'Fin']
# Generate allocation combinations for the two assets
N_POINTS = 20
weights = [np.array([w, 1-w]) for w in np.linspace(0, 1, N_POINTS)]
weights
Explanation: The Efficient Frontier with 2 Assets in Practice
To keep the portfolio allocation simple, we pick only 2 assets. We will plot risk vs return for several allocation (weight) combinations.
End of explanation
rets = [portfolio_return(w, er[assets]) for w in weights]
rets
Explanation: Now let's compute the returns of the allocation combinations above and collect them in a list.
End of explanation
vols = [portfolio_vol(w, cov.loc[assets,assets]) for w in weights]
vols
Explanation: We do the same for volatility.
End of explanation
ef = pd.DataFrame(data={'Return': rets,
'Volatility': vols})
ef
Explanation: Let's combine return and volatility into a dataframe so they are easy to plot.
End of explanation
ef.plot.line(x='Volatility', y='Return',
             title='Efficient Frontier with Two Assets ({} and {})'.format(assets[0], assets[1]),
figsize=(15,6), style='.-')
Explanation: Now let's plot volatility vs return.
End of explanation
from scipy.optimize import minimize
def minimize_vol(target_return, er, cov):
Returns the optimal weights that achieve the target return
given a set of expected returns and a covariance matrix
n = er.shape[0]
init_guess = np.repeat(1/n, n)
bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
# construct the constraints
weights_sum_to_1 = {'type': 'eq',
'fun': lambda weights: np.sum(weights) - 1
}
return_is_target = {'type': 'eq',
'args': (er,),
'fun': lambda weights, er: target_return - portfolio_return(weights,er)
}
weights = minimize(portfolio_vol, init_guess,
args=(cov,), method='SLSQP',
options={'disp': False},
constraints=(weights_sum_to_1,return_is_target),
bounds=bounds)
return weights.x
def optimal_weights(n_points, er, cov):
target_rs = np.linspace(er.min(), er.max(), n_points)
weights = [minimize_vol(target_return, er, cov) for target_return in target_rs]
return weights
def plot_ef(n_points, er, cov):
Plots the multi-asset efficient frontier
weights = optimal_weights(n_points, er, cov)
rets = [portfolio_return(w, er) for w in weights]
vols = [portfolio_vol(w, cov) for w in weights]
ef = pd.DataFrame({
"Returns": rets,
"Volatility": vols
})
ax = ef.plot.line(x="Volatility", y="Returns", style='.-',
label='Efficient Frontier', legend=True, figsize=(15,6))
ax.set_ylabel('Returns')
ax.set_xlim(left=0)
ax.set_ylim(bottom=0)
return ax
assets = ['Smoke', 'Fin', 'Games', 'Coal']
plot_ef(25, er[assets], cov.loc[assets, assets])
Explanation: Done! We have drawn the efficient frontier for 2 assets.
The Efficient Frontier for Several Assets in Practice
The outline of the steps is roughly as follows:
- we know the smallest portfolio return comes from putting 100% of the allocation into the asset with the lowest return, and the largest portfolio return comes from putting 100% into the asset with the highest return. These lowest and highest returns are the start and end points of our EF curve (the lowest return is the bottom end of the curve, the highest return the top end). Example:
- say the lowest return is given by asset A at 10% and the highest return by asset B at 90%.
- we then split the range between the lowest and highest return into several parts (n_points). Example:
- if we split it into 20 parts (n_points=20), we get a list of 20 elements, from the lowest return of 10% up to the highest of 90%: [10.0, 14.2, 18.4, ..., 85.8, 90.0]
- each part represents a particular target return. For each of these returns we look for the allocation (weights) that gives the lowest volatility. This is done in the minimize_vol() function below, which calls the minimize() function provided by the scipy.optimize library.
- at the end of the iteration we have, for every target return in the slices defined above, the weight combination that gives the lowest volatility.
- with those weights we can compute the portfolio volatility by calling portfolio_vol(), and its return by calling portfolio_return().
- with the returns and volatilities in hand, we can draw the efficient frontier.
This time I will simply copy-paste the code from the week-two lab of the MOOC Introduction to Portfolio Construction and Analysis with Python.
End of explanation
# Credit: Vijay Vaidyanathan
# https://www.coursera.org/learn/introduction-portfolio-construction-python
def msr(riskfree_rate, er, cov):
Returns the weights of the portfolio that gives you the maximum sharpe ratio
given the riskfree rate and expected returns and a covariance matrix
n = er.shape[0]
init_guess = np.repeat(1/n, n)
bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
# construct the constraints
weights_sum_to_1 = {'type': 'eq',
'fun': lambda weights: np.sum(weights) - 1
}
def neg_sharpe(weights, riskfree_rate, er, cov):
Returns the negative of the sharpe ratio
of the given portfolio
r = portfolio_return(weights, er)
vol = portfolio_vol(weights, cov)
return -(r - riskfree_rate)/vol
weights = minimize(neg_sharpe, init_guess,
args=(riskfree_rate, er, cov), method='SLSQP',
options={'disp': False},
constraints=(weights_sum_to_1,),
bounds=bounds)
return weights.x
def plot_cml(ax, riskfree_rate, w_msr, er, cov):
r = portfolio_return(w_msr, er)
vol = portfolio_vol(w_msr, cov)
x = [0, vol]
y = [riskfree_rate, r]
ax.plot(x, y, color='green', marker='o', label='CML',
linestyle='-', linewidth=2, markersize=10)
ax.legend()
RISKFREE_RATE = 0.10
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_msr = msr(RISKFREE_RATE, er[assets], cov.loc[assets, assets])
plot_cml(ax, RISKFREE_RATE, w_msr, er[assets], cov.loc[assets, assets])
Explanation: Whoa! It works!
Capital Market Line (CML)
The Capital Market Line (CML) is the curve describing risk vs return for a portfolio that combines a risk-free asset with a risky asset. The CML shows the portfolio return rising as more of the risky asset is added to the portfolio, as in the plot below.
At the optimal allocation between the risk-free and risky assets, the portfolio attains the optimal Sharpe Ratio. This point is the tangency between the Capital Market Line and the Efficient Frontier, as shown in the figure below.
The points along the CML form the optimal risk-return curve for a portfolio that blends the risk-free asset with the risky asset.
Practice: Finding the Maximum Sharpe Ratio (MSR) Point
The functions below find the MSR point and draw it on the EF plot.
End of explanation
assets = ['Food', 'Steel']
Explanation: Weaknesses of the Efficient Frontier
Although the Efficient Frontier looks very promising at first, it turns out to have a rather serious weakness that makes it barely feasible to use. The weakness is that the EF needs accurate expected-return values, and a small difference in the expected returns causes a drastic change in the allocation to the assets.
Let's demonstrate this below. To keep it simple we use only two assets.
End of explanation
er[assets]
Explanation: The expected returns of the two assets above are:
End of explanation
msr(RISKFREE_RATE, er[assets], cov.loc[assets, assets])
Explanation: The MSR for the two assets above is:
End of explanation
msr(RISKFREE_RATE, np.array([0.11, 0.12]), cov.loc[assets, assets])
Explanation: So the optimal allocation is 75% Food and 25% Steel.
Now let's see how the asset allocation changes if we perturb the assets' expected returns slightly.
End of explanation
msr(RISKFREE_RATE, np.array([0.10, 0.13]), cov.loc[assets, assets])
Explanation: The allocation percentages have changed considerably, even though the change was less than 1%.
Now let's change the expected returns a bit more.
End of explanation
msr(RISKFREE_RATE, np.array([0.13, 0.10]), cov.loc[assets, assets])
Explanation: Wow! The MSR now allocates 100% to the Steel asset. Likewise, if we change it the other way round:
End of explanation
# Credit: Vijay Vaidyanathan
# https://www.coursera.org/learn/introduction-portfolio-construction-python
def gmv(cov):
Returns the weights of the Global Minimum Volatility portfolio
given a covariance matrix
n = cov.shape[0]
return msr(0, np.repeat(1, n), cov)
assets = ['Smoke', 'Fin', 'Games', 'Coal']
def plot_point(ax, weights, er, cov, label, color='C1'):
r = portfolio_return(weights, er)
vol = portfolio_vol(weights, cov)
x = [vol]
y = [r]
ax.plot([vol], [r], color=color, marker='o', label=label,
linestyle='-', linewidth=2, markersize=10)
ax.legend()
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_gmv = gmv(cov.loc[assets, assets])
plot_point(ax, w_gmv, er[assets], cov.loc[assets, assets], 'GMV')
Explanation: The MSR allocates 100% to Food, even though the change was less than 2%.
In the real world, remember that the expected return is a forecast, made by an analyst, of an asset's return over the coming period. Even a 2% error would count as a very good forecast in practice, yet it produces a drastic change in allocation when we use the MSR.
That is why other allocation schemes are used instead, for example the GMV below.
Global Minimum Variance (GMV) Portfolio
The GMV portfolio is the portfolio with the lowest volatility that can be achieved by combining the available assets. On the EF plot, the GMV point is the tip of the "nose" of the EF curve, as seen below.
The advantage of GMV is that the computation only needs the covariance matrix and no expected returns, so it is free of the expected-return forecasting problem above.
Computing the GMV
End of explanation
n_assets = len(assets)
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_ew = np.repeat(1/n_assets, n_assets)
plot_point(ax, w_ew, er[assets], cov.loc[assets, assets], 'Equal weights', color='C4')
Explanation: Equal-Weight Allocation
To round things off, we plot the risk/return point obtained when we allocate the assets evenly.
End of explanation |
14,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 18
Step1: Edge cases
There are edge cases when generating tiles that are in the first or the last position. In the first position, the left upper tile does not exist, in which case, we assume it to be safe. Same for the upper right tile for the tile in the last position.
We create an accessor method to get us the correct tile even when the index is out of bounds. Here we assume that all tiles out of bounds are 'safe'.
Step2: Rules
There are four rules as specified by
Step3: Test data
Step4: Generating the 10 x 10 test data
Step5: Running against the given input for 40 rows
Step6: Part Two
How many safe tiles are there in a total of 400000 rows? | Python Code:
SAFE = '.'
TRAP = '^'
def is_safe(tile):
return tile == SAFE
def is_trap(tile):
return tile == TRAP
Explanation: Day 18: Like a Rogue
author: Harshvardhan Pandit
license: MIT
link to problem statement
As you enter this room, you hear a loud click! Some of the tiles in the floor here seem to be pressure plates for traps, and the trap you just triggered has run out of... whatever it tried to do to you. You doubt you'll be so lucky next time.
Upon closer examination, the traps and safe tiles in this room seem to follow a pattern. The tiles are arranged into rows that are all the same width; you take note of the safe tiles (.) and traps (^) in the first row (your puzzle input).
The type of tile (trapped or safe) in each row is based on the types of the tiles in the same position, and to either side of that position, in the previous row. (If either side is off either end of the row, it counts as "safe" because there isn't a trap embedded in the wall.)
For example, suppose you know the first row (with tiles marked by letters) and want to determine the next row (with tiles marked by numbers):
ABCDE
12345
The type of tile 2 is based on the types of tiles A, B, and C; the type of tile 5 is based on tiles D, E, and an imaginary "safe" tile. Let's call these three tiles from the previous row the left, center, and right tiles, respectively. Then, a new tile is a trap only in one of the following situations:
Its left and center tiles are traps, but its right tile is not.
Its center and right tiles are traps, but its left tile is not.
Only its left tile is a trap.
Only its right tile is a trap.
In any other situation, the new tile is safe.
Then, starting with the row ..^^., you can determine the next row by applying those rules to each new tile:
The leftmost character on the next row considers the left (nonexistent, so we assume "safe"), center (the first ., which means "safe"), and right (the second ., also "safe") tiles on the previous row. Because all of the trap rules require a trap in at least one of the previous three tiles, the first tile on this new row is also safe, ..
The second character on the next row considers its left (.), center (.), and right (^) tiles from the previous row. This matches the fourth rule: only the right tile is a trap. Therefore, the next tile in this new row is a trap, ^.
The third character considers .^^, which matches the second trap rule: its center and right tiles are traps, but its left tile is not. Therefore, this tile is also a trap, ^.
The last two characters in this new row match the first and third rules, respectively, and so they are both also traps, ^.
After these steps, we now know the next row of tiles in the room: .^^^^. Then, we continue on to the next row, using the same rules, and get ^^..^. After determining two new rows, our map looks like this:
..^^.
.^^^^
^^..^
Here's a larger example with ten tiles per row and ten rows:
.^^.^.^^^^
^^^...^..^
^.^^.^.^^.
..^^...^^^
.^^^^.^^.^
^^..^.^^..
^^^^..^^^.
^..^^^^.^^
.^^^..^.^^
^^.^^^..^^
In ten rows, this larger example has 38 safe tiles.
Starting with the map in your puzzle input, in a total of 40 rows (including the starting row), how many safe tiles are there?
Solution logic
This is a generation problem, where the generation of values is based on certain rules and/or inputs. In this case, the generation of a tile is based on the tiles around it in the previous row. Given the set of rules, we must generate a total of 40 rows and then count the 'safe' tiles.
Each tile to be generated is based on the three tiles above it - one directly above it, one to its left, and one to its right. Based on the combination of these three tiles, we determine whether the new tile is safe or a trap.
Notations for tiles that are safe or are traps
We use constants SAFE and TRAP to specify the type of tiles, and define functions that check the type.
End of explanation
def get_tile(row, index):
if 0 <= index < len(row):
return row[index]
return SAFE
Explanation: Edge cases
There are edge cases when generating tiles that are in the first or the last position. In the first position, the left upper tile does not exist, in which case, we assume it to be safe. Same for the upper right tile for the tile in the last position.
We create an accessor method to get us the correct tile even when the index is out of bounds. Here we assume that all tiles out of bounds are 'safe'.
End of explanation
def make_tile(previous_row, tile_index):
left = is_trap(get_tile(previous_row, tile_index - 1))
center = is_trap(get_tile(previous_row, tile_index))
right = is_trap(get_tile(previous_row, tile_index + 1))
if (left == center == (not right)) or ((not left) == center == right):
return TRAP
return SAFE
Explanation: Rules
There are four rules as specified by:
Its left and center tiles are traps, but its right tile is not.
Its center and right tiles are traps, but its left tile is not.
Only its left tile is a trap.
Only its right tile is a trap.
Which, when written in an alternate form using the functions we wrote, becomes:
1. is_trap(left) and is_trap(center) and is_safe(right)
2. is_safe(left) and is_trap(center) and is_trap(right)
3. is_trap(left) and is_safe(center) and is_safe(right)
4. is_safe(left) and is_safe(center) and is_trap(right)
A quick observation:
There are a total of four rules, and they are mirrors of each other - 1 & 4 and 2 & 3. We can exploit this condition by checking for only one of them.
Each set contains one of the two combinations - first and second are equal, but not to the third, or the inverse of this arrangement.
Therefore, we can summarize these rules as:
left == center == (not right) OR (not left) == center == right
Now whether the tiles are safe or traps, if they satisfy the given rules, then the new tile is a trap. Based on this, we create the make_tile method.
End of explanation
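An extra observation (mine, not from the original write-up): enumerating the four trap cases shows that the centre tile never matters — a new tile is a trap exactly when its upper-left and upper-right parent tiles differ. That gives an equivalent, slightly faster row generator, which will also help with Part Two later:
def make_row(previous_row):
    # a tile is a trap exactly when the tiles to its upper-left and upper-right differ
    return ''.join(
        TRAP if get_tile(previous_row, i - 1) != get_tile(previous_row, i + 1) else SAFE
        for i in range(len(previous_row))
    )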
row = '..^^.'
print(row)
next_row = ''.join((make_tile(row, index) for index in range(len(row))))
print(next_row)
next_row = ''.join((make_tile(next_row, index) for index in range(len(next_row))))
print(next_row)
Explanation: Test data
End of explanation
row = '.^^.^.^^^^'
print(row)
for _ in range(10 - 1):
row = ''.join((make_tile(row, index) for index in range(len(row))))
print(row)
Explanation: Generating the 10 x 10 test data
End of explanation
with open('../inputs/Day18.txt', 'r') as f:
input_data = f.readline().strip()
safe_tiles = input_data.count(SAFE)
row = input_data
for _ in range(40 - 1):
row = ''.join((make_tile(row, index) for index in range(len(row))))
safe_tiles += row.count(SAFE)
print('answer', safe_tiles)
Explanation: Running against the given input for 40 rows
End of explanation
safe_tiles = input_data.count(SAFE)
row = input_data
for _ in range(400000 - 1):
row = ''.join((make_tile(row, index) for index in range(len(row))))
safe_tiles += row.count(SAFE)
print('answer', safe_tiles)
Explanation: Part Two
How many safe tiles are there in a total of 400000 rows?
End of explanation |
14,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Aggregate all blast results
Step1: 2. Write annotated results out | Python Code:
import re
import pandas as pd
from glob import glob

blast_file_regex = re.compile(r"(blast[np])_vs_([a-zA-Z0-9_]+).tsv")
blast_cols = ["query_id","subject_id","pct_id","ali_len","mism",
"gap_open","q_start","q_end","s_start","s_end",
"e_value","bitscore","q_len","s_len","s_gi",
"s_taxids","s_scinames","s_names","q_cov","s_description"
]
#blast_cols = "qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore qlen slen sgi staxids sscinames scomnames qcovs stitle"
blast_hits = []
for blast_filename in glob("2_blast/*.tsv"):
tool_id,db_id = blast_file_regex.search(blast_filename).groups()
blast_hits.append( pd.read_csv(blast_filename,sep="\t",header=None,names=blast_cols) )
blast_hits[-1]["tool"] = tool_id
blast_hits[-1]["db"] = db_id
all_blast_hits = blast_hits[0]
for search_hits in blast_hits[1:]:
all_blast_hits = all_blast_hits.append(search_hits)
print(all_blast_hits.shape)
all_blast_hits.head()
Explanation: 1. Aggregate all blast results
End of explanation
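A hedged side note (my addition): DataFrame.append is deprecated in recent pandas, so the accumulation loop above can instead be written with pd.concat over the list of per-file frames; it yields the same rows:
all_blast_hits = pd.concat(blast_hits)  # equivalent to the append loop above; add ignore_index=True to reset the index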
all_blast_hits.sort_values(by=["query_id","bitscore"],ascending=False).to_csv("2_blastp_hits.tsv",sep="\t",quotechar='"',index=False)
Explanation: 2. Write annotated results out
End of explanation |
14,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cosmological Analysis with SNe (Topic Five)
Renée Hložek
Preparing for SN Science in the LSST era
Step1: Now that we have called our distance modulus once and know how to do it, we can also compute it over a range of parameters.
Step2: Now let's make a fake likelihood for the SN data. Of course this is over-simplified, we know that the errors are correlated, and that we need to account for the LC parameters too.
Step3: The first thing we notice is that we shouldn't just be taking the model spectrum at that bin, but we should be binning the theory.
<font color='red'>EXCERCISE if interested
Step4: Your discussion here.
Step5: We want to take a step in this 5-D parameter space specified by the step vector.
Step6: MCMC
We are now ready to do the MCMC. We'll define the simplest/ugliest version of the Metropolis Hastings algorithm
Step7: We don't actually want to read in the data every time.
<font color='red'>EXCERCISE if interested
Step8: Now that we've run the chain, let's analyse it to see what the constraints look like
Step9: Fisher matrix
In this case we want to compute the Fisher derivatives for a given parameter of interest, and include the errors that are simulated and then forecast the constraints on the parameters around an assumed model.
<img src="fisher.png">
Step10: In general the Cramer-Rao bound states that the Fisher matrix will always be smaller than the MCMC bound. The smaller the errorbars - the more Gaussian the contours and the more the FM contours agree with the MCMC ones! | Python Code:
%matplotlib inline
import sys, platform, os
from matplotlib import pyplot as plt
import numpy as np
import astropy as ap
import pylab as pl
# we start by setting the cosmological parameters of interest, and reading in our data
cosmoparams_orig = [70., 0.3, 0.7, -0.9, 0.2]
redshift=np.arange(0.001,1.3,0.01) # a redshift vector for plotting etc.
plot=False
root = 'large_error'
sndata = np.loadtxt('workshop_data_' + root+'.txt', unpack=True)
sndata[2]=sndata[2]
cov = np.diag(sndata[2]**2)
# We will start by defining some functions to generate Ia data (for our model computation later)
def gen_ia(cosmoparams, redshift=np.arange(0.01,1,0.1), plot=True):
'''Code to simulate the SNeIa, taking input of cosmology params, redshift vector and a plotting flag'''
from astropy.cosmology import w0waCDM
import pylab as pl
cosmo = w0waCDM(H0=cosmoparams[0], Om0=cosmoparams[1], Ode0=cosmoparams[2], w0=cosmoparams[3], wa=cosmoparams[4])
mu = cosmo.distmod(redshift).value
if plot:
pl.figure(figsize=(8,6))
pl.plot(redshift, mu, '-')
pl.xlabel(r'redshift $z$', fontsize=20)
pl.ylabel(r'$\mu(z)$', fontsize=20)
pl.show()
return mu
# To check this works we generate some theory curve to match the data
cosmoparams_orig = [70., 0.3, 0.7, -0.9, 0.2]
mulcdm = gen_ia(cosmoparams_orig, redshift, plot)
pl.figure(figsize=(8,6))
pl.xlabel(r'redshift $z$', fontsize=20)
pl.ylabel(r'$\mu(z)$', fontsize=20)
pl.errorbar(sndata[0], sndata[1],sndata[2],marker='.', color='m', linestyle='None')
pl.plot(redshift, mulcdm)
Explanation: Cosmological Analysis with SNe (Topic Five)
Renée Hložek
Preparing for SN Science in the LSST era: a kick off workshop
We are going to do a very rough example of an MCMC, using the <a href=" https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm"> Metroplis Hastings algorithm, </a> so that when you run more complicated code (eg. emcee) it makes sense intuitively! We will then do a comparison with a <a href="https://en.wikipedia.org/wiki/Fisher_information"> Fisher matrix technique </a> so we can compare them mentally.
We will make sure that we have astropy installed:
use
pip install astropy
MCMC
End of explanation
wvals = np.arange(-0.94,-0.84,0.03)
pl.figure(figsize=(8,6))
cosmoparams=list(cosmoparams_orig)
# changing only the w value for now
for wval in wvals:
cosmoparams[3] = wval
mu = gen_ia(cosmoparams, redshift, plot=False)
pl.plot(redshift, (mu-mulcdm)/mulcdm, label=r'$w_0=%s$'%wval)
pl.xlabel(r'redshift $z$', fontsize=20)
pl.ylabel(r'$\Delta \mu(z)$', fontsize=20)
leg = pl.legend(loc='best')
leg.draw_frame(False)
Explanation: Now that we have called our distance modulus once and know how to do it, we can also compute it over a range of parameters.
End of explanation
## Fake likelihood for LSST SN data
def sn_likelihood(cosmoparams, loaddata=True):
if loaddata:
# if it is the first time, load the data
data = np.loadtxt('workshop_data_' + root+'.txt', unpack=True)
redshift = data[0] # we are just assuming that the distance modulus is at the same redshift as the binned value
modelmu = gen_ia(cosmoparams, redshift, plot=False)
num_sn=len(data[0])
loglike = (data[1]-modelmu)**2/(2.*data[2]**2)
loglike=-np.sum(loglike,axis=0)
return loglike, num_sn
## Define a prior while you are sampling so you don't go to weird places (ie negative Omega_m)
def snprior(cosmoparams):
p = -np.ones(len(cosmoparams))
# Gaussian priors
#p[0] = -((cosmoparams[0])-70.)**2/(2.*5)**2
p[1] = -((cosmoparams[1])-0.3)**2/(2.*0.02)**2
#p[2] = -((cosmoparams[2])-0.7)**2/(2.*0.02)**2
    p[3] = -((cosmoparams[3])+0.9)**2/(2.*0.7)**2  # Gaussian prior on w0, centred on the fiducial value -0.9
# hard cuts
if ((cosmoparams[0]< 50) or (cosmoparams[0]> 100)):
p[0] = -3000
if (cosmoparams[1] < 0):
p[1] = -3000
if (cosmoparams[2]< 0):
p[2] =-3000
if (cosmoparams[3]< -2):
p[3] = -3000
if ((cosmoparams[4]< -2) or (cosmoparams[4]> 2)):
p[4] = -3000
pp = sum(p)
return pp
Explanation: Now let's make a fake likelihood for the SN data. Of course this is over-simplified, we know that the errors are correlated, and that we need to account for the LC parameters too.
End of explanation
# Your code here
Explanation: The first thing we notice is that we shouldn't just be taking the model spectrum at that bin, but we should be binning the theory.
<font color='red'>EXCERCISE if interested: </font> Write a module to bin the theory over the same redshift range as the binned data.
End of explanation
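One possible (unofficial) sketch for the binning exercise above, under my own simplifying assumptions: treat each data redshift as a bin centre with edges at the midpoints between neighbouring (sorted) data redshifts, evaluate the model on a fine grid inside each bin, and average:
def bin_theory(cosmoparams, zdata, nfine=20):
    # crude bin edges: midpoints between neighbouring data redshifts, clipped to stay positive
    mid = 0.5 * (zdata[1:] + zdata[:-1])
    lo = np.concatenate(([max(zdata[0] - (mid[0] - zdata[0]), 1e-4)], mid))
    hi = np.concatenate((mid, [zdata[-1] + (zdata[-1] - mid[-1])]))
    mu_binned = np.zeros(len(zdata))
    for i in range(len(zdata)):
        zfine = np.linspace(lo[i], hi[i], nfine)
        mu_binned[i] = np.mean(gen_ia(cosmoparams, zfine, plot=False))
    return mu_binned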
# Let's call the module for the spectrum we have above.
model = list(cosmoparams_orig)
loglike, num_sn = sn_likelihood(model)
print loglike, num_sn, loglike/num_sn
Explanation: Your discussion here.
End of explanation
# Using this code above, we can take a gaussian step specified by the step vector below
stepvec = np.array([0.0,0.0001, 0.02, 0.005, 0.001])
nsteps = 2
loglike = np.zeros(nsteps)
for i in range(nsteps):
if i==0:
# First step
step = list(cosmoparams_orig)
else:
# Take a Gaussian step from the previous position
step = step+np.random.randn(len(cosmoparams))*stepvec
model=step
loglike[i], num_sn = sn_likelihood(model)
print 'loglike vector =', 2*loglike
Explanation: We want to take a step in this 5-D parameter space specified by the step vector.
End of explanation
def mcmc_mh(ratln):
accept=False
r1 = np.random.rand()
# If the step is definitely better, we want to accept it.
# If it isn't necessarily better, we want to throw a random number and step if we exceed it
if np.exp(ratln) > r1:
accept=True
return accept
# Using this code above, we can take a gaussian step specified by the step vector below
if (root=='large_error'):
stepvec = np.array([0.0,0.03, 0.0, 0.005, 0.0])
else:
stepvec = np.array([0.0,0.005, 0.0, 0.001, 0.0])
paramsvec=np.array(cosmoparams_orig)
steps = 10000
loglike = np.zeros(steps)
prior = np.zeros(steps)
post = np.zeros(steps)
stepskeep = np.zeros((steps,len(paramsvec)+1))
accept_count=0
for i in range(steps):
if i==0:
step = np.array(paramsvec)
accept=True
model=list(step)
loglike[i], num_sn = sn_likelihood(model)
prior[i] = snprior(model)
post[i] = loglike[i]+prior[i]
stepskeep[i,0:len(paramsvec)] = np.array(step)
stepskeep[i,len(paramsvec)]= loglike[i]
else:
step = stepskeep[i-1,0:len(paramsvec)]+np.random.randn(len(paramsvec))*stepvec
model=list(step)
prior[i] = snprior(model)
if (prior[i]>-3000):
loglike[i], num_sn = sn_likelihood(model)
post[i] = loglike[i]+prior[i]
rat = post[i]-post[i-1]
accept = mcmc_mh(rat)
else:
accept=False
if accept:
stepskeep[i,0:len(paramsvec)] = np.array(step)
stepskeep[i,len(paramsvec)] = loglike[i]
accept_count+=1
else:
stepskeep[i,0:len(paramsvec)] = stepskeep[i-1,0:len(paramsvec)]
loglike[i] = loglike[i-1]
stepskeep[i,len(paramsvec)] = loglike[i]
if (steps%i ==0):
print 'acceptance ratio = ', accept_count/float(i), 'steps taken = ', i
np.savetxt('chain_'+root+'.txt', stepskeep, delimiter=' ', fmt='%.3e')
print 'we are done'
Explanation: MCMC
We are now ready to do the MCMC. We'll define the simplest/ugliest version of the Metropolis-Hastings algorithm:
End of explanation
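For reference, the rule implemented by mcmc_mh above is the Metropolis acceptance probability for a symmetric proposal,
$$ a = \min\!\left(1,\; e^{\,\ln P(\theta'\mid d) - \ln P(\theta\mid d)}\right), $$
i.e. a step uphill in posterior is always accepted, and a step downhill is accepted with probability equal to the posterior ratio.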
## Your code here
Explanation: We don't actually want to read in the data every time.
<font color='red'>EXERCISE if interested: </font> Change the likelihood function to only read in the data the first time it is called.
End of explanation
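One possible sketch (not the only way to do it; the name sn_likelihood_cached is illustrative, and root, gen_ia and np come from the earlier cells): cache the data on the function object so the file is read only on the first call.
def sn_likelihood_cached(cosmoparams):
    # Load the data once and keep it as an attribute of the function
    if not hasattr(sn_likelihood_cached, 'data'):
        sn_likelihood_cached.data = np.loadtxt('workshop_data_' + root + '.txt', unpack=True)
    data = sn_likelihood_cached.data
    modelmu = gen_ia(cosmoparams, data[0], plot=False)
    loglike = -np.sum((data[1] - modelmu)**2 / (2. * data[2]**2))
    return loglike, len(data[0])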
# Read in the chain
import corner
chain = np.loadtxt('chain_'+root+'.txt', unpack=True)
burn = np.int(0.5*len(chain[0,:])) # burn off some initial part of the chain
pl.figure(2)
sigma = 1.0
chain_colour = '#5AB1BB'
binnum=40
newchain = np.zeros((len(chain[0,burn:]),2))
newchain[:,0] = chain[1,burn:]
newchain[:,1] = chain[3,burn:]
fig1 = corner.corner(newchain,
color=chain_colour,smooth1d=2,smooth=2,plot_datapoints=False,levels=(1-np.exp(-0.5),1-np.exp(-2.)),
density=True,bins=binnum, labels=[r'$\Omega_m$', r'$w_0$'])
fig1.figsize=[20,20]
#rcParams["figure.figsize"] = [10,10]
pl.savefig(root+'.png')
print 'mean for om:', np.mean(chain[1,burn:]), np.std(chain[1,burn:])
print 'mean for w0:', np.mean(chain[3,burn:]), np.std(chain[3,burn:])
Explanation: Now that we've run the chain, let's analyse it to see what the constraints look like
End of explanation
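As a quick alternative summary (a sketch reusing chain and burn from the cell above): 68% credible intervals from percentiles rather than mean and standard deviation.
om_lo, om_med, om_hi = np.percentile(chain[1, burn:], [16, 50, 84])
w0_lo, w0_med, w0_hi = np.percentile(chain[3, burn:], [16, 50, 84])
print('Om = %.3f +%.3f/-%.3f' % (om_med, om_hi - om_med, om_med - om_lo))
print('w0 = %.3f +%.3f/-%.3f' % (w0_med, w0_hi - w0_med, w0_med - w0_lo))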
# We start by having a model that will change the cosmology within the Fisher matrix
def assign_cosmo(cosmo,model=[70, 0.3,0.7, -0.9, 0.2]):
import astropy as ap
from astropy.cosmology import Planck15, Flatw0waCDM
ob0=0.022
om0=model[1]
ode0 =model[2]
newcosmo = cosmo.clone(name='temp cosmo', H0=model[0], Ob0=ob0, Om0=om0, Ode0=ode0, w0=model[3], wa=model[4])
#print newcosmo.Ok0
return newcosmo
# Define code that returns the mu and the Fisher matrix
def fish_deriv_m(redshift, model, step):
"takes the model vector - for now [h0,om,ok,w0,wa], step vector (0 if not step) \
data vector and gives back the derivs and the base function value at those \
redshifts"
from astropy.cosmology import w0waCDM
from astropy import constants as const
import pylab as pl
Ob0=0.022
Om0=model[1]
Ode0 =model[2]
    cosmo = w0waCDM(model[0], Om0, Ode0, w0=model[3], wa=model[4], Ob0=Ob0)
cosmo=assign_cosmo(cosmo, model)
#print cosmo.Ok0
m = []
m_deriv = []
c = const.c.to('km/s')
base_theory = cosmo.distmod(redshift)
m = base_theory.value
step_inds = np.where(step)[0] # look for non-zero step indices
deriv = np.zeros((len(base_theory), len(model)))
if (step_inds.size==0):
        raise ValueError('No steps taken, abort')
else:
print '\n'
print 'Computing Fisher derivatives...'
for i, stepp in enumerate(step_inds):
print 'we are stepping in :', model[stepp], ' with step size', step[stepp]
cosmo = assign_cosmo(cosmo, model)
theory = np.zeros((len(base_theory),2))
for count,j in enumerate([-1,1]):
tempmodel = list(model)
tempmodel[stepp] = model[stepp] + j*step[stepp]
#print tempmodel
c = const.c.to('km/s')
cosmo = assign_cosmo(cosmo, tempmodel)
tmp = cosmo.distmod(redshift)
theory[:,count] = tmp.value
deriv[:,stepp] = (theory[:,1] - theory[:,0])/(2.*step[stepp])
m_deriv = deriv
return m, m_deriv
stepvec = np.array([0, 0.001, 0.00, 0.1, 0.0])
model = [70., 0.3, 0.7, -0.9, 0.2]
names = ['hubble', 'omega_m', 'omega_de', 'w0', 'wa']
step_inds = np.where(stepvec)[0]
fishermu, deriv = fish_deriv_m(sndata[0], model, stepvec)
pl.errorbar(sndata[0],sndata[1], sndata[2], marker='.', linestyle='None')
pl.plot(sndata[0], fishermu, marker='*', color='r', linestyle='None')
# lets plot the Fisher derivaties for interest
for i in step_inds:
pl.plot(sndata[0], deriv[:,i]/fishermu, label=names[i],marker='.', linestyle='None')
leg = pl.legend(loc='best', numpoints=1)
leg.draw_frame(False)
# We are setting up the covariance data for the Fishermatrix
cov = np.diag(sndata[2]**2)
inv_cov = np.diag(1./sndata[2]**2.)
# Initialising the Fisher Matrix
FM = np.zeros((len(step_inds), len(step_inds), len(sndata[2]) ))
# Compute the Fisher matrix
for i in range(len(step_inds)):
# loop over variables
for j in range(len(step_inds)):
# loop over variables
for k in range(len(sndata[0])):
# loop over redshifts
invcov = inv_cov[k,k]
FM[i,j,k] = np.dot(np.dot(deriv[k,step_inds[i]], invcov), deriv[k,step_inds[j]])
# sum over the redshift direction
fishmat = np.sum(FM,axis=2)
# Compute the prior matrix
prior_vec = np.array([0.1, 0.02, 0.0006, 0.2, 0.2])
priormat = np.diag(1./prior_vec[step_inds]**2.)
final_FM = fishmat + priormat
covmat = np.linalg.inv(final_FM)
sigma = np.sqrt(covmat.diagonal())
print 'Fisher matrix results'
print 'error for om:', sigma[0]
print 'error for w0:', sigma[1]
print 'MCMC results'
print 'error for om:', np.std(chain[1,:])
print 'error for w0:', np.std(chain[3,:])
Explanation: Fisher matrix
In this case we compute the Fisher derivatives of the distance modulus with respect to each parameter of interest, combine them with the simulated errors, and forecast the constraints on the parameters around an assumed fiducial model.
<img src="fisher.png">
End of explanation
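The figure is not reproduced here; as a sketch of what the code above assembles, for uncorrelated errors $\sigma_k$ on the distance moduli $\mu_k = \mu(z_k)$ and Gaussian priors $\sigma_{{\rm prior},i}$,
$$ F_{ij} = \sum_k \frac{1}{\sigma_k^2}\, \frac{\partial \mu_k}{\partial \theta_i}\, \frac{\partial \mu_k}{\partial \theta_j} + \frac{\delta_{ij}}{\sigma_{{\rm prior},i}^2}, \qquad \sigma(\theta_i) = \sqrt{\left(F^{-1}\right)_{ii}} . $$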
## Print the ratio of the MCMC errors to the Fisher errors
print np.std(chain[1,:])/sigma[0]
print np.std(chain[3,:])/sigma[1]
Explanation: In general, the Cramér-Rao bound tells us that the Fisher matrix errors are a lower bound on the true parameter uncertainties, so they should always be smaller than (or equal to) the MCMC errors. The smaller the error bars, the more Gaussian the contours and the better the Fisher matrix contours agree with the MCMC ones!
End of explanation |
14,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Development Notes
This notebook documents Daniel's attempt to re-create Evan's box-model.
LOG
1/14 12:45PM - initial setup, reading code
Step3: The model itself is a simple, one-box model tracking the volume ($V$, in m$^3$), salinity ($S$, kg), nitrogen ($N$, probably mass?) and oxygen ($O$, again probably in mass?) in an estuary.
For simplicity, we'll first whip up the model neglecting tidal inflow. In this case
Step4: With caveats, that implements the basics of the model. Now we can try to run it with some simple initial conditions. Note that we'll have to re-do the initial conditions, since we aren't tracking species densities, just species masses or molecular masses (ideally the former, but we still need to check the equations)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks', context='talk')
import numpy as np
import pandas as pd
Explanation: Development Notes
This notebook documents Daniel's attempt to re-create Evan's box-model.
LOG
1/14 12:45PM - initial setup, reading code
End of explanation
t_hours = np.linspace(0, 60., 1000) # time, in hours
def tidal_flow(t, estuary_area=1.):
    """Rate of tidal in/out-flow in m3/s as a function of
    time (hours)."""
return 2.*estuary_area*np.sin(2.*np.pi*(t / 12.45))
tides = tidal_flow(t_hours)
plt.figure(figsize=(5,1.5))
plt.plot(t_hours/24., tides)
lo, hi = plt.ylim()
plt.vlines([0, 1, 2], lo, hi, linestyle='dashed', color='k')
plt.hlines(0, t_hours[0]/24., t_hours[-1]/24., color='k')
def box_model_ode(y, t, tide_func=lambda t: 0,
river_flow_rate=0.05,
G=3., P=1., P_scale=1., V0=1e9, z=5.,
N_river=100., O_river=231.2,
S_ocean=35., N_ocean=20., O_ocean=231.2):
    """This encodes the instantaneous rate of change of the
    box model system, `y`, at a given instant in time, `t`.
Parameters
----------
y : array
The current volume, salinity, nitrogen, and ocean state
variables:
- V: m3
- S: kg
- N: mmol
- O: mmol
t : float
The current evaluation time, in hours.
tide_func : function
A function of the argument `t` (again in hours) which yields
the mass transport due to tidal inflow and outflow in m3/hr.
By convention, the function should return positive values for
inflow and negative values for outflow.
z : float
Average estuary depth, in m
river_flow_rate : float
Fraction (preferably between 0 and 0.2) of river flow per day
relative to estuary mean volume. Set to `0` to disable river
flow
V0: float
Initial (average) estuary volume
N_river, O_river : float
Nitrogen and oxygen concentration in river in mmol m-3
G : float
Gas exchange rate in m/d, between 1 and 5
P : float
System productivity relative to normal conditions (P=1); may vary
between 0.5 (cloudy) and 2.0 (bloom)
P_scale : float
Factor to scale system productivity,
S_ocean, N_ocean, O_ocean : floats
Boundary condition concentrations for S, N, O in ocean and upriver
sources. Because these are concentrations, S is kg/m3, and N and O
are mmol/m3
Returns
-------
dy_dt : array
        Derivative of the current state-time.
    """
# Un-pack current state
V, S, N, O = y[:]
# Pre-compute terms which will be used in the derivative
# calculations
# 2) Biological production minus respiration
# Note: there's clearly some sort of stoichiometry going on here, o
# need to find out what those reactions are. also, in Evan's
# production code (post-spin-up), this is scaled by the mean
# N value from the past 24 hours divided by the ocean N
# levels
J = P_scale*P*(125.*16./154.)*np.sin(2.*np.pi*(t + 0.75)/24.) # mmol/m2/day
# J /= 24 # day-1 -> h-1
# 3) Estuary avg surface exchange area
A = V0/z
# 4) Current molar concentrations of N and O (to mmol / m3)
S = S/V
N = N/V
O = O/V
# 5) Tidal source gradients, given direction of tide
tidal_flow = (V0/z)*tide_func(t)
if tidal_flow > 0:
tidal_S_contrib = tidal_flow*S_ocean
tidal_N_contrib = tidal_flow*N_ocean
tidal_O_contrib = tidal_flow*O_ocean
else:
# N/O are already in molar concentrations
tidal_S_contrib = tidal_flow*S
tidal_N_contrib = tidal_flow*N
tidal_O_contrib = tidal_flow*O
# Compute derivative terms
dV_dt = tidal_flow
dS_dt = -river_flow_rate*V0*S + tidal_S_contrib
dN_dt = -J*A - river_flow_rate*V0*(N - N_river) \
+ tidal_N_contrib
dO_dt = J*(154./16.)*A + (G/24.)*(O_river - O)*A \
- river_flow_rate*V*(O - O_river) \
+ tidal_O_contrib
# print(J, A, tidal_flow, O, O_river, dO_dt)
return np.array([dV_dt, dS_dt, dN_dt, dO_dt])
Explanation: The model itself is a simple, one-box model tracking the volume ($V$, in m$^3$), salinity ($S$, kg), nitrogen ($N$, probably mass?) and oxygen ($O$, again probably in mass?) in an estuary.
For simplicity, we'll first whip up the model neglecting tidal inflow. In this case:
There is no time-dependent change in the tidal height; the water mass (volume) in the estuary remains constant with respect to time.
$S$ is lost to due to river flow.
Local net biological productivity, $J = P - R$, is given in terms of $N$ consumed to produce $O$ and is a function of a simple 24-hour cycle (daylight).
$N$ is consumed to produce $O$ but also transported via the river.
$O$ is produced from $N$ consumption but also transported via the river and exchanged in the gas phase.
End of explanation
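Written out (a sketch of what box_model_ode above implements in the no-tide limit, with $r$ the river flow fraction, $A = V_0/z$ the surface area, and $S$, $N$, $O$ expressed as concentrations):
$$ \frac{dV}{dt} = 0, \qquad \frac{dS}{dt} = -r V_0 S, \qquad \frac{dN}{dt} = -J A - r V_0 \left(N - N_{\rm river}\right), $$
$$ \frac{dO}{dt} = \frac{154}{16} J A + \frac{G}{24}\left(O_{\rm river} - O\right) A - r V \left(O - O_{\rm river}\right). $$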
V0 = 1e9 # m3
S0 = 35. # kg/m3
N0 = 20. # mmol/m3
O0 = 231.2 # mmol/m3
N_river = 100. # mmol/m3
O_river = 231.2 # mmol/m3
y0 = np.array([V0, S0*V0, N0*V0, O0*V0])
from scipy.integrate import odeint
model_kwargs = dict(V0=V0, #tide_func=tidal_flow,
river_flow_rate=0.05, P=1.0, G=3.0,
N_river=N_river, O_river=O_river,
S_ocean=S0, N_ocean=N0, O_ocean=O0)
dt = 1.0 # hours
t0, t_end = 0., 1000, #24.*50 # hours
t_spinup = 24.*2 # hours
# Euler integration loop
out_y = np.vstack([y0, ])
ts = [t0, ]
t = t0
while t < t_end:
# Pop last state off of stack
y = out_y[-1].T
# If we're past spin-up, then average the N concentration over
# the last 24 hours to scale productivity
if t > t_spinup:
n_24hrs = int(np.ceil(24./dt))
P_scale = np.mean(out_y[-n_24hrs:, 2]/out_y[-n_24hrs:, 0])/N0
model_kwargs['P_scale'] = P_scale
# Euler step
t += dt
new_y = y + dt*box_model_ode(y, t, **model_kwargs)
# Correct non-physical V/S/N/O (< 0)
new_y[new_y < 0] = 0.
# Save output onto stack
out_y = np.vstack([out_y, new_y])
ts.append(t)
out = out_y[:]
ts = np.array(ts)
# Convert to DataFrame
df = pd.DataFrame(data=out, columns=['V', 'S', 'N', 'O'],
dtype=np.float32,
index=pd.Index(ts/24., name='time (days)'))
# Convert S -> kg/m3, N/O -> mmol/m3
df.S /= df.V
df.N /= df.V
df.O /= df.V
# Convert V -> percentage change relative to initial/avg
df.V = 100*(df.V - V0)/V0
df[['S', 'N', 'O']].ix[:2.].plot(subplots=True, sharex=True, ylim=0)
df[['S', 'N', 'O']].plot(subplots=True, sharex=True, ylim=0)
Explanation: With caveats, that implements the basics of the model. Now we can try to run it with some simple initial conditions. Note that we'll have to re-do the initial conditions, since we aren't tracking species densities, just species masses or molecular masses (ideally the former, but we still need to check the equations)
End of explanation |
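As an aside, odeint is imported above but the loop is hand-rolled Euler. A rough cross-check with scipy's integrator could look like the sketch below; it cannot reproduce the 24-hour P_scale feedback, so it is only expected to match the spin-up phase, and it reuses y0, model_kwargs and box_model_ode from the cells above (with tides left at their default of zero).
rhs = lambda y, t: box_model_ode(y, t, **model_kwargs)
t_check = np.linspace(0., 48., 481)  # first two days, in hours
v_check = odeint(rhs, y0, t_check)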
14,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Subset the genes based on their total number of transcripts
Step1: Look at each gene's median transcript count
Step2: Clean data matrix to be compatible with the cluster labels and identities
Currently, cells are labeled by their barcode, e.g. GCGCAACTGCTC, and genes are labeled by their chrom | Python Code:
(n_transcripts_per_gene > 1e3).sum()
n_transcripts_per_gene[n_transcripts_per_gene > 1e4]
Explanation: Subset the genes based on their total number of transcripts
End of explanation
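One way to actually carry out the subsetting (a sketch; the 1e3 threshold is arbitrary, and n_transcripts_per_gene is assumed to be the per-gene totals computed earlier, indexed by the same gene labels as the columns of table1_t):
genes_to_keep = n_transcripts_per_gene.index[n_transcripts_per_gene > 1e3]
table1_highcount = table1_t.loc[:, table1_t.columns.isin(genes_to_keep)]
table1_highcount.shape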
median_transcripts_per_gene = table1_t.median()
median_transcripts_per_gene.head()
sns.distplot(median_transcripts_per_gene)
fig = plt.gcf()
fig.savefig('median_transcripts_per_gene.png')
data = median_transcripts_per_gene
mask = data > 0
sns.distplot(data[mask])
fig = plt.gcf()
fig.savefig('median_transcripts_per_gene_greater0.png')
Explanation: Look at each gene's median transcript count
End of explanation
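Counts like these are typically heavily skewed, so if it helps, a quick log-scale view of the same distribution (reusing data and mask from the cell above) is:
sns.distplot(np.log10(data[mask]))
plt.xlabel('log10(median transcripts per gene)')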
gene_symbols = table1_t.columns.map(lambda x: x.split(':')[-1].upper())
gene_symbols.name = 'symbol'
table1_t.columns = gene_symbols
table1_t.head()
barcodes = 'r1_' + table1_t.index
barcodes.name = 'barcode'
table1_t.index = barcodes
table1_t.head()
table1_t.to_csv('expression_table1.csv')
Explanation: Clean data matrix to be compatible with the cluster labels and identities
Currently, cells are labeled by their barcode, e.g. GCGCAACTGCTC, and genes are labeled by their chrom:start-end:symbol, e.g. 6:51460434-51469894:Hnrnpa2b1. But, in the supplementary data, the genes are all uppercase, e.g. HNRNPA2B1 (which is incorrect since this is mouse data.. ) and the barcodes have r1_ prepended before the id, e.g. r1_GCGCAACTGCTC.
So we need to clean the data to be compatible with this
End of explanation |
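A quick sanity check on the relabelling could look like the sketch below; cluster_ids is a hypothetical pandas Series of cluster labels indexed by 'r1_<barcode>' strings from the supplementary data, so substitute the real table when using it.
def check_cluster_overlap(expr_df, cluster_ids):
    # Count how many cells in the expression table have a cluster assignment
    matched = expr_df.index.isin(cluster_ids.index)
    print('cells with a cluster label: %d / %d' % (matched.sum(), len(expr_df)))
    return matched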