Unnamed: 0 (int64, values 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
12,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test divergence and curl module
This document presents tests of divergence and curl module calculation using pygsf.
Preliminary settings
The modules to import for dealing with grids are
Step1: Divergence in 2D
The definition of divergence for our 2D case is
Step2: The above functions define the value of the cells, using the given x and y geographic coordinates.
geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid
Step3: Array components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is
Step4: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
Step5: vector field y-component
Step6: theoretical divergence
the theoretical divergence transfer function is
Step7: The theoretical divergence field can be created using the function expressing the analytical derivatives z_func_div
Step8: pygsf-estimated divergence
The divergence resulting from the pygsf calculation
Step9: We check whether the theoretical and the estimated divergence fields are close
Step10: Vector field parameters
Step11: geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid
Step12: Array components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is
Step13: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
Step14: vector field y-component
Step15: theoretical curl module
The theoretical curl module is a constant value
Step16: We check whether the theoretical and the estimated curl module fields are close | Python Code:
from pygsf.mathematics.arrays import *
from pygsf.spatial.rasters.geotransform import *
from pygsf.spatial.rasters.fields import *
Explanation: Test divergence and curl module
This document presents tests of divergence and curl module calculation using pygsf.
Preliminary settings
The modules to import for dealing with grids are:
End of explanation
def z_func_fx(x, y):
return 0.0001 * x * y**3
def z_func_fy(x, y):
return - 0.0002 * x**2 * y
Explanation: Divergence in 2D
The definition of divergence for our 2D case is:
\begin{align}
divergence = \nabla \cdot \vec{\mathbf{v}} & = \frac{\partial{v_x}}{\partial x} + \frac{\partial{v_y}}{\partial y}
\end{align}
Curl module in 2D
The definition of curl module in our 2D case is:
\begin{equation}
\nabla \times \vec{\mathbf{v}} = \begin{vmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
\frac{\partial }{\partial x} & \frac{\partial }{\partial y} & \frac{\partial }{\partial z} \\
{v_x} & {v_y} & 0
\end{vmatrix}
\end{equation}
so that the module of the curl is:
\begin{equation}
|curl| = \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}
\end{equation}
The implementation of the curl module calculation has been debugged using the code at [2] by Johnny Lin. Deviations from the expected theoretical values are the same for both implementations.
Vector field parameters: testing divergence
We calculate a theoretical, 2D vector field and check that the values calculated by pygsf are equal to the expected ones.
We use a modified example from p. 67 in [3].
\begin{equation}
\vec{\mathbf{v}} = 0.0001 x y^3 \vec{\mathbf{i}} - 0.0002 x^2 y \vec{\mathbf{j}} + 0 \vec{\mathbf{k}}
\end{equation}
In order to create the two grids that represent the x- and the y-components, we therefore define the following two "transfer" functions from coordinates to z values:
End of explanation
rows=100; cols=200
size_x = 10; size_y = 10
tlx = 500.0; tly = 250.0
Explanation: The above functions define the value of the cells, using the given x and y geographic coordinates.
geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid:
End of explanation
gt1 = GeoTransform(
inTopLeftX=tlx,
inTopLeftY=tly,
inPixWidth=size_x,
inPixHeight=size_y)
Explanation: Array components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is:
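For reference, with no rotation terms the standard (GDAL-style) affine geotransform maps the row index $i$ and column index $j$ to geographic coordinates as follows; this is a sketch of the general convention, and the exact parameter handling inside pygsf's GeoTransform should be checked against its documentation:
\begin{align}
x_{geo} & = x_{TL} + j \cdot s_x \\
y_{geo} & = y_{TL} - i \cdot s_y
\end{align}
where $(x_{TL}, y_{TL})$ is the grid top-left corner and $s_x$, $s_y$ are the pixel sizes.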
End of explanation
fx1 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fx)
Explanation: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
End of explanation
fy1 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fy)
Explanation: vector field y-component
End of explanation
def z_func_div(x, y):
return 0.0001 * y**3 - 0.0002 * x**2
Explanation: theoretical divergence
the theoretical divergence transfer function is:
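This expression follows directly by differentiating the field components defined above:
\begin{equation}
\nabla \cdot \vec{\mathbf{v}} = \frac{\partial}{\partial x} \left( 0.0001 \, x y^3 \right) + \frac{\partial}{\partial y} \left( -0.0002 \, x^2 y \right) = 0.0001 \, y^3 - 0.0002 \, x^2
\end{equation}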
End of explanation
theor_div = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_div)
Explanation: The theoretical divergence field can be created using the function expressing the analytical derivatives z_func_div:
End of explanation
div = divergence(
fld_x=fx1,
fld_y=fy1,
cell_size_x=size_x,
cell_size_y=size_y)
Explanation: pygsf-estimated divergence
The divergence resulting from the pygsf calculation:
End of explanation
assert np.allclose(theor_div, div)
Explanation: We check whether the theoretical and the estimated divergence fields are close:
End of explanation
def z_func_fx(x, y):
return y
def z_func_fy(x, y):
return - x
Explanation: Vector field parameters: testing curl module
We test another theoretical, 2D vector field, this time with a different grid extent and geotransform. We use the field described in example 1 in [4]:
\begin{equation}
\vec{\mathbf{v}} = y \vec{\mathbf{i}} - x \vec{\mathbf{j}} + 0 \vec{\mathbf{k}}
\end{equation}
The "transfer" functions from coordinates to z values are:
End of explanation
rows=200; cols=200
size_x = 10; size_y = 10
tlx = -1000.0; tly = 1000.0
Explanation: geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid:
End of explanation
gt1 = GeoTransform(
inTopLeftX=tlx,
inTopLeftY=tly,
inPixWidth=size_x,
inPixHeight=size_y)
Explanation: Array components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is:
End of explanation
fx2 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fx)
Explanation: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
End of explanation
fy2 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fy)
Explanation: vector field y-component
End of explanation
curl_mod = curl_module(
fld_x=fx2,
fld_y=fy2,
cell_size_x=size_x,
cell_size_y=size_y)
Explanation: theoretical curl module
The theoretical curl module is a constant value:
\begin{equation}
curl = -2
\end{equation}
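This constant follows from the curl module definition given earlier, applied to the field components $v_x = y$ and $v_y = -x$:
\begin{equation}
|curl| = \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} = \frac{\partial (-x)}{\partial x} - \frac{\partial (y)}{\partial y} = -1 - 1 = -2
\end{equation}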
pygsf-estimated module of curl
The module of the curl resulting from the pygsf calculation is:
End of explanation
assert np.allclose(-2.0, curl_mod)
Explanation: We check whether the theoretical and the estimated curl module fields are close:
End of explanation |
12,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Step1: Now, as data scientists we don't know this relationship between y and x. Rather, we have collected observations of y. These observations are bound to have some error - introduced by measurement, by the environment and so forth. This error is also called noise. Our goal is to be able to learn the relationship between x and y from an experiment in which we have collected a sample of some 400 (x,y) observations, ignoring the noise in this collected data.
Step2: Goal
Step3: How much data?
Let's see if increasing the amount of data improves the model. We're going to build two more models, now using 200 and 400 data points respectively.
Step4: Now, we've learnt 3 models using 100, 200 and 400 samples of (x,y) observations. Let's try to plot our predictions against the actual line.
Step5: We can see that as the amount of data we take increases, our estimate keeps getting better! This is an important learning - the more data that we have, the better the model performs.
Step6: Which one of the above is a good model for the data?
In the case above, we knew the real relationship between $x$ and $y$. In a real learning setting, one does not know the true relationship. How then do we know which of two models performs better? Consider the graph above with the raw data and our 3 fitted lines. Which is the better model? The above question is not easy. One idea might be to pick the model that best fits the data points, i.e. where the difference between $y$ and $\hat{y}$ (predicted y) is the least. However, this presents some problems.
Consider the next fitting exercise in which we fit a complex 5-degree polynomial and compare it to the simple linear (1-degree fit).
Step7: The blue, more complicated model fits the data better in the sense of being closer to the data points. If you try and calculate its error vs the simple linear model, you'll find it's lower [Optional | Python Code:
# import libraries
import matplotlib
import IPython
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import pylab
import seaborn as sns
import sklearn as sk
%matplotlib inline
Explanation: Supervised Learning : Population line, estimator, overfitting
Let's explore a learning exercise for a very simple case. Assume that there exist two variables $\bf{x}$ and $\bf{y}$. $\bf{x}$ is a predictor variable and $\bf{y}$ is a response variable. They are related by the very simple equation $$y = 10x+3 $$
End of explanation
# Ignore for now!
x = np.array(np.linspace(0,10,400))
y = 10*x+3
y_obs = y + .3*np.random.randint(-100, 102, size=x.size) # introducing noise - an artifact of experimentation (randint replaces the removed random_integers; upper bound is exclusive)
# Resize the numpy arrays - ignore for now
x.resize((x.size,1))
y_obs.resize((y_obs.size,1))
y.resize((y.size,1))
# Plot the data we have sampled.
plt.scatter(x,y_obs,c="g",s=6)
# Plot the true relationship between x and y we are trying to guess
plt.plot(x,y,color="red",alpha=0.4)
Explanation: Now, as data scientists we don't know this relationship between y and x. Rather, we have collected observations of y. These observations are bound to have some error - introduced by measurement, by the environment and so forth. This error is also called noise. Our goal is to be able to learn the relationship between x and y from an experiment in which we have collected a sample of some 400 (x,y) observations, ignoring the noise in this collected data.
End of explanation
# Lets make a model using the first 100 data points!
#Pick any random 100 points
ctr = np.random.randint(0,400,100)
x_train1 = x[ctr]
y_train1 = y_obs[ctr]
from sklearn.linear_model import LinearRegression
lin_mod = LinearRegression()
lin_mod.fit(x_train1,y_train1)
# See what the learned coefficients are!
print "The model for y = ax+b gives a = %f, b=%f " % (lin_mod.coef_, lin_mod.intercept_)
Explanation: Goal : Make a model that can predict unknown y value for given x value with a good deal of accuracy.
We have some reason to believe that the relationship is linear - this may be based on domain knowledge or guessing. A linear relationship is a good first guess.
We have assumed that $\bf{y=f(x)}$. Using various methods, we must make a reasonably close estimate of $f$. This is called an estimator, $\hat{f}$
Estimator = model = estimate $\hat{f}$ of the true relationship $f$ between $X$ (input) and $y$ (output)
$$\hat{y} = \hat{f}(x)$$ where $\hat{y}$ is predicted value of y. The more data that $\hat{f}$ is estimated from, the better it will be.
End of explanation
# We have taken 3 training samples - 1. the earlier sample of 100 points from the observed (x,y) 2. a sample of 200 points from the observed (x,y)
# 3. all 400 observed (x,y) points
# You can safely ignore the syntax if it is your first time reading or you are very unfamiliar with numpy/python
ctr = np.random.randint(0,200,200)
x_train2 = x[ctr]
y_train2 = y_obs[ctr]
x_train3 = x
y_train3 = y_obs
# Ignore for now !!!
y_1 = lin_mod.predict(x)
lin_mod.fit(x_train2,y_train2)
y_2 = lin_mod.predict(x)
lin_mod.fit(x,y_obs)
y_3 = lin_mod.predict(x)
Explanation: How much data?
Let's see if increasing the amount of data improves the model. We're going to build two more models, now using 200 and 400 data points respectively.
End of explanation
# Plotting the results of the linear model
# based on 100, 200 and 400 samples
real_line = plt.plot(x,y,color="red",label='actual line')
#raw = plt.scatter(x,y_obs,c="g",s=6,label='sampled data')
l1 = plt.plot(x,y_1,'--',c="blue",label='estimate from 100 samples',alpha=0.4)
l2 = plt.plot(x,y_2,'--',c="green",label='estimate from 200 samples',alpha=0.4)
l3 = plt.plot(x,y_3,'--',c="yellow",label='estimate from 400 samples',alpha=0.8)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(labels =['actual line','estimate from 100 samples','estimate from 200 samples','estimate from 400 samples'],bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Now, we've learnt 3 models using 100, 200 and 400 samples of (x,y) observations. Let's try to plot our predictions against the actual line.
End of explanation
raw = plt.scatter(x,y_obs,c="g",s=6,label='sampled data')
l1 = plt.plot(x,y_1,'--',c="blue",label='estimate from 100 samples')
l2 = plt.plot(x,y_2,'--',c="orange",label='estimate from 200 samples')
l3 = plt.plot(x,y_3,'--',c="black",label='estimate from 400 samples')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(labels =['estimate from 100 points','estimate from 200 points','estimate from 400 points','sampled data'],bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: We can see that as the amount of data we take increases, our estimate keeps getting better! This is an important learning - the more data that we have, the better the model performs.
End of explanation
num = 200
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
# make a more complicated degree-5 polynomial model trained on the first 200 data points
model_crazy = make_pipeline(PolynomialFeatures(5), Ridge())
model_crazy.fit(x[:num],y_obs[:num])
y_4 = model_crazy.predict(x[:num])
# See how it compares to the simple fit made earlier
plt.scatter(x[:num],y_obs[:num],c='red')
plt.plot(x[:num],y_4)
plt.plot(x[:num],y_3[:num])
plt.title('y vs x and two models')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(labels =['degree-5 polynomial model','simple linear model','actual data'],bbox_to_anchor=(1.05, 1), loc=1, borderaxespad=0.)
plt.savefig('woohoo.png')
Explanation: Which one of the above is a good model for the data?
In the case above, we knew the real relationship between $x$ and $y$. In a real learning setting, one does not know the true relationship. How then do we know which of two models performs better? Consider the graph above with the raw data and our 3 fitted lines. Which is the better model? The above question is not easy. One idea might be to pick the model that best fits the data points, i.e. where the difference between $y$ and $\hat{y}$ (predicted y) is the least. However, this presents some problems.
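A common way to quantify this difference over the $n$ observations is the mean squared error (MSE), referred to again later in this notebook:
\begin{equation}
MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
\end{equation}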
Consider the next fitting exercise in which we fit a complex 5-degree polynomial and compare it to the simple linear (1-degree fit).
End of explanation
# Test the model on points 201-250, points it hasn't been trained on!
start = 201
stop = 250
y_5 = model_crazy.predict(x[start:stop])
plt.plot(x[start:stop],y_5)
plt.scatter(x[start:stop],y_obs[start:stop],c='r')
plt.plot(x[start:stop],y_3[start:stop])
plt.legend(labels =['degree-5 polynomial model','simple linear model','actual data'],bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig('not_really.png')
Explanation: The blue, more complicated model fits the data better in the sense of being closer to the data points. If you try and calculate its error vs the simple linear model, you'll find it's lower [Optional: Calculate MSE]. Are we done? We know the green line is much closer to the true form, yet by this measure the 5-degree polynomial appears to be better than the actual relationship.
To be sure, let's use the complicated model to try and predict unseen data. We'll try and see how well the blue model fits unseen data points.
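As a sketch of the optional MSE check suggested above (using the variables num, start, stop, y_obs, y_3, y_4 and y_5 defined in the surrounding cells), one could compare the two models on both the training range and the unseen points:
# Hedged sketch: compare training-range and unseen-range errors of the two models
from sklearn.metrics import mean_squared_error
print(mean_squared_error(y_obs[:num], y_4))        # degree-5 model on its training range
print(mean_squared_error(y_obs[:num], y_3[:num]))  # simple linear model on the same range
print(mean_squared_error(y_obs[start:stop], y_5))  # degree-5 model on unseen points
print(mean_squared_error(y_obs[start:stop], y_3[start:stop]))  # linear model on unseen points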
End of explanation |
12,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text classification for SMS spam detection
Outline
Step1: Training a Classifier on Text Features
We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks
Step2: We can now evaluate the classifier on the testing set. Let's first use the builtin score function, which is the rate of correct classification in the test set
Step3: We can also compute the score on the training set, to see how well we do there
Step4: Visualizing important features | Python Code:
!head "datasets/smsspam/SMSSpamCollection"
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
text[:10]
y[:10]
type(text)
type(y)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was replaced by model_selection in newer releases
text_train, text_test, y_train, y_test = train_test_split(text, y)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names()[:20])
print(vectorizer.get_feature_names()[3000:3020])
print(X_train.shape)
print(X_test.shape)
Explanation: Text classification for SMS spam detection
Outline:
- Feature extraction using bag-of-words
- train a binary classifier spam / not spam
- evaluate on test set
End of explanation
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf
clf.fit(X_train, y_train)
Explanation: Training a Classifier on Text Features
We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks:
End of explanation
clf.score(X_test, y_test)
Explanation: We can now evaluate the classifier on the testing set. Let's first use the builtin score function, which is the rate of correct classification in the test set:
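For a classifier, this score is the accuracy; a minimal manual check (assuming the clf and test split defined above) could look like:
import numpy as np
predictions = clf.predict(X_test)
print(np.mean(predictions == np.array(y_test)))  # should match clf.score(X_test, y_test)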
End of explanation
clf.score(X_train, y_train)
Explanation: We can also compute the score on the training set, to see how well we do there:
End of explanation
import numpy as np
import matplotlib.pyplot as plt

def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(50), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 51), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(clf, vectorizer.get_feature_names())
vectorizer = CountVectorizer(min_df=2)
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
visualize_coefficients(clf, vectorizer.get_feature_names())
Explanation: Visualizing important features
End of explanation |
12,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Introduction to scikit-learn
Scikit-learn is a machine learning library in Python.
Scikit-learn is the first of several machine learning libraries we will explore in this course. It is relatively approachable, supports a wide variety of traditional machine learning models, and is ubiquitous in the world of data science.
Datasets
Scikit-learn contains methods for loading, fetching, and making (generating) data. The methods for doing this all fall under the datasets subpackage. Most of the functions in this package have load, fetch, or make in the name to let you know what the method is doing under the hood.
Loading functions bring static datasets into your program. The data comes pre-packaged with scikit-learn, so no network access is required.
Fetching functions also bring static datasets into your program. However, the data is pulled from the internet, so if you don't have network access, these functions might fail.
Generating functions make dynamic datasets based on some equation.
These pre-packaged dataset functions exist for many popular datasets, such as the MNIST digits dataset and the Iris flower dataset. The generation functions reference classic dataset "shape" formations such as moons and swiss rolls. These datasets are perfect for getting familiar with machine learning.
Loading
Let us first look at an example of loading data. We will load the iris flowers dataset using the load_iris function.
Step2: That's a lot to take in. Let's examine this loaded data a little more closely. First we'll see what data type this dataset is
Step3: sklearn.utils.Bunch is a type that you'll see quite often when working with datasets built into scikit-learn. It is a dictionary-like container for feature and target data within a dataset.
You won't find much documentation about Bunch objects because they are not really meant for usage beyond containing data native to scikit-learn.
Let's look at the attributes of the iris dataset
Step4: DESCR is a description of the dataset.
Step5: filename is the name of the source file where the data is stored.
Step6: feature_names is the name of the feature columns.
Step7: target_names, despite the name, is not the names of the target columns. There is only one column of targets.
Instead, target_names is the human-readable names of the classes in the target list within the bunch. In this case,target_names is the names of the three species of iris in this dataset.
Step8: We can now examine target and see that it contains zeros, ones, and twos. These correspond to the target names 'setosa', 'versicolor', and 'virginica'.
Step9: Last, we'll look at the data within the bunch. The data is an array of arrays. Each sub-array contains four values. These values match up with the feature_names. The first item in each sub-array is 'sepal length (cm)', the next is 'sepal width (cm)', and so on.
Step10: The number of target values should always equal the number of rows in the data.
Step11: Bunch objects are an adequate container for data. They can be used directly to feed models. However, Bunch objects are not very good for analyzing and manipulating your data.
In this course, we will typically convert Bunch objects into Pandas DataFrame objects to make analysis, data cleaning, visualization, and train/test splitting easier.
To do this, we will take the matrix of feature data and append the target data to it to create a single matrix of data. We also take the list of feature names and append the word 'species' to represent the target classes in the matrix.
Step12: You might notice that the integer representation of species got converted to a floating point number along the way. We can change that back.
Step13: Exercise 1
Load the Boston house price dataset into a Pandas DataFrame. Append the target values to the last column of the DataFrame called boston_df. Name the target column 'PRICE'.
Student Solution
Step14: Fetching
Fetching is similar to loading. Scikit-learn will first see if it can find the dataset locally, and, if so, it will simply load the data. Otherwise, it will attempt to pull the data from the internet.
We can see fetching in action with the fetch_california_housing function below.
Step15: The dataset is once again a Bunch.
If you follow the link to the fetch_california_housing documentation, you notice that the dataset is a regression dataset as opposed to the iris dataset, which was a classification dataset.
We can see the difference in the dataset by checking out the attributes of the Bunch.
Step16: We see that four of the attributes that we expect are present, but 'target_names' is missing. This is because our target is now a continuous variable (home price) and not a discrete value (iris species).
Step17: Converting a Bunch of regression data to a DataFrame is no different than for a Bunch of classification data.
Step18: Generating
In the example datasets we've seen so far in this Colab, the data is static and loaded from a file. Sometimes it makes more sense to generate a dataset. For this, we can use one of the many generator functions.
make_regression is a generator that creates a dataset with an underlying regression that you can then attempt to discover using various machine learning models.
In the example below, we create a dataset with 10 data points. For the sake of visualization, we have only one feature per datapoint, but we could ask for more.
The return values are the $X$ and $y$ values for the regression. $X$ is a matrix of features. $y$ is a list of targets.
Since a generator uses randomness to generate data, we are going to set a random_state in this Colab for reproducibility. This ensures we get the same result every time we run the code. You won't do this in your production code.
Step19: We can use a visualization library to plot the regression data.
Step20: That data appears to have a very linear pattern!
If we want to make it more realistic (non-linear), we can add some noise during data generation.
Remember that random_state is for reproducibility only. Don't use this in your code unless you have a good reason to.
Step21: There are dozens of dataset loaders and generators in the scikit-learn datasets package. When you want to play with a new machine learning algorithm, they are a great source of data for getting started.
Exercise 2
Search the scikit-learn datasets documentation and find a function to make a "Moons" dataset. Create a dataset with 75 samples. Use a random state of 42 and a noise of 0.08. Store the $X$ return value in a variable called features and the $y$ return value in a variable called targets.
Student Solution
Step22: Exercise 3
In Exercise Two, you created a "moons" dataset. In that dataset, the features are $(x,y)$-coordinates that can be graphed in a scatterplot. The targets are zeros and ones that represent a binary classification.
Use matplotlib's scatter method to visualize the data as a scatterplot. Use the c argument to make the dots for each class a different color.
Student Solution
Step23: Models
Machine learning involves training a model to gain insight and predictive power from a dataset. Scikit-learn has support for many different types of models, ranging from classic algebraic models to more modern deep learning models.
Throughout the remainder of this course, you will learn about many of these models in much more depth. This section will walk you through some of the overarching concepts across all models.
Estimators
Most of the models in scikit-learn are considered estimators. An estimator is expected to implement two methods
Step24: At this point, don't worry too much about the details of what LinearRegression is doing. There is a deep-dive into regression problems coming up soon.
For now, just note the fit/predict pattern for training estimators, and know that you'll see it throughout our adventures with scikit-learn.
Transformers
In practice, it is rare that you will get perfectly clean data that is ready to feed into your model for training. Most of the time, you will need to perform some type of cleaning on the data first.
You've had some hands-on experience doing this in our Pandas Colabs. Scikit-learn can also be used to perform some data preprocessing.
Transformers are spread about within the scikit-learn library. Some are in the preprocessing module while others are in more specialized packages like compose, feature_extraction, impute, and others.
All transformers implement the fit and transform methods. The fit method calculates parameters necessary to perform the data transformation. The transform method actually applies the transformation. There is a convenience fit_transform method that performs both fitting and transformation in one method call.
Let's see a transformer in action.
We will use the MinMaxScaler to scale our feature data between zero and one. This scales the data with a linear transform so that the minimum value becomes 0 and the maximum value becomes 1, so all values are within 0 and 1.
Looking at our feature data pre-transformation, we can see values that are below zero and above one.
Step25: We will now create a MinMaxScaler and fit it to our feature data.
Each transformer has different information that it needs in order to perform a transformation. In the case of the MinMaxScaler, the smallest and largest values in the data are needed.
Step26: You might notice that the values are stored in arrays. This is because transformers can operate on more than one feature. In this case, however, we have only one.
Next, we need to apply the transformation to our features. After the transformation, we can now see that all of the features fall between the range of zero to one. Moreover, you might notice that the minimum and maximum value in the untransformed features array correspond to the 0 and 1 in the transformed array, respectively.
Step27: Pipelines
A pipeline is simply a series of transformers, often with an estimator at the end.
In the example below, we use a Pipeline class to perform min-max scaling of our feature data and then train a linear regression model using the scaled features.
Step28: Metrics
So far we have seen ways that scikit-learn can help you get data, modify that data, train a model, and finally, make predictions. But how do we know how good these predictions are?
Scikit-learn also comes with many functions for measuring model performance in the metrics package. Later in this course, you will learn about different ways to measure the performance of regression and classification models, as well as tradeoffs between the different metrics.
We can use the mean_squared_error function to find the mean squared error (MSE) between the target values that we used to train our linear regression model and the predicted values.
Step29: In this case, the MSE value alone doesn't have much meaning. Since the data that we fit the regression to isn't related to any real-world metrics, the MSE is hard to interpret alone.
As we learn more about machine learning and begin training models on real data, you'll learn how to interpret MSE and other metrics in the context of the data being analyzed and the problem being solved.
There are also metrics that come with each estimator class. These metrics can be extracted using the score method.
The regression class we created earlier can be scored, as can the pipeline.
Step30: The return value of the score method depends on the estimator being used. In the case of LinearRegression, the score is the $R^2$ score, where scores closer to 1.0 are better. You can find the metric that score returns in the documentation for the given estimator you're using.
Exercise 4
Use the Pipeline class to combine a data pre-processor and an estimator.
To accomplish this | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/01_introduction_to_sklearn/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
End of explanation
from sklearn.datasets import load_iris
iris_data = load_iris()
iris_data
Explanation: Introduction to scikit-learn
Scikit-learn is a machine learning library in Python.
Scikit-learn is the first of several machine learning libraries we will explore in this course. It is relatively approachable, supports a wide variety of traditional machine learning models, and is ubiquitous in the world of data science.
Datasets
Scikit-learn contains methods for loading, fetching, and making (generating) data. The methods for doing this all fall under the datasets subpackage. Most of the functions in this package have load, fetch, or make in the name to let you know what the method is doing under the hood.
Loading functions bring static datasets into your program. The data comes pre-packaged with scikit-learn, so no network access is required.
Fetching functions also bring static datasets into your program. However, the data is pulled from the internet, so if you don't have network access, these functions might fail.
Generating functions make dynamic datasets based on some equation.
These pre-packaged dataset functions exist for many popular datasets, such as the MNIST digits dataset and the Iris flower dataset. The generation functions reference classic dataset "shape" formations such as moons and swiss rolls. These datasets are perfect for getting familiar with machine learning.
Loading
Let us first look at an example of loading data. We will load the iris flowers dataset using the load_iris function.
End of explanation
type(iris_data)
Explanation: That's a lot to take in. Let's examine this loaded data a little more closely. First we'll see what data type this dataset is:
End of explanation
dir(iris_data)
Explanation: sklearn.utils.Bunch is a type that you'll see quite often when working with datasets built into scikit-learn. It is a dictionary-like container for feature and target data within a dataset.
You won't find much documentation about Bunch objects because they are not really meant for usage beyond containing data native to scikit-learn.
Let's look at the attributes of the iris dataset:
End of explanation
print(iris_data['DESCR'])
Explanation: DESCR is a description of the dataset.
End of explanation
print(iris_data['filename'])
Explanation: filename is the name of the source file where the data is stored.
End of explanation
print(iris_data['feature_names'])
Explanation: feature_names is the name of the feature columns.
End of explanation
print(iris_data['target_names'])
Explanation: target_names, despite the name, is not the names of the target columns. There is only one column of targets.
Instead, target_names is the human-readable names of the classes in the target list within the bunch. In this case,target_names is the names of the three species of iris in this dataset.
End of explanation
print(iris_data['target'])
Explanation: We can now examine target and see that it contains zeros, ones, and twos. These correspond to the target names 'setosa', 'versicolor', and 'virginica'.
End of explanation
iris_data['data']
Explanation: Last, we'll look at the data within the bunch. The data is an array of arrays. Each sub-array contains four values. These values match up with the feature_names. The first item in each sub-array is 'sepal length (cm)', the next is 'sepal width (cm)', and so on.
End of explanation
print(len(iris_data['data']))
print(len(iris_data['target']))
Explanation: The number of target values should always equal the number of rows in the data.
End of explanation
import pandas as pd
import numpy as np
iris_df = pd.DataFrame(
data=np.append(
iris_data['data'],
np.array(iris_data['target']).reshape(len(iris_data['target']), 1),
axis=1),
columns=np.append(iris_data['feature_names'], ['species'])
)
iris_df.sample(n=10)
Explanation: Bunch objects are an adequate container for data. They can be used directly to feed models. However, Bunch objects are not very good for analyzing and manipulating your data.
In this course, we will typically convert Bunch objects into Pandas DataFrame objects to make analysis, data cleaning, visualization, and train/test splitting easier.
To do this, we will take the matrix of feature data and append the target data to it to create a single matrix of data. We also take the list of feature names and append the word 'species' to represent the target classes in the matrix.
End of explanation
iris_df['species'] = iris_df['species'].astype('int64')
iris_df.sample(n=10)
Explanation: You might notice that the integer representation of species got converted to a floating point number along the way. We can change that back.
End of explanation
# Your answer goes here
Explanation: Exercise 1
Load the Boston house price dataset into a Pandas DataFrame. Append the target values to the last column of the DataFrame called boston_df. Name the target column 'PRICE'.
Student Solution
End of explanation
from sklearn.datasets import fetch_california_housing
housing_data = fetch_california_housing()
type(housing_data)
Explanation: Fetching
Fetching is similar to loading. Scikit-learn will first see if it can find the dataset locally, and, if so, it will simply load the data. Otherwise, it will attempt to pull the data from the internet.
We can see fetching in action with the fetch_california_housing function below.
End of explanation
dir(housing_data)
Explanation: The dataset is once again a Bunch.
If you follow the link to the fetch_california_housing documentation, you notice that the dataset is a regression dataset as opposed to the iris dataset, which was a classification dataset.
We can see the difference in the dataset by checking out the attributes of the Bunch.
End of explanation
print(housing_data['target'][:10])
Explanation: We see that four of the attributes that we expect are present, but 'target_names' is missing. This is because our target is now a continuous variable (home price) and not a discrete value (iris species).
End of explanation
import pandas as pd
import numpy as np
housing_df = pd.DataFrame(
data=np.append(
housing_data['data'],
np.array(housing_data['target']).reshape(len(housing_data['target']), 1),
axis=1),
columns=np.append(housing_data['feature_names'], ['price'])
)
housing_df.sample(n=10)
Explanation: Converting a Bunch of regression data to a DataFrame is no different than for a Bunch of classification data.
End of explanation
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42)
features, targets
Explanation: Generating
In the example datasets we've seen so far in this Colab, the data is static and loaded from a file. Sometimes it makes more sense to generate a dataset. For this, we can use one of the many generator functions.
make_regression is a generator that creates a dataset with an underlying regression that you can then attempt to discover using various machine learning models.
In the example below, we create a dataset with 10 data points. For the sake of visualization, we have only one feature per datapoint, but we could ask for more.
The return values are the $X$ and $y$ values for the regression. $X$ is a matrix of features. $y$ is a list of targets.
Since a generator uses randomness to generate data, we are going to set a random_state in this Colab for reproducibility. This ensures we get the same result every time we run the code. You won't do this in your production code.
End of explanation
import matplotlib.pyplot as plt
plt.plot(features, targets, 'b.')
plt.show()
Explanation: We can use a visualization library to plot the regression data.
End of explanation
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42, noise=5.0)
plt.plot(features, targets, 'b.')
plt.show()
Explanation: That data appears to have a very linear pattern!
If we want to make it more realistic (non-linear), we can add some noise during data generation.
Remember that random_state is for reproducibility only. Don't use this in your code unless you have a good reason to.
End of explanation
# Your answer goes here
Explanation: There are dozens of dataset loaders and generators in the scikit-learn datasets package. When you want to play with a new machine learning algorithm, they are a great source of data for getting started.
Exercise 2
Search the scikit-learn datasets documentation and find a function to make a "Moons" dataset. Create a dataset with 75 samples. Use a random state of 42 and a noise of 0.08. Store the $X$ return value in a variable called features and the $y$ return value in a variable called targets.
Student Solution
End of explanation
# Your answer goes here
Explanation: Exercise 3
In Exercise Two, you created a "moons" dataset. In that dataset, the features are $(x,y)$-coordinates that can be graphed in a scatterplot. The targets are zeros and ones that represent a binary classification.
Use matplotlib's scatter method to visualize the data as a scatterplot. Use the c argument to make the dots for each class a different color.
Student Solution
End of explanation
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
regression = LinearRegression()
regression.fit(features, targets)
predictions = regression.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
Explanation: Models
Machine learning involves training a model to gain insight and predictive power from a dataset. Scikit-learn has support for many different types of models, ranging from classic algebraic models to more modern deep learning models.
Throughout the remainder of this course, you will learn about many of these models in much more depth. This section will walk you through some of the overarching concepts across all models.
Estimators
Most of the models in scikit-learn are considered estimators. An estimator is expected to implement two methods: fit and predict.
fit is used to train the model. At a minimum, it is passed the feature data used to train the model. In supervised models, it is also passed the target data.
predict is used to get predictions from the model. This method is passed features and returns target predictions.
Let's see an example of this in action.
Linear regression is a simple model that you might have encountered in a statistics class in the past. The model attempts to draw a straight line through a set of data points, so the line is as close to as many points as possible.
We'll use scikit-learn's LinearRegression class to fit a line to the regression data that we generated earlier in this Colab. To do that, we simply call the fit(features, targets) method.
After fitting, we can ask the model for predictions. To do this, we use the predict(features) method.
End of explanation
features
Explanation: At this point, don't worry too much about the details of what LinearRegression is doing. There is a deep-dive into regression problems coming up soon.
For now, just note the fit/predict pattern for training estimators, and know that you'll see it throughout our adventures with scikit-learn.
Transformers
In practice, it is rare that you will get perfectly clean data that is ready to feed into your model for training. Most of the time, you will need to perform some type of cleaning on the data first.
You've had some hands-on experience doing this in our Pandas Colabs. Scikit-learn can also be used to perform some data preprocessing.
Transformers are spread about within the scikit-learn library. Some are in the preprocessing module while others are in more specialized packages like compose, feature_extraction, impute, and others.
All transformers implement the fit and transform methods. The fit method calculates parameters necessary to perform the data transformation. The transform method actually applies the transformation. There is a convenience fit_transform method that performs both fitting and transformation in one method call.
Let's see a transformer in action.
We will use the MinMaxScaler to scale our feature data between zero and one. This scales the data with a linear transform so that the minimum value becomes 0 and the maximum value becomes 1, so all values are within 0 and 1.
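Concretely, with the default feature range of (0, 1), the transform applied to each value is the standard min-max formula:
$$x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}$$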
Looking at our feature data pre-transformation, we can see values that are below zero and above one.
End of explanation
from sklearn.preprocessing import MinMaxScaler
transformer = MinMaxScaler()
transformer.fit(features)
transformer.data_min_, transformer.data_max_
Explanation: We will now create a MinMaxScaler and fit it to our feature data.
Each transformer has different information that it needs in order to perform a transformation. In the case of the MinMaxScaler, the smallest and largest values in the data are needed.
End of explanation
features = transformer.transform(features)
features
Explanation: You might notice that the values are stored in arrays. This is because transformers can operate on more than one feature. In this case, however, we have only one.
Next, we need to apply the transformation to our features. After the transformation, we can now see that all of the features fall between the range of zero to one. Moreover, you might notice that the minimum and maximum value in the untransformed features array correspond to the 0 and 1 in the transformed array, respectively.
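As mentioned earlier, the separate fit and transform calls can also be combined using the convenience fit_transform method; a minimal sketch, where X stands in for any feature matrix, would be:
scaled = MinMaxScaler().fit_transform(X)  # fit and transform in one call; X is a stand-in for a feature matrix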
End of explanation
from sklearn.pipeline import Pipeline
features, targets = make_regression(
n_samples=10, n_features=1, random_state=42, noise=5.0)
pipeline = Pipeline([
('scale', MinMaxScaler()),
('regression', LinearRegression())
])
pipeline.fit(features, targets)
predictions = pipeline.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
Explanation: Pipelines
A pipeline is simply a series of transformers, often with an estimator at the end.
In the example below, we use a Pipeline class to perform min-max scaling of our feature data and then train a linear regression model using the scaled features.
End of explanation
from sklearn.metrics import mean_squared_error
mean_squared_error(targets, predictions)
Explanation: Metrics
So far we have seen ways that scikit-learn can help you get data, modify that data, train a model, and finally, make predictions. But how do we know how good these predictions are?
Scikit-learn also comes with many functions for measuring model performance in the metrics package. Later in this course, you will learn about different ways to measure the performance of regression and classification models, as well as tradeoffs between the different metrics.
We can use the mean_squared_error function to find the mean squared error (MSE) between the target values that we used to train our linear regression model and the predicted values.
End of explanation
print(regression.score(features, targets))
print(pipeline.score(features, targets))
Explanation: In this case, the MSE value alone doesn't have much meaning. Since the data that we fit the regression to isn't related to any real-world metrics, the MSE is hard to interpret alone.
As we learn more about machine learning and begin training models on real data, you'll learn how to interpret MSE and other metrics in the context of the data being analyzed and the problem being solved.
There are also metrics that come with each estimator class. These metrics can be extracted using the score method.
The regression class we created earlier can be scored, as can the pipeline.
End of explanation
# Your answer goes here
Explanation: The return value of the score method depends on the estimator being used. In the case of LinearRegression, the score is the $R^2$ score, where scores closer to 1.0 are better. You can find the metric that score returns in the documentation for the given estimator you're using.
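For example, for the LinearRegression model above, score should agree with the r2_score function in sklearn.metrics; a quick check (assuming the regression, features, and targets variables from the earlier cells) is:
from sklearn.metrics import r2_score
print(regression.score(features, targets))
print(r2_score(targets, regression.predict(features)))  # same R^2 value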
Exercise 4
Use the Pipeline class to combine a data pre-processor and an estimator.
To accomplish this:
Find a preprocessor that uses the max absolute value for scaling.
Find a linear_model based on the Huber algorithm.
Combine this preprocessor and estimator into a pipeline.
Make a sample regression dataset with 200 samples and 1 feature. Use a random state of 85 and a noise of 5.0. Save the features in a variable called features and the targets in a variable called targets.
Fit the model.
Using the features that were created when the regression dataset was created, make predictions with the model and save them into a variable called predictions.
Plot the features and targets used to train the model on a scatterplot with blue dots.
Plot the features and predictions over the scatterplot as a red line.
Student Solution
End of explanation |
12,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Microblaze Subsystems from Jupyter
In the Base I/O overlays that accompany the PYNQ release, Microblazes are used to control peripherals attached to the various connectors. These can either be programmed with existing programs compiled externally or from within Jupyter. This notebook explains how the Microblazes can be integrated into Jupyter and Python.
The Microblaze is programmed in C as the limited RAM available (64 KB) limits what runtimes can be loaded - as an example, the MicroPython runtime requires 256 KB of code and data space. The PYNQ framework provides a mechanism to write the C code inside Jupyter, compile it, load it on to the Microblaze and then execute and interact with it.
The first stage is to load an overlay.
Step1: Now we can write some C code. The %%microblaze magic provides an environment where we can write the code and it takes a single argument - the Microblaze we wish to target this code at. This first example simply adds two numbers together and returns the result.
Step2: The functions we defined in the magic are now available for us to interact with in Python as any other function.
Step3: Data Motion
The main purpose of the Python bindings is to transfer data between the host and slave processors. For simple cases, any primitive C type can be used as function parameters and return values and Python values will be automatically converted as necessary.
Step4: Arrays can be passed in two different way. If a type other than void is provided then the data will be copied to the Microblaze and if non-const the data will be copied back as well. And iterable and modifiable object can be used as the argument in this case.
Step5: Finally we can pass a void pointer which will allow the Microblaze to directly access the memory of the host processing system for transferring large quantities of data. In Python these blocks of memory should be allocated using the pynq.allocate function and it is the responsibility of the programmer to make sure that the Python and C code agree on the types used.
Step6: Debug printing
One unique feature of the PYNQ Microblaze environment is the ability to print debug information directly on to the Jupyter or Python console using the new pyprintf function. This function acts like printf in C and format in Python, allowing a format string and variables to be passed back to Python for printing. In this release only the %d format specifier is supported, but this will increase over time.
To use pyprintf first the appropriate header needs to be included
Step7: Long running processes
So far all of the examples presented have been synchronous, with the Python code blocking until a result is available. Some applications call instead for a long-running process which is periodically queried by other functions. If a C function returns void then the Python process will resume immediately, leaving the function running on its own.
Other functions can be run while the long-running process is active but as there is no pre-emptive multithreading the persistent process will have to yield at non-timing critical points to allow other queued functions to run.
In this example we launch a simple counter process and then pull the value using a second function.
Step8: We can now start the counter going.
Step9: And interrogate its current value
Step10: There are some limitations with using pyprintf inside a persistent function in that the output will not be displayed until a subsequent function is called. If the buffer fills in the meantime this can cause the process to deadlock.
Only one persistent process can be called at once - if another is started it will block the first until it returns. If too many processes are stacked in this way a stack overflow may occur, leading to undefined results.
Creating class-like objects
In the C code, typedefs can be used to create pseudo classes in Python. If you have a typedef called my_class then any functions that begin with my_class_ are assumed to be associated with it. If one of those functions takes my_class as the first argument it is taken to be equivalent to self. Note that the typedef can only ultimately refer to a primitive type. The following example does some basic modular arithmetic base 53 using this idiom.
Step11: We can now create instances using our create function and call the add method on the returned object. The underlying value of the typedef instance can be retrieved from the ._val attribute. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay('base.bit')
Explanation: Programming Microblaze Subsystems from Jupyter
In the Base I/O overlays that accompany the PYNQ release, Microblazes are used to control peripherals attached to the various connectors. These can either be programmed with existing programs compiled externally or from within Jupyter. This notebook explains how the Microblazes can be integrated into Jupyter and Python.
The Microblaze is programmed in C as the limited RAM available (64 KB) limits what runtimes can be loaded - as an example, the MicroPython runtime requires 256 KB of code and data space. The PYNQ framework provides a mechanism to write the C code inside Jupyter, compile it, load it on to the Microblaze and then execute and interact with it.
The first stage is to load an overlay.
End of explanation
%%microblaze base.PMODA
int add(int a, int b) {
return a + b;
}
Explanation: Now we can write some C code. The %%microblaze magic provides an environment where we can write the code and it takes a single argument - the Microblaze we wish to target this code at. This first example simply adds two numbers together and returns the result.
End of explanation
add(4,6)
Explanation: The functions we defined in the magic are now available for us to interact with in Python as any other function.
End of explanation
%%microblaze base.PMODA
float arg_passing(float a, char b, unsigned int c) {
return a + b + c;
}
arg_passing(1, 2, 3)
Explanation: Data Motion
The main purpose of the Python bindings is to transfer data between the host and slave processors. For simple cases, any primitive C type can be used as function parameters and return values and Python values will be automatically converted as necessary.
End of explanation
%%microblaze base.PMODA
int culm_sum(int* val, int len) {
int sum = 0;
for (int i = 0; i < len; ++i) {
sum += val[i];
val[i] = sum;
}
return sum;
}
numbers = [i for i in range(10)]
culm_sum(numbers, len(numbers))
print(numbers)
Explanation: Arrays can be passed in two different way. If a type other than void is provided then the data will be copied to the Microblaze and if non-const the data will be copied back as well. And iterable and modifiable object can be used as the argument in this case.
End of explanation
%%microblaze base.PMODA
long long big_sum(void* data, int len) {
int* int_data = (int*)data;
long long sum = 0;
for (int i = 0; i < len; ++i) {
sum += int_data[i];
}
return sum;
}
from pynq import allocate
buffer = allocate(shape=(1024 * 1024), dtype='i4')
buffer[:] = range(1024*1024)
big_sum(buffer, len(buffer))
Explanation: Finally we can pass a void pointer which will allow the Microblaze to directly access the memory of the host processing system for transferring large quantities of data. In Python these blocks of memory should be allocated using the pynq.allocate function and it is the responsibility of the programmer to make sure that the Python and C code agree on the types used.
End of explanation
%%microblaze base.PMODA
#include <pyprintf.h>
int debug_sum(int a, int b) {
int sum = a + b;
pyprintf("Adding %d and %d to get %d\n", a, b, sum);
return sum;
}
debug_sum(1,2)
Explanation: Debug printing
One unique feature of the PYNQ Microblaze environment is the ability to print debug information directly on to the Jupyter or Python console using the new pyprintf function. This function acts like printf and format in Python and allows a format string and variables to be passed back to Python for printing. In this release only the %d format specifier is supported, but this will increase over time.
To use pyprintf first the appropriate header needs to be included
End of explanation
%%microblaze base.PMODA
#include <yield.h>
static int counter = 0;
void start_counter() {
while (1) {
++counter;
yield();
}
}
int counter_value() {
return counter;
}
Explanation: Long running processes
So far all of the examples presented have been synchronous with the Python code, with Python blocking until a result is available. Some applications call instead for a long-running process which is periodically queried by other functions. If a C function returns void then the Python process will resume immediately, leaving the function running on its own.
Other functions can be run while the long-running process is active but as there is no pre-emptive multithreading the persistent process will have to yield at non-timing critical points to allow other queued functions to run.
In this example we launch a simple counter process and then pull the value using a second function.
End of explanation
start_counter()
Explanation: We can now start the counter going.
End of explanation
counter_value()
Explanation: And interrogate its current value
End of explanation
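As a small sketch (not part of the original notebook), the counter can be polled a few times to confirm the persistent process keeps running between calls; this only reuses counter_value() defined above and the standard time module.
import time

# Poll the persistent counter a few times; the value should keep increasing
# while the long-running process yields between increments.
for _ in range(3):
    time.sleep(0.5)
    print(counter_value())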
%%microblaze base.PMODA
typedef unsigned int mod_int;
mod_int mod_int_create(int val) { return val % 53; }
mod_int mod_int_add(mod_int lhs, int rhs) { return (lhs + rhs) % 53; }
Explanation: There are some limitations with using pyprintf inside a persistent function in that the output will not be displayed until a subsequent function is called. If the buffer fills in the meantime this can cause the process to deadlock.
Only one persistent process can be called at once - if another is started it will block the first until it returns. If too many processes are stacked in this way a stack overflow may occur, leading to undefined results.
Creating class-like objects
In the C code, typedefs can be used to create pseudo classes in Python. If you have a typedef called my_class then any functions whose names begin with my_class_ are assumed to be associated with it. If one of those functions takes my_class as its first argument it is taken to be equivalent to self. Note that the typedef can only ultimately refer to a primitive type. The following example does some basic modular arithmetic base 53 using this idiom.
End of explanation
a = mod_int_create(63)
b = a.add(4)
print(b)
print(b._val)
Explanation: We can now create instances using our create function and call the add method on the returned object. The underlying value of the typedef instance can be retrieved from the ._val attribute.
End of explanation |
12,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GIScript-开放地理空间信息处理与分析Python库
GIScript是一个开放的地理空间心处理与分析Python框架,GIS内核采用SuperMap UGC封装,集成多种开源软件,也可以使用其它的商业软件引擎。
by [email protected], 2016-05-03。
本文档介绍GIScript的安装和配置,并进行简单的运行测试,以确认安装的软件正常运行。
本教程基于Anaconda3+python3.5.1科学计算环境,请参考:http
Step1: 2、使用Python的help(...)查看库的元数据信息获得帮助。
Step2: 3、设置测试数据目录。
Step3: 4、导入数据的测试函数。
Step4: 5、运行这个测试。
Step5: (三)查看生成的数据源文件UDB。
下面使用了<font color="green">IPython的Magic操作符 !</font>,可以直接运行操作系统的Shell命令行。
Step6: <font color="red">删除生成的测试文件。注意,不要误删其它文件!</font>
如果重复运行上面的Import_Test()将会发现GIScript_Test.udb和GIScript_Test.udd文件会不断增大。
但是打开UDB文件却只有一份数据,为什么呢?
* 因为UDB文件是增量存储的,不用的存储块需要使用SQLlite的存储空间紧缩处理才能回收。
Step7: 再次查看目录,文件是否存在。 | Python Code:
from PyUGC import *
from PyUGC.Stream import UGC
from PyUGC.Base import OGDC
from PyUGC import Engine
from PyUGC import FileParser
from PyUGC import DataExchange
import datasource
Explanation: GIScript - an open Python library for geospatial information processing and analysis
GIScript is an open Python framework for geospatial information processing and analysis. Its GIS kernel wraps SuperMap UGC, it integrates a number of open-source packages, and other commercial engines can also be used.
by [email protected], 2016-05-03.
This document describes how to install and configure GIScript and runs a few simple tests to confirm that the installed software works correctly.
This tutorial is based on the Anaconda3 + Python 3.5.1 scientific computing environment; see: http://www.anaconda.org .
This notebook has been tested on Ubuntu 14.04/15.10/16.04 and runs both on a local server and on Aliyun (Alibaba Cloud) servers.
The document can also be viewed and downloaded directly on NBViewer.
(1) Installation and configuration
Installing GIScript involves <font color="blue">setting up the system libraries, the UGC runtime and the Python packages</font>; a startup script is used to load the paths of the corresponding runtime libraries in a given environment.
1. Download the GIScript support libraries:
cd /home/supermap/GISpark
git clone https://github.com/supergis/GIScriptLib.git
2. Match the UGC system library versions.
Several of the libraries GIScript was compiled against are newer than the system defaults; with the old system versions some functions cannot be found and calls fail, so these system libraries have to be redirected to the newer versions GIScript was built with. On Ubuntu the steps are:
cd ~/anaconda3/envs/GISpark/lib
mv libstdc++.so libstdc++.so.x
mv libstdc++.so.6 libstdc++.so.6.x
mv libsqlite3.so.0 libsqlite3.so.0.x
mv libsqlite3.so libsqlite3.so.x
mv libgomp.so.1.0.0 libgomp.so.1.0.0.x
mv libgomp.so.1 libgomp.so.1.x
mv libgomp.so libgomp.so.x
* You can run setup-giscript.sh under GIScriptLib/lib-giscript-x86-linux64/ to handle this automatically (adjust the paths to your own directory layout).
* Because installed software and versions differ between systems, if other shared-library conflicts remain you can use ldd *.so to inspect library dependencies and resolve them in the same way.
3. Install the Python support packages.
GIScript's Python wrapper library is stored by default in the system directory /usr/lib/python3/dist-packages/PyUGC.
When using Anaconda it lives under the corresponding env directory, e.g. [/home/supermap/Anaconda3]/envs/GISpark/lib/python3.5/site-packages.
Installation method 1: linking. Create a PyUGC symlink under [...]/python3.5/site-packages. Note that the original files must not be deleted, otherwise they can no longer be found.
ln -s -f /home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64/lib ~/anaconda3/envs/GISpark/lib/python3.5/site-packages/PyUGC
* Installation method 2: copying. * Copy lib-giscript-x86-linux64/lib (the Python UGC wrapper library) to the site-packages/PyUGC directory under Python, as follows:
cd /home/supermap/GISpark/GIScriptLib
cp -r lib-giscript-x86-linux64/lib ~/anaconda3/envs/GISpark/lib/python3.5/site-packages/PyUGC
4. Before starting Jupyter, set the load paths for the GIScript runtime libraries:
Write a script that sets the GIScript runtime shared-library paths before startup, with the following content:
```
echo "Config GIScript2016..."
# Example configuration when using the GIScript2015 development project directory:
export SUPERMAP_HOME=/home/supermap/GIScript/GIScript2015/Linux64-gcc4.9
# Configuration when using the GIScriptLib runtime libraries:
export SUPERMAP_HOME=/home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64
export LD_LIBRARY_PATH=$SUPERMAP_HOME/Bin:$LD_LIBRARY_PATH
echo "Config: LD_LIBRARY_PATH="$LD_LIBRARY_PATH
```
Put the lines above together with the Jupyter startup command into a start.sh script, as follows:
```
echo "Activate conda enviroment GISpark ..."
source activate GISpark
echo "Config GIScript 2016 for Jupyter ..."
export SUPERMAP_HOME=/home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64
export LD_LIBRARY_PATH=$SUPERMAP_HOME/bin:/usr/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH
echo "Config: LD_LIBRARY_PATH="$LD_LIBRARY_PATH
echo "Start Jupyter notebook"
jupyter notebook
```
Make start.sh executable and launch Jupyter Notebook:
sudo chmod +x start.sh
./start.sh
With the default configuration a browser opens automatically, and you can start using Jupyter Notebook and calling the GIScript libraries.
If you access it through a server, create a configuration file with `jupyter notebook --generate-config` and edit its parameters; the details are not covered here.
(2) Run a test and import some data.
1. Import GIScript's Python libraries.
End of explanation
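A small sanity check one might run at this point (a sketch, not from the original notebook): confirm that the environment variables exported by start.sh above are visible to the kernel before relying on PyUGC.
import os

# SUPERMAP_HOME and LD_LIBRARY_PATH are the variables exported in the start.sh script above.
for var in ("SUPERMAP_HOME", "LD_LIBRARY_PATH"):
    print(var, "=", os.environ.get(var, "<not set>"))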
#help(UGC)
#help(OGDC)
#help(datasource)
Explanation: 2. Use Python's help(...) to inspect the library metadata for guidance.
End of explanation
import os
basepath = os.path.join(os.getcwd(),"../data")
print("Data path: ", basepath)
file1 = basepath + u"/Shape/countries.shp"
print("Data file: ", file1)
file2 = basepath + u"/Raster/astronaut(CMYK)_32.tif"
print("Data file: ", file2)
file3 = basepath + u"/Grid/grid_Int32.grd"
print("Data file: ", file3)
datapath_out = basepath + u"/GIScript_Test.udb"
print("Output UDB: ",datapath_out)
Explanation: 3. Set the test data directory.
End of explanation
def Import_Test():
print("Export to UDB: ",datapath_out)
ds = datasource.CreateDatasource(UGC.UDB,datapath_out)
datasource.ImportVector(file1,ds)
datasource.ImportRaster(file2,ds)
datasource.ImportGrid(file3,ds)
ds.Close()
del ds
print("Finished.")
Explanation: 4. A test function that imports the data.
End of explanation
try:
Import_Test()
except Exception as ex:
print(ex)
Explanation: 5. Run the test.
End of explanation
!ls -l -h ../data/GIScript_Test.*
Explanation: (3) Inspect the generated UDB datasource file.
The cell below uses the <font color="green">IPython magic operator !</font>, which runs operating-system shell commands directly.
End of explanation
!rm ../data/GIScript_Test.*
Explanation: <font color="red">Delete the generated test files. Be careful not to delete anything else!</font>
If you run Import_Test() above repeatedly, you will notice that GIScript_Test.udb and GIScript_Test.udd keep growing.
Yet when you open the UDB file there is only one copy of the data. Why?
* Because the UDB file is stored incrementally; unused storage blocks are only reclaimed by SQLite's space-compaction (VACUUM) processing.
End of explanation
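To see the incremental growth described above, a minimal sketch (an assumption about the test data layout used here, not part of the original notebook) is to record the file sizes before and after re-running Import_Test():
import os

# Print the current size of the UDB/UDD files; run Import_Test() again and compare.
for ext in (".udb", ".udd"):
    path = "../data/GIScript_Test" + ext
    if os.path.exists(path):
        print(path, os.path.getsize(path), "bytes")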
!ls -l -h ../data/GIScript_Test.*
Explanation: Check the directory again to see whether the files still exist.
End of explanation |
12,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
同時方程式体系
『Rによる計量経済学』第10章「同時方程式体系」をPythonで実行する。
テキスト付属データセット(「k1001.csv」等)については出版社サイトよりダウンロードしてください。
また、以下の説明は本書の一部を要約したものですので、より詳しい説明は本書を参照してください。
例題10.1
次のような供給関数と需要関数を推定する。
$Q_{t} = \alpha_{0} + \alpha_{1} P_{t} + \alpha_{2} E_{t} + u_{t}$
$Q_{t} = \beta_{0} + \beta_{1} P_{t} + \beta_{2} A_{t} + v_{t}$
ただし、$Q_{t}$ は数量、$P_{t}$ は価格、$E_{t}$ は供給関数シフト要因、$A_{t}$ は需要関数シフト要因とする。
Step1: この結果から古典的最小二乗法による推定式をまとめると、
[供給関数]
$\hat Q_{i} = 4.8581 + 1.5094 P_{i} - 1.5202 E_{i} $
[需要関数]
$\hat Q_{i} = 16.6747 - 0.9088 P_{i} - 1.0369 A_{i}$
となる。
しかし、説明変数Pと誤差の間に関係があるため、同時方程式バイアスが生じてしまいます。
そこで、以下では同時方程式体系の推定法として代表的な二段階最小二乗法を用いて推定し直します。 | Python Code:
%matplotlib inline
# -*- coding:utf-8 -*-
from __future__ import print_function
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.sandbox.regression.gmm import IV2SLS
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Load the data
data = pd.read_csv('example/k1001.csv')
# Set up explanatory variables for equation 1
X1 = data[['P', 'E']].as_matrix().reshape(-1, 2)
X1 = sm.add_constant(X1)
# Set up explanatory variables for equation 2
X2 = data[['P', 'A']].as_matrix().reshape(-1, 2)
X2 = sm.add_constant(X2)
# Set up the dependent variable
Y = data[['Q']].as_matrix().reshape(-1)
# Run OLS (Ordinary Least Squares)
model1 = sm.OLS(Y, X1)
model2 = sm.OLS(Y, X2)
result1 = model1.fit()
result2 = model2.fit()
print(result1.summary())
print(result2.summary())
Explanation: Simultaneous Equation Systems
This notebook runs Chapter 10, "Simultaneous Equation Systems," of 『Rによる計量経済学』 (Econometrics with R) in Python.
Please download the datasets that accompany the textbook ("k1001.csv" etc.) from the publisher's website.
The explanations below summarize part of the book; see the book for a more detailed treatment.
Example 10.1
We estimate the following supply and demand functions.
$Q_{t} = \alpha_{0} + \alpha_{1} P_{t} + \alpha_{2} E_{t} + u_{t}$
$Q_{t} = \beta_{0} + \beta_{1} P_{t} + \beta_{2} A_{t} + v_{t}$
Here $Q_{t}$ is quantity, $P_{t}$ is price, $E_{t}$ is a supply-shift factor, and $A_{t}$ is a demand-shift factor.
End of explanation
# Set up the exogenous (instrumental) variables
inst = data[[ 'A', 'E']].as_matrix()
inst = sm.add_constant(inst)
# Run 2SLS (Two-Stage Least Squares)
model1 = IV2SLS(Y, X1, inst)
model2 = IV2SLS(Y, X2, inst)
result1 = model1.fit()
result2 = model2.fit()
print(result1.summary())
print(result2.summary())
Explanation: Summarizing the ordinary least squares estimates from these results, we get
[Supply function]
$\hat Q_{i} = 4.8581 + 1.5094 P_{i} - 1.5202 E_{i} $
[Demand function]
$\hat Q_{i} = 16.6747 - 0.9088 P_{i} - 1.0369 A_{i}$
These are the fitted equations.
However, because the explanatory variable P is correlated with the error term, simultaneous-equation bias arises.
Therefore, below we re-estimate the system using two-stage least squares, a standard estimator for simultaneous equation systems.
End of explanation |
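As an illustration of what IV2SLS does internally, here is a hedged manual two-stage sketch for the supply equation, reusing only objects defined above (data, Y, inst, sm, np); it is a rough check, not part of the original example, and its standard errors are not the correct 2SLS ones.
# Stage 1: regress the endogenous price P on the exogenous variables (const, A, E).
P = data[['P']].as_matrix().reshape(-1)
stage1 = sm.OLS(P, inst).fit()
P_hat = stage1.fittedvalues

# Stage 2: replace P with its fitted values in the supply equation.
X1_hat = sm.add_constant(np.column_stack([P_hat, data['E'].as_matrix()]))
stage2 = sm.OLS(Y, X1_hat).fit()
print(stage2.params)  # point estimates should be close to the IV2SLS result above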
12,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading the model.
Step1: Loading the dataset
Step2: The TensorFlow graph
Look at all operations to find out the ones that are interesting for weights/activations.
Step3: TensorFlow exposes a graph API in which nodes correspond to operations and directed edges correspond to to tensors flowing from one operation to the next. Operations know their respective input and output tensors. Tensors know by which operations they get produced and for which operations they serve as input. c.f. https
Step4: Getting the weights
For getting the weights we need to run the output tensor of operations that read the weights or the seconde input tensor of the actual MatMul/Conv operations.
Step5: Looking at the actual kernels.
Step6: The same goes for the bias.
Step7: Getting the activations and net inputs
Activations and net inputs can be found as the output tensors of the respective operations.
Slices of the example code here are used to unit test the network package.
Step8: Look at the activations
Step9: Getting input and output shape
For each tensor the shape can be retrieved as a TensorShape object or as a list.
Step10: Getting layer properties
Internally TensorFlow represents graph as protobufs that only define nodes with their inputs. This definition can either be retrieved as globally graph.graph_def as as op.node_def for every single operation.
Step11: Kernel size
Kernel size is not defined in the protobuf, but has to be read from the weight tensor shape.
Step12: Padding
Step13: Strides
Step14: Pool size
Pool size is defined as ksize in the maxpool definition. | Python Code:
# Restore model.
sess = tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('tf_mnist_model_layers/tf_mnist_model.ckpt.meta')
saver.restore(sess,tf.train.latest_checkpoint('tf_mnist_model_layers/'))
Explanation: Loading the model.
End of explanation
dataset = mnist.load_data()
train_data = dataset[0][0] / 255
train_data = train_data[..., np.newaxis].astype('float32')
train_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')
test_data = dataset[1][0] / 255
test_data = test_data[..., np.newaxis].astype('float32')
test_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')
plt.imshow(train_data[0, ..., 0])
# Get the first input tensor.
network_input = sess.graph.get_operations()[0].outputs[0]
network_input
Explanation: Loading the dataset
End of explanation
sess.graph.get_operations()
Explanation: The TensorFlow graph
Look at all operations to find out the ones that are interesting for weights/activations.
End of explanation
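Listing every operation is noisy, so a small sketch (an addition, using only the standard tf.Operation attributes op.type and op.name) can filter the graph down to the layer-related operations:
# Narrow the list down to the operations we care about by filtering on op.type.
interesting = [op for op in sess.graph.get_operations()
               if op.type in ('Conv2D', 'MatMul', 'Relu', 'BiasAdd', 'MaxPool')]
for op in interesting:
    print(op.type, op.name)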
conv_op = sess.graph.get_operation_by_name('conv2d_1/convolution')
print('inputs: ', list(conv_op.inputs))
print('outputs: ', list(conv_op.outputs))
weight_tensor = conv_op.inputs[1]
weight_tensor
# Producer ops for the tensor.
weight_tensor.op
# Consumer ops for the tensor.
weight_tensor.consumers()
Explanation: TensorFlow exposes a graph API in which nodes correspond to operations and directed edges correspond to tensors flowing from one operation to the next. Operations know their respective input and output tensors. Tensors know by which operations they get produced and for which operations they serve as input. c.f. https://www.tensorflow.org/extend/tool_developers/
End of explanation
weight_read_op = sess.graph.get_operation_by_name('conv2d_1/kernel/read')
weight_read_op
conv_op = sess.graph.get_operation_by_name('conv2d_1/convolution')
conv_op
weights_1 = sess.run(weight_read_op.outputs[0])
weights_2 = sess.run(conv_op.inputs[-1])
np.all(weights_1 == weights_2)
Explanation: Getting the weights
For getting the weights we need to run the output tensor of the operations that read the weights, or the second input tensor of the actual MatMul/Conv operations.
End of explanation
print(weights_1.shape)
for i in range(32):
plt.imshow(weights_1[..., 0, i])
plt.show()
weights_1[..., 0, 0]
Explanation: Looking at the actual kernels.
End of explanation
bias_add_op = sess.graph.get_operation_by_name('conv2d_1/BiasAdd')
bias_add_op
bias = sess.run(bias_add_op.inputs[-1])
bias
Explanation: The same goes for the bias.
End of explanation
activation_op = sess.graph.get_operation_by_name('conv2d_1/Relu')
activation_op.outputs[0]
activations = sess.run(activation_op.outputs[0], feed_dict={network_input: test_data[0:1]})
activations.shape
Explanation: Getting the activations and net inputs
Activations and net inputs can be found as the output tensors of the respective operations.
Slices of the example code here are used to unit test the network package.
End of explanation
for i in range(32):
plt.imshow(activations[0, ..., i])
plt.show()
activations[0, 5:10, 5:10, 0]
net_input_op = sess.graph.get_operation_by_name('conv2d_1/BiasAdd')
net_input_op.outputs[0]
net_inputs = sess.run(net_input_op.outputs[0], feed_dict={network_input: test_data[0:1]})
net_inputs.shape
for i in range(32):
plt.imshow(net_inputs[0, ..., i])
plt.show()
net_inputs[0, 5:10, 5:10, 0]
Explanation: Look at the activations
End of explanation
conv_op.inputs[0].shape
conv_op.inputs[0].shape.as_list()
Explanation: Getting input and output shape
For each tensor the shape can be retrieved as a TensorShape object or as a list.
End of explanation
sess.graph_def
activation_op.node_def
Explanation: Getting layer properties
Internally TensorFlow represents the graph as protobufs that only define nodes with their inputs. This definition can be retrieved either globally as graph.graph_def or as op.node_def for every single operation.
End of explanation
read_op = sess.graph.get_operation_by_name('conv2d_2/kernel/read')
read_op.outputs[0].shape.as_list()[:2]
Explanation: Kernel size
Kernel size is not defined in the protobuf, but has to be read from the weight tensor shape.
End of explanation
conv_op.node_def
# The padding value has to be decoded from bytes into a string.
conv_op.node_def.attr['padding'].s.decode('utf8')
Explanation: Padding
End of explanation
strides = conv_op.node_def.attr['strides']
strides.list.i[1], strides.list.i[2]
Explanation: Strides
End of explanation
max_pool_op = sess.graph.get_operation_by_name('max_pooling2d_1/MaxPool')
max_pool_op.node_def
kernel = max_pool_op.node_def.attr['ksize']
kernel.list.i[1], kernel.list.i[2]
Explanation: Pool size
Pool size is defined as ksize in the maxpool definition.
End of explanation |
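To tie the record together, a short sketch (an addition that simply reuses conv_op, weight_tensor and max_pool_op from the cells above) collects the extracted layer properties into one summary:
# Summarize the first conv layer's properties from the attributes read above.
conv_summary = {
    'kernel_size': weight_tensor.shape.as_list()[:2],
    'padding': conv_op.node_def.attr['padding'].s.decode('utf8'),
    'strides': (conv_op.node_def.attr['strides'].list.i[1],
                conv_op.node_def.attr['strides'].list.i[2]),
    'pool_size': (max_pool_op.node_def.attr['ksize'].list.i[1],
                  max_pool_op.node_def.attr['ksize'].list.i[2]),
}
print(conv_summary)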
12,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbulate Documentation
Random Processes
<a id='contents'></a>
RandomProcess and TimeIndex
Defining a RandomProcess explicitly as a function of time
Process values at particular time points
Mean function
Defining a RandomProcess incrementally
< Conditioning | Contents | Markov processes >
Be sure to import Symbulate using the following commands.
Step1: <a id='process'></a>
Random processes
A random process (a.k.a. stochastic process) is an indexed collection of random variables defined on some probability space. The index often represents "time", which can be either discrete or continuous.
- A discrete time stochastic process is a collection of countably many random variables, e.g. $X_n$ for $n=0 ,1, 2,\ldots$. For each outcome in the probability space, the outcome of a discrete time stochastic process is a sequence (in $n$). (Remember Python starts indexing at 0. The zero-based-index is often natural in stochastic process contexts in which there is a time 0, i.e. $X_0$ is the initial value of the process.)
- A continuous time stochastic process is a collection of uncountably many random variables, e.g. $X_t$ for $t\ge0$. For each outcome in the probability space, the outcome of a continuous time stochastic process is a function (a.k.a. sample path) (of $t$).
<a id='time'></a>
RandomProcess and TimeIndex
Much like RV, a RandomProcess can be defined on a ProbabilitySpace. For a RandomProcess, however, the TimeIndex must also be specified. TimeIndex takes a single parameter, the sampling frequency fs. While many values of fs are allowed, the two most common inputs for fs are
TimeIndex(fs=1), for a discrete time process $X_n, n = 0, 1, 2, \ldots$.
TimeIndex(fs=inf), for a continuous time process $X(t), t\ge0$.
<a id='Xt'></a>
Defining a RandomProcess explicitly as a function of time
A random variable is a function $X$ which maps an outcome $\omega$ in a probability space $\Omega$ to a real value $X(\omega)$. Similarly, a random process is a function $X$ which maps an outcome $\omega$ and a time $t$ in the time index set to the process value at that time $X(\omega, t)$. In some situations, the function defining the random process can be specified explicitly.
Example. Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7). In this case, there are only 4 possible sample paths.
$X(t) = 0$, when $A=0, B=0$, which occurs with probability $0.03$
$X(t) = 1$, when $A=1, B=0$, which occurs with probability $0.27$
$X(t) = t$, when $A=0, B=1$, which occurs with probability $0.07$
$X(t) = 1+t$, when $A=1, B=1$, which occurs with probability $0.63$
The following code defines a RandomProcess X by first defining an appropriate function f. Note that an outcome in the probability space consists of an $A, B$ pair, represented as $\omega_0$ and $\omega_1$ in the function. A RandomProcess is then defined by specifying
Step2: Like RV, RandomProcess only defines the random process. Values of the process can be simulated using the usual simulation tools. Since a stochastic process is a collection of random variables, many of the commands in the previous sections (Random variables, Multiple random variables, Conditioning) are useful when simulating stochastic processes.
For a given outcome in the probability space, a random process outputs a sample path which describes how the value of the process evolves over time for that particular outcome. Calling .plot() for a RandomProcess will return a plot of sample paths. The parameter alpha controls the weight of the line drawn in the plot. The paramaters tmin and tmax control the range of time values in the display.
Step3: Simulate and plot many sample paths, specifying the range of $t$ values to plot. Note that the darkness of a path represents its relative likelihood.
Step4: <a id='value'></a>
Process values at particular time points
The value $X(t)$ (or $X_n$) of a stochastic process at any particular point in time $t$ (or $n$) is a random variable. These random variables can be accessed using brackets []. Note that the value inside the brackets represents time $t$ or $n$. Many of the commands in the previous sections (Random variables, Multiple random variables, Conditioning) are useful when simulating stochastic processes.
Example. Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7).
Find the distribution of $X(1.5)$, the process value at time $t=1.5$.
Step5: Find the joint distribution of process values at times 1 and 1.5.
Step6: Find the conditional distribution of $X(1.5)$ given $X(1) = 1)$.
Step7: <a id='mean'></a>
Mean function
The mean function of a stochastic process $X(t)$ is a deterministic function which maps $t$ to $E(X(t))$. The mean function can be estimated and plotted by simulating many sample paths of the process and using .mean().
Step8: The variance function maps $t$ to $Var(X(t))$; similarly for the standard deviation function. These functions can be used to give error bands about the mean function.
Step9: <a id='rw'></a>
Defining a RandomProcess incrementally
There are few situations like the linear process in the example above in which the random process can be expressed explicitly as a function of the probability space outcome and the time value. More commonly, random processes are often defined incrementally, by specifying the next value of the process given the previous value.
Example. At each point in time $n=0, 1, 2, \ldots$ a certain type of "event" either occurs or not. Suppose the probability that the event occurs at any particular time is $p=0.5$, and occurrences are independent from time to time. Let $Z_n=1$ if an event occurs at time $n$, and $Z_n=0$ otherwise. Then $Z_0, Z_1, Z_2,\ldots$ is a Bernoulli process. In a Bernoulli process, let $X_n$ count the number of events that have occurred up to and including time $n$, starting with 0 events at time 0. Since $Z_{n+1}=1$ if an event occurs at time $n+1$ and $Z_{n+1} = 0$ otherwise, $X_{n+1} = X_n + Z_{n+1}$.
The following code defines the random process $X$. The probability space corresponds to the independent Bernoulli random variables; note that inf allows for infinitely many values. Also notice how the process is defined incrementally through $X_{n+1} = X_n + Z_{n+1}$.
Step10: The above code defines a random process incrementally. Once a RandomProcess is defined, it can be manipulated the same way, regardless of how it is defined. | Python Code:
from symbulate import *
%matplotlib inline
Explanation: Symbulate Documentation
Random Processes
<a id='contents'></a>
RandomProcess and TimeIndex
Defining a RandomProcess explicitly as a function of time
Process values at particular time points
Mean function
Defining a RandomProcess incrementally
< Conditioning | Contents | Markov processes >
Be sure to import Symbulate using the following commands.
End of explanation
def f(omega, t):
return omega[0] + omega[1] * t
X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f)
Explanation: <a id='process'></a>
Random processes
A random process (a.k.a. stochastic process) is an indexed collection of random variables defined on some probability space. The index often represents "time", which can be either discrete or continuous.
- A discrete time stochastic process is a collection of countably many random variables, e.g. $X_n$ for $n=0 ,1, 2,\ldots$. For each outcome in the probability space, the outcome of a discrete time stochastic process is a sequence (in $n$). (Remember Python starts indexing at 0. The zero-based-index is often natural in stochastic process contexts in which there is a time 0, i.e. $X_0$ is the initial value of the process.)
- A continuous time stochastic process is a collection of uncountably many random variables, e.g. $X_t$ for $t\ge0$. For each outcome in the probability space, the outcome of a continuous time stochastic process is a function (a.k.a. sample path) (of $t$).
<a id='time'></a>
RandomProcess and TimeIndex
Much like RV, a RandomProcess can be defined on a ProbabilitySpace. For a RandomProcess, however, the TimeIndex must also be specified. TimeIndex takes a single parameter, the sampling frequency fs. While many values of fs are allowed, the two most common inputs for fs are
TimeIndex(fs=1), for a discrete time process $X_n, n = 0, 1, 2, \ldots$.
TimeIndex(fs=inf), for a continuous time process $X(t), t\ge0$.
<a id='Xt'></a>
Defining a RandomProcess explicitly as a function of time
A random variable is a function $X$ which maps an outcome $\omega$ in a probability space $\Omega$ to a real value $X(\omega)$. Similarly, a random process is a function $X$ which maps an outcome $\omega$ and a time $t$ in the time index set to the process value at that time $X(\omega, t)$. In some situations, the function defining the random process can be specified explicitly.
Example. Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7). In this case, there are only 4 possible sample paths.
$X(t) = 0$, when $A=0, B=0$, which occurs with probability $0.03$
$X(t) = 1$, when $A=1, B=0$, which occurs with probability $0.27$
$X(t) = t$, when $A=0, B=1$, which occurs with probability $0.07$
$X(t) = 1+t$, when $A=1, B=1$, which occurs with probability $0.63$
The following code defines a RandomProcess X by first defining an appropriate function f. Note that an outcome in the probability space consists of an $A, B$ pair, represented as $\omega_0$ and $\omega_1$ in the function. A RandomProcess is then defined by specifying: the probability space, the time index set, and the $X(\omega, t)$ function.
End of explanation
X.sim(1).plot(alpha = 1)
Explanation: Like RV, RandomProcess only defines the random process. Values of the process can be simulated using the usual simulation tools. Since a stochastic process is a collection of random variables, many of the commands in the previous sections (Random variables, Multiple random variables, Conditioning) are useful when simulating stochastic processes.
For a given outcome in the probability space, a random process outputs a sample path which describes how the value of the process evolves over time for that particular outcome. Calling .plot() for a RandomProcess will return a plot of sample paths. The parameter alpha controls the weight of the line drawn in the plot. The paramaters tmin and tmax control the range of time values in the display.
End of explanation
X.sim(100).plot(tmin=0, tmax=2)
Explanation: Simulate and plot many sample paths, specifying the range of $t$ values to plot. Note that the darkness of a path represents its relative likelihood.
End of explanation
def f(omega, t):
return omega[0] * t + omega[1]
X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f)
X[1.5].sim(10000).plot()
Explanation: <a id='value'></a>
Process values at particular time points
The value $X(t)$ (or $X_n$) of a stochastic process at any particular point in time $t$ (or $n$) is a random variable. These random variables can be accessed using brackets []. Note that the value inside the brackets represents time $t$ or $n$. Many of the commands in the previous sections (Random variables, Multiple random variables, Conditioning) are useful when simulating stochastic processes.
Example. Let $X(t) = A + B t, t\ge0$ where $A$ and $B$ are independent with $A\sim$ Bernoulli(0.9) and $B\sim$ Bernoulli(0.7).
Find the distribution of $X(1.5)$, the process value at time $t=1.5$.
End of explanation
(X[1] & X[1.5]).sim(1000).plot("tile")
Explanation: Find the joint distribution of process values at times 1 and 1.5.
End of explanation
(X[1.5] | (X[1] == 1)).sim(10000).plot()
Explanation: Find the conditional distribution of $X(1.5)$ given $X(1) = 1$.
End of explanation
paths = X.sim(1000)
plot(paths)
plot(paths.mean(), 'r')
Explanation: <a id='mean'></a>
Mean function
The mean function of a stochastic process $X(t)$ is a deterministic function which maps $t$ to $E(X(t))$. The mean function can be estimated and plotted by simulating many sample paths of the process and using .mean().
End of explanation
# This illustrates the functionality, but is not an appropriate example for +/- 2SD
plot(paths)
paths.mean().plot('--')
(paths.mean() + 2 * paths.sd()).plot('--')
(paths.mean() - 2 * paths.sd()).plot('--')
Explanation: The variance function maps $t$ to $Var(X(t))$; similarly for the standard deviation function. These functions can be used to give error bands about the mean function.
End of explanation
P = Bernoulli(0.5)**inf
Z = RV(P)
X = RandomProcess(P, TimeIndex(fs=1))
X[0] = 0
for n in range(100):
X[n+1] = X[n] + Z[n+1]
Explanation: <a id='rw'></a>
Defining a RandomProcess incrementally
There are few situations like the linear process in the example above in which the random process can be expressed explicitly as a function of the probability space outcome and the time value. More commonly, random processes are often defined incrementally, by specifying the next value of the process given the previous value.
Example. At each point in time $n=0, 1, 2, \ldots$ a certain type of "event" either occurs or not. Suppose the probability that the event occurs at any particular time is $p=0.5$, and occurrences are independent from time to time. Let $Z_n=1$ if an event occurs at time $n$, and $Z_n=0$ otherwise. Then $Z_0, Z_1, Z_2,\ldots$ is a Bernoulli process. In a Bernoulli process, let $X_n$ count the number of events that have occurred up to and including time $n$, starting with 0 events at time 0. Since $Z_{n+1}=1$ if an event occurs at time $n+1$ and $Z_{n+1} = 0$ otherwise, $X_{n+1} = X_n + Z_{n+1}$.
The following code defines the random process $X$. The probability space corresponds to the independent Bernoulli random variables; note that inf allows for infinitely many values. Also notice how the process is defined incrementally through $X_{n+1} = X_n + Z_{n+1}$.
End of explanation
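The same incremental pattern can define other processes. As a sketch (an addition that reuses only the constructs shown above, and assumes Symbulate RVs support scalar arithmetic like the sum used in the counting process), here is a simple random walk with +/-1 steps:
P = Bernoulli(0.5)**inf
Z = RV(P)
W = RandomProcess(P, TimeIndex(fs=1))
W[0] = 0
for n in range(100):
    # 2*Z - 1 maps the Bernoulli 0/1 outcomes to -1/+1 steps
    W[n+1] = W[n] + 2 * Z[n+1] - 1

W.sim(100).plot(tmin=0, tmax=50)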
X.sim(1).plot(alpha = 1)
X.sim(100).plot(tmin = 0, tmax = 5)
X[5].sim(10000).plot()
(X[5] & X[10]).sim(10000).plot("tile")
(X[10] | (X[5] == 3)).sim(10000).plot()
(X[5] | (X[10] == 4)).sim(10000).plot()
Explanation: The above code defines a random process incrementally. Once a RandomProcess is defined, it can be manipulated the same way, regardless of how it is defined.
End of explanation |
12,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step5: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this
Step8: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step9: Building the model
Below is a function where I build the graph for the network.
Step10: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step11: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
Step12: Saved checkpoints
Read up on saving and loading checkpoints here
Step13: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step14: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
chars[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
train_x[:,:50]
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
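A quick check of the generator (a small addition, reusing train_x and train_y defined earlier): each yielded pair is a batch_size x num_steps window.
batch_gen = get_batch([train_x, train_y], 50)
x, y = next(batch_gen)
print(x.shape, y.shape)
print(x[:2, :10])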
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
#lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
# Add dropout to the cell
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
#cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
cell = tf.nn.rnn_cell.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
# This makes a list where each element is on step in the sequence
#rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(1, num_steps, x_one_hot)]
# Run each sequence step through the RNN and collect the outputs
#outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
outputs, state = tf.nn.rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
#seq_output = tf.concat(outputs, axis=1)
seq_output = tf.concat(1, outputs)
output = tf.reshape(seq_output, [-1, lstm_size])
    # Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
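Since the advice above compares parameter count with dataset size, here is a rough sketch (an addition) that estimates the parameter count for the current hyperparameters by building the graph and summing the sizes of the trainable variables:
# Build the graph once just to count parameters (build_rnn resets the default graph itself).
tmp_model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps,
                      learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers)
n_params = np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()])
print('Approximate number of trainable parameters:', int(n_params))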
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
Explanation: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
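The CheckpointState returned above also lists every checkpoint saved during training; a short sketch (an addition) prints them so you can pick one by validation loss:
ckpt_state = tf.train.get_checkpoint_state('checkpoints')
if ckpt_state is not None:
    for path in ckpt_state.all_model_checkpoint_paths:
        print(path)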
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
checkpoint = "checkpoints/____.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
12,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
StyleGAN2
Step1: Load stylegan
You can see what pre-trained models are available with stylegan2.get_pretrained_models.
Step2: The variable network_pkl refers to the location of the trained pkl file. You can load your own pkl locally from your computer, or get one of the pretrained models listed above.
Step3: Generate random samples
stylegan2.random_sample(n) will generate n random images from the latent space of your trained model. If your model is conditional, you must also supply a label vector.
Step4: You can quickly generate a "latent walk" video, a random interpolation through the latent space of your model of duration_sec seconds long, with stylegan2.generate_interpolation_video.
Step5: Display the generated video inline in the notebook. | Python Code:
%tensorflow_version 1.x
!pip3 install --quiet ml4a
Explanation: StyleGAN2: hi-res generative modeling
StyleGAN2 is a generative model architecture which generates state-of-the-art, high-resolution images. This module is based on the original code and paper by NVIDIA, and comes with several pre-trained models, as well as functions for sampling from the GAN, generating interpolations, and performing operations on the latent space to achieve style-transfer like effects.
Set up ml4a and enable GPU
If you don't already have ml4a installed, or you are opening this in Colab, first enable GPU (Runtime > Change runtime type), then run the following cell to install ml4a and its dependencies.
End of explanation
from ml4a import image
from ml4a.models import stylegan2
stylegan2.get_pretrained_models()
Explanation: Load stylegan
You can see what pre-trained models are available with stylegan2.get_pretrained_models.
End of explanation
network_pkl = stylegan2.get_pretrained_model('cats')
stylegan2.load_model(network_pkl, randomize_noise=False)
Explanation: The variable network_pkl refers to the location of the trained pkl file. You can load your own pkl locally from your computer, or get one of the pretrained models listed above.
End of explanation
samples, _ = stylegan2.random_sample(3, label=None, truncation=1.0)
image.display(samples)
Explanation: Generate random samples
stylegan2.random_sample(n) will generate n random images from the latent space of your trained model. If your model is conditional, you must also supply a label vector.
End of explanation
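A short sketch (an addition using only the functions shown above) compares samples drawn at different truncation values, which trades diversity against fidelity:
for truncation in [0.5, 1.0]:
    samples, _ = stylegan2.random_sample(3, label=None, truncation=truncation)
    image.display(samples)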
latent_video = stylegan2.generate_interpolation_video(
'latent_interpolation.mp4',
labels=None,
truncation=1,
duration_sec=5.0
)
Explanation: You can quickly generate a "latent walk" video, a random interpolation through the latent space of your model of duration_sec seconds long, with stylegan2.generate_interpolation_video.
End of explanation
image.display_local(latent_video)
Explanation: Display the generated video inline in the notebook.
End of explanation |
12,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a recommendation model for Google Analytics data using BigQuery ML
This notebook accompanies the article
Training a recommendation model for Google Analytics data using BigQuery ML
Use time spent on page as ranking
Step1: Scaling and clipping
Scale the duration by median and clip it to lie between [0,1] | Python Code:
%%bigquery df
WITH CTE_visitor_content_time AS (
SELECT
fullVisitorID AS visitorId,
visitNumber,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
hits.time AS hit_time
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
GROUP BY
fullVisitorId,
visitNumber,
latestContentId,
hits.time ),
CTE_visitor_page_content AS (
SELECT *,
# Schema: https://support.google.com/analytics/answer/3437719?hl=en
# For a completely unique visit-session ID, we use the combination of fullVisitorId and visitNumber:
(LEAD(hit_time, 1) OVER (PARTITION BY CONCAT(visitorId, visitNumber, latestContentId) ORDER BY hit_time ASC) - hit_time) AS session_duration
FROM CTE_visitor_content_time
)
-- Aggregate web stats
SELECT
visitorId,
latestContentId as contentId,
SUM(session_duration) AS session_duration
FROM
CTE_visitor_page_content
WHERE
latestContentId IS NOT NULL
GROUP BY
visitorId,
latestContentId
HAVING
session_duration > 0
df.head()
df.describe()
df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5]);
Explanation: Training a recommendation model for Google Analytics data using BigQuery ML
This notebook accompanies the article
Training a recommendation model for Google Analytics data using BigQuery ML
Use time spent on page as ranking
End of explanation
%%bigquery
CREATE TEMPORARY FUNCTION CLIP_LESS(x FLOAT64, a FLOAT64) AS (
IF (x < a, a, x)
);
CREATE TEMPORARY FUNCTION CLIP_GT(x FLOAT64, b FLOAT64) AS (
IF (x > b, b, x)
);
CREATE TEMPORARY FUNCTION CLIP(x FLOAT64, a FLOAT64, b FLOAT64) AS (
CLIP_GT(CLIP_LESS(x, a), b)
);
CREATE OR REPLACE TABLE advdata.ga360_recommendations_data
AS
WITH CTE_visitor_page_content AS (
SELECT
# Schema: https://support.google.com/analytics/answer/3437719?hl=en
# For a completely unique visit-session ID, we use the combination of fullVisitorId and visitNumber:
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
(LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
GROUP BY
fullVisitorId,
visitNumber,
latestContentId,
hits.time ),
aggregate_web_stats AS (
-- Aggregate web stats
SELECT
visitorId,
latestContentId as contentId,
SUM(session_duration) AS session_duration
FROM
CTE_visitor_page_content
WHERE
latestContentId IS NOT NULL
GROUP BY
visitorId,
latestContentId
HAVING
session_duration > 0
),
normalized_session_duration AS (
SELECT APPROX_QUANTILES(session_duration,100)[OFFSET(50)] AS median_duration
FROM aggregate_web_stats
)
SELECT
* EXCEPT(session_duration, median_duration),
CLIP(0.3 * session_duration / median_duration, 0, 1.0) AS normalized_session_duration
FROM
aggregate_web_stats, normalized_session_duration
%%bigquery df_scaled
SELECT * FROM advdata.ga360_recommendations_data
df_scaled[["normalized_session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5]);
df_scaled.head()
%%bash
cd ../flex_slots
./run_query_on_flex_slots.sh
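The model-training SQL itself lives in that flex-slots script and is not shown in this notebook. As a rough sketch of what it would contain (a minimal matrix factorization model; the model and table names are taken from the surrounding cells, everything else, such as regularization or feedback-type options, is omitted here and would be set in the script):
%%bigquery
CREATE OR REPLACE MODEL advdata.ga360_recommendations_model
OPTIONS(model_type='matrix_factorization',
        user_col='visitorId',
        item_col='contentId',
        rating_col='normalized_session_duration')
AS
SELECT visitorId, contentId, normalized_session_duration
FROM advdata.ga360_recommendations_data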
%%bigquery
SELECT
visitorId,
ARRAY_AGG(STRUCT(contentId, predicted_normalized_session_duration)
ORDER BY predicted_normalized_session_duration DESC
LIMIT 3)
FROM ML.RECOMMEND(MODEL advdata.ga360_recommendations_model)
WHERE predicted_normalized_session_duration < 1
GROUP BY visitorId
LIMIT 5
Explanation: Scaling and clipping
Scale the duration by median and clip it to lie between [0,1]
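As a worked example (numbers invented for illustration): if the median session duration were 60 seconds, a 200-second visit maps to CLIP(0.3 * 200 / 60, 0, 1.0) = 1.0, while a 20-second visit maps to 0.1.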
End of explanation |
12,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
Introduction
This is a simple example of how to use raw pygslib to compute variograms with gridded data
Step1: Getting the data ready for work
You can use Pandas to import your data from csv, MS Excel, sql database, json, html, among others. If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: Testing variogram function gamv
This is the example in the book GSLIB User Guide (Problem set two
Step3: The output is a set of 3d arrays (pdis,pgam, phm,ptm,phv,ptv,pnump) with dimensions (nvarg, ndir, nlag+2), representing the experimental variograms output, and 1D array (cldi, cldj, cldg, cldh) representing variogram cloud for first variogram/direction.
This structure is complex, but it mirrors the output of the standalone GSLIB gamv program.
To plot the variograms we need to use pdis (pair distances) and pgam (variogram values). Remember these arrays are 3D; to use the right values keep in mind that
Step4: Comparing results with GSLIB output
This is the first variogram output from gslib (direction 1 and 2)
Semivariogram tail
Step5: Plotting the results | Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
#make the plots inline
%matplotlib inline
Explanation: PyGSLIB
Introduction
This is a simple example of how to use raw pygslib to compute variograms with gridded data
End of explanation
#get the data in GSLIB format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/true.dat')
# This is a 2D grid file with two variables; in addition we need a
# dummy BHID = 1 (this is like domain to avoid creating pairs from apples and oranges)
mydata['bhid']=1
# printing to verify results
print (' \n **** last 5 rows in my datafile \n\n ', mydata.tail(n=5))
Explanation: Getting the data ready for work
You can use Pandas to import your data from csv, MS Excel, sql database, json, html, among others. If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
# these are the parameters we need. Note that unlike GSLIB, this dictionary also stores
# the actual data
#important! python is case sensitive 'bhid' is not equal to 'BHID'
# the program gam (fortran code) uses a flattened array of the 2D arrays (unlike gamv, which
# works with 2D arrays). This is a workaround to input the data in the right format
# WARNING: this is only for GAM and make sure you use FORTRAN order.
vr=mydata[['Primary', 'Secondary']].values.flatten(order='F')
U_var= mydata[['Primary']].var()
V_var= mydata[['Secondary']].var()
parameters = {
'nx' : 50, # number of rows in the gridded data
'ny' : 50, # number of columns in the gridded data
'nz' : 1, # number of levels in the gridded data
'xsiz' : 1, # size of the cell in x direction
'ysiz' : 1, # size of the cell in y direction
'zsiz' : 1, # size of the cell in z direction
'bhid' : mydata['bhid'], # bhid for downhole variogram, array('i') with bounds (nd)
'vr' : vr, # Variables, array('f') with bounds (nd,nv), nv is number of variables
'tmin' : -1.0e21, # trimming limits, float
'tmax' : 1.0e21, # trimming limits, float
'nlag' : 10, # number of lags, int
'ixd' : [1,0], # direction x
'iyd' : [0,1], # direction y
'izd' : [0,0], # direction z
'isill' : 1, # standardize sills? (0=no, 1=yes), int
'sills' : [U_var, V_var], # variance used to std the sills, array('f') with bounds (nv)
'ivtail' : [1,1,2,2], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms
'ivhead' : [1,1,2,2], # head var., array('i') with bounds (nvarg)
'ivtype' : [1,3,1,3]} # variogram type, array('i') with bounds (nvarg)
#check the variogram is ok
#TODO: assert gslib.check_gam_par(parameters)==1 , 'sorry this parameter file is wrong'
#Now we are ready to calculate the variogram
pdis,pgam, phm,ptm,phv,ptv,pnump= pygslib.gslib.gam(parameters)
Explanation: Testing variogram function gamv
This is the example in the book GSLIB User Guide (Problem set two: variograms). The variogram parameter file (modified) may look like this in gslib:
Parameters for GAM
******************
START OF PARAMETERS:
true.dat -file with data
2 1 2 - number of variables, column numbers
-1.0e21 1.0e21 - trimming limits
gam.out -file for variogram output
1 -grid or realization number
50 0.5 1.0 -nx, xmn, xsiz
50 0.5 1.0 -ny, ymn, ysiz
1 0.5 1.0 -nz, zmn, zsiz
2 10 -number of directions, number of lags
1 0 0 -ixd(1),iyd(1),izd(1)
0 1 0 -ixd(2),iyd(2),izd(2)
1 -standardize sill? (0=no, 1=yes)
4 -number of variograms
1 1 1 -tail variable, head variable, variogram type
1 1 3 -tail variable, head variable, variogram type
2 2 1 -tail variable, head variable, variogram type
2 2 3 -tail variable, head variable, variogram type
Remember this is GSLIB... use these codes to define variograms
type 1 = traditional semivariogram
2 = traditional cross semivariogram
3 = covariance
4 = correlogram
5 = general relative semivariogram
6 = pairwise relative semivariogram
7 = semivariogram of logarithms
8 = semimadogram
Note: The indicator variograms are not implemented in the fortran module. The user may define externally the indicator variables and run variogram (type 1) for indicator variography
End of explanation
nvrg = pdis.shape[0]
ndir = pdis.shape[1]
nlag = pdis.shape[2]-2
print ('nvrg: ', nvrg, '\nndir: ', ndir, '\nnlag: ', nlag)
Explanation: The output is a set of 3d arrays (pdis,pgam, phm,ptm,phv,ptv,pnump) with dimensions (nvarg, ndir, nlag+2), representing the experimental variograms output, and 1D array (cldi, cldj, cldg, cldh) representing variogram cloud for first variogram/direction.
This structure is complex, but it mirrors the output of the standalone GSLIB gamv program.
To plot the variograms we need to use pdis (pair distances) and pgam (variogram values). Remember these arrays are 3D; to use the right values keep in mind that:
dim 0 is the variogram number
dim 1 is the direction
dim 2 is the lag
the number of variograms, directions and lags can be calculated as follows:
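For instance (mirroring the slicing used in the cells below; the variable names here are just illustrative):
lag_dist, gamma_vals = pdis[0, 1, :-2], pgam[0, 1, :-2] # first variogram, second direction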
End of explanation
import pandas as pd
variogram=0
dir1=0
dir2=1
pd.DataFrame({'dis1': pdis[variogram, dir1, : -2],
'gam1': pgam[variogram, dir1, : -2],
'npr1': pnump[variogram, dir1, : -2],
'dis2': pdis[variogram, dir2, : -2],
'gam2': pgam[variogram, dir2, : -2],
'npr2': pnump[variogram, dir2, : -2]})
Explanation: Comparing results with GSLIB output
This is the first variogram output from gslib (direction 1 and 2)
Semivariogram tail:U head:U direction 1
1 1.000 0.49119 2450 2.53364 2.53909
2 2.000 0.62754 2400 2.49753 2.52300
3 3.000 0.75987 2350 2.46945 2.50771
4 4.000 0.81914 2300 2.45293 2.49312
5 5.000 0.83200 2250 2.46641 2.49146
6 6.000 0.88963 2200 2.48491 2.50075
7 7.000 0.93963 2150 2.50757 2.52079
8 8.000 0.96571 2100 2.53510 2.53821
9 9.000 1.00641 2050 2.55013 2.56459
10 10.000 0.99615 2000 2.57868 2.55968
Semivariogram tail:U head:U direction 2
1 1.000 0.49560 2450 2.61572 2.51775
2 2.000 0.63327 2400 2.64921 2.45918
3 3.000 0.67552 2350 2.68369 2.37889
4 4.000 0.72309 2300 2.72356 2.34000
5 5.000 0.75444 2250 2.75722 2.31314
6 6.000 0.83449 2200 2.79176 2.30304
7 7.000 0.85967 2150 2.78771 2.27861
8 8.000 0.89283 2100 2.78614 2.25039
9 9.000 0.87210 2050 2.79200 2.17433
10 10.000 0.89131 2000 2.81165 2.15231
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
#plotting the variogram 1 only
v=0
# in all the directions calculated
for d in range(ndir):
ixd=parameters['ixd'][d]
iyd=parameters['iyd'][d]
izd=parameters['izd'][d]
plt.plot (pdis[v, d, :-2], pgam[v, d, :-2], '-o', label=str(ixd) + '/' + str(iyd) + '/' + str(izd))
# adding nice features to the plot
plt.legend()
plt.grid(True)
plt.show()
Explanation: Plotting the results
End of explanation |
12,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is my fourth attempt at creating a model using sklearn algorithms
In this iteration of analysis we'll be looking at breaking out categorical variables and making them binary, and seeing if that makes our model more accurate.
My last three attempts at this are below
Step1: Load the data from our JSON file.
The data is stored as a dictionary of dictionaries in the json file. We store it that way because it's easy to add data to the existing master data file. Also, I haven't figured out how to get it in a database yet.
Step2: Clean up the data a bit
Right now the 'shared' and 'split' are included in number of bathrooms. If I were to convert that to a number I would consider a shared/split bathroom to be half or 0.5 of a bathroom.
Step3: Let's get a look at what the prices look like
To visualize it we need to get rid of null values. I haven't figured out the best way to clean this up yet. For now I'm going to drop any rows that have a null value, though I recognize that this is not a good analysis practice. We ended up dropping ~15% of data points.
😬
Also there were some CRAZY outliers, and this analysis is focused on finding a model for apartments for the 99% of us that can't afford crazy extravigant apartments
Step4: It looks like Portland!!!
Let's cluster the data. Start by creating a list of [['lat','long'], ...]
Step5: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know
Step6: We chose our neighborhoods!
I've found that every once in a while the centers end up in different points, but are fairly consistent. Now let's process our data points and figure out where the closest neighborhood center is to it!
Step7: Create a function that will label each point with a number corresponding to its neighborhood
Step8: Here's the new Part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data.
Step9: Ok, lets put it through Decision Tree!
What about Random Forest?
Step10: Wow! up to .87! That's our best yet! What if we add more trees???
Step11: Up to .88!
So what is our goal now? I'd like to see if adjusting the number of neighborhoods increases the accuracy. same for the affect with the number of trees
Step12: Looks like the optimum is right around 10 or 11, and then starts to drop off. Let's get a little more granular and look at a smaller range
Step13: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optomizations after we properly process our data. Hopefully we'll see some more consistency then too. | Python Code:
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import json
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: This is my fourth attempt at creating a model using sklearn algorithms
In this iteration of analysis we'll be looking at breaking out categorical variables and making them binary, and seeing if that makes our model more accurate.
My last three attempts at this are below:
https://github.com/rileyrustad/CLCrawler/blob/master/First_Analysis.ipynb
https://github.com/rileyrustad/CLCrawler/blob/master/Second_Analysis.ipynb
https://github.com/rileyrustad/CLCrawler/blob/master/Third_Analysis.ipynb
Start with the Imports
End of explanation
with open('/Users/mac28/src/pdxapartmentfinder/pipeline/data/MasterApartmentData.json') as f:
my_dict = json.load(f)
dframe = DataFrame(my_dict)
dframe = dframe.T
dframe.describe()
Explanation: Load the data from our JSON file.
The data is stored as a dictionary of dictionaries in the json file. We store it that way because it's easy to add data to the existing master data file. Also, I haven't figured out how to get it in a database yet.
End of explanation
dframe.bath = dframe.bath.replace('shared',0.5)
dframe.bath = dframe.bath.replace('split',0.5)
Explanation: Clean up the data a bit
Right now the 'shared' and 'split' are included in number of bathrooms. If I were to convert that to a number I would consider a shared/split bathroom to be half or 0.5 of a bathroom.
End of explanation
df = dframe[dframe.price < 10000][['bath','bed','feet','price']].dropna()
sns.distplot(df.price)
data = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat')
Explanation: Let's get a look at what the prices look like
To visualize it we need to get rid of null values. I haven't figured out the best way to clean this up yet. For now I'm going to drop any rows that have a null value, though I recognize that this is not a good analysis practice. We ended up dropping ~15% of data points.
😬
Also there were some CRAZY outliers, and this analysis is focused on finding a model for apartments for the 99% of us that can't afford crazy extravagant apartments
End of explanation
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
Explanation: It looks like Portland!!!
Let's cluster the data. Start by creating a list of [['lat','long'], ...]
End of explanation
from sklearn.cluster import KMeans
km = KMeans(n_clusters=40)
km.fit(data)
neighborhoods = km.cluster_centers_
%pylab inline
figure(1,figsize=(20,12))
plot([row[1] for row in data],[row[0] for row in data],'b.')
for i in km.cluster_centers_:
plot(i[1],i[0], 'g*',ms=25)
'''Note to Riley: come back and make it look pretty'''
Explanation: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know
End of explanation
neighborhoods = neighborhoods.tolist()
for i in enumerate(neighborhoods):
i[1].append(i[0])
print neighborhoods
Explanation: We chose our neighborhoods!
I've found that every once in a while the centers end up in different points, but are fairly consistent. Now let's process our data points and figure out where the closest neighborhood center is to it!
End of explanation
def clusterer(X, Y,neighborhoods):
neighbors = []
for i in neighborhoods:
distance = ((i[0]-X)**2 + (i[1]-Y)**2)
neighbors.append(distance)
closest = min(neighbors)
return neighbors.index(closest)
neighborhoodlist = []
for i in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][i],dframe['long'][i],neighborhoods))
dframe['neighborhood'] = neighborhoodlist
dframe
Explanation: Create a function that will label each point with a number corresponding to its neighborhood
End of explanation
from sklearn import preprocessing
def CategoricalToBinary(dframe,column_name):
le = preprocessing.LabelEncoder()
listy = le.fit_transform(dframe[column_name])
dframe[column_name] = listy
unique = dframe[column_name].unique()
serieslist = [list() for _ in xrange(len(unique))]
for column, _ in enumerate(serieslist):
for i, item in enumerate(dframe[column_name]):
if item == column:
serieslist[column].append(1)
else:
serieslist[column].append(0)
dframe[column_name+str(column)] = serieslist[column]
return dframe
pd.set_option('max_columns', 100)
dframe = CategoricalToBinary(dframe,'housingtype')
dframe = CategoricalToBinary(dframe,'parking')
dframe = CategoricalToBinary(dframe,'laundry')
dframe = CategoricalToBinary(dframe,'smoking')
dframe = CategoricalToBinary(dframe,'wheelchair')
dframe = CategoricalToBinary(dframe,'neighborhood')
dframe
dframe = dframe.drop('date',1)
dframe = dframe.drop('housingtype',1)
dframe = dframe.drop('parking',1)
dframe = dframe.drop('laundry',1)
dframe = dframe.drop('smoking',1)
dframe = dframe.drop('wheelchair',1)
dframe = dframe.drop('neighborhood',1)
dframe = dframe.drop('time',1)
columns=list(dframe.columns)
from __future__ import division
print len(dframe)
df2 = dframe[dframe.price < 10000][columns].dropna()
print len(df2)
print len(df2)/len(dframe)
price = df2[['price']].values
columns.pop(columns.index('price'))
features = df2[columns].values
from sklearn.cross_validation import train_test_split
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1, random_state=42)
Explanation: Here's the new part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data.
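As an aside, pandas can build the same kind of dummy columns in a single call; a sketch using the column names from the cells above (when columns= is passed, get_dummies replaces those columns with their binary indicators):
dummies = pd.get_dummies(dframe, columns=['housingtype', 'parking', 'laundry', 'smoking', 'wheelchair', 'neighborhood'])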
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
plt.scatter(forest_pred,price_test)
df2['predictions'] = reg.predict(df2[columns])
df2['predictions_diff'] = df2['predictions']-df2['price']
sd = np.std(df2['predictions_diff'])
sns.kdeplot(df2['predictions_diff'][df2['predictions_diff']>-150][df2['predictions_diff']<150])
sns.plt.xlim(-150,150)
data = df2[dframe.lat > 45.45][df2.lat < 45.6][df2.long < -122.4][df2.long > -122.8][df2['predictions_diff']>-150][df2['predictions_diff']<150]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat', c = 'predictions_diff',s=10,cmap='coolwarm')
dframe
print np.mean([1,2,34,np.nan])
def averager(dframe):
dframe = dframe.T
dframe.dropna()
averages = {}
for listing in dframe:
try:
key = str(dframe[listing]['bed'])+','+str(dframe[listing]['bath'])+','+str(dframe[listing]['neighborhood'])+','+str(dframe[listing]['feet']-dframe[listing]['feet']%50)
if key not in averages:
averages[key] = {'average_list':[dframe[listing]['price']], 'average':0}
elif key in averages:
averages[key]['average_list'].append(dframe[listing]['price'])
except TypeError:
continue
for entry in averages:
averages[entry]['average'] = np.mean(averages[entry]['average_list'])
return averages
averages = averager(dframe)
print averages
dframe['averages']= averages[str(dframe['bed'])+','+str(dframe['bath'])+','+str(dframe['neighborhood'])+','+str(dframe['feet']-dframe['feet']%50)]
dframe.T
Explanation: Ok, let's put it through a Decision Tree!
What about Random Forest?
End of explanation
reg = RandomForestRegressor(n_estimators = 100)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
print plt.scatter(forest_pred,price_test)
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth = 5)
reg.fit(features_train, price_train)
print len(features_train[0])
columns = [str(x) for x in columns]
print columns
from sklearn.tree import export_graphviz
export_graphviz(reg,feature_names=columns)
Explanation: Wow! up to .87! That's our best yet! What if we add more trees???
End of explanation
def neighborhood_optimizer(dframe,neighborhood_number_range, counter_num):
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
r2_dict = []
for i in neighborhood_number_range:
counter = counter_num
average_accuracy_list = []
while counter > 0:
km = KMeans(n_clusters=i)
km.fit(data)
neighborhoods = km.cluster_centers_
neighborhoods = neighborhoods.tolist()
for x in enumerate(neighborhoods):
x[1].append(x[0])
neighborhoodlist = []
for z in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][z],dframe['long'][z],neighborhoods))
dframecopy = dframe.copy()
dframecopy['neighborhood'] = Series((neighborhoodlist), index=dframe.index)
df2 = dframecopy[dframe.price < 10000][['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'price','neighborhood']].dropna()
features = df2[['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'neighborhood']].values
price = df2[['price']].values
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1)
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
counter -= 1
average_accuracy_list.append(r2_score(forest_pred, price_test))
total = 0
for entry in average_accuracy_list:
total += entry
r2_accuracy = total/len(average_accuracy_list)
r2_dict.append((i,r2_accuracy))
print r2_dict
return r2_dict
neighborhood_number_range = [i for _,i in enumerate(range(2,31,2))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
r2_dict[:][0]
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
Explanation: Up to .88!
So what is our goal now? I'd like to see if adjusting the number of neighborhoods increases the accuracy, and the same for the effect of the number of trees.
End of explanation
neighborhood_number_range = [i for _,i in enumerate(range(7,15))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
print r2_dict
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
Explanation: Looks like the optimum is right around 10 or 11, and then starts to drop off. Let's get a little more granular and look at a smaller range
End of explanation
r2_dict = neighborhood_optimizer(dframe,[10,11,12],25)
Explanation: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optimizations after we properly process our data. Hopefully we'll see some more consistency then too.
End of explanation |
12,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS224U Homework 5
This homework is distributed in three content-identical formats (html, py, ipynb) as part of the SippyCup codebase. All seven problems are required. You're encouraged to turn your work in as iPython HTML output or as a Python script. This work is due by the start of class on May 16.
Be sure to put this code, or run this notebook, inside the SippyCup codebase.
Arithmetic domain
Question 1
Question 2
Question 3
Travel domain
Question 4
Question 5
GeoQuery domain
Question 6
Question 7
Arithmetic domain
SippyCup includes a module arithmetic with a class ArithmeticDomain that brings together
the examples from unit 1. Here's an example of the sort of use you'll make of this domain
for these homework problems.
Step1: This is a convenience function we'll use for seeing what the grammar is doing
Step2: Question 1
Your task is to extend ArithmeticDomain to include the unary operators squared and cubed,
which form expressions like three squared and nine cubed. The following code will help
you get started on this. Submit
Step3: Question 2
Your task is to extend ArithmeticDomain to support numbers with decimals, so
that you can parse all expressions of the form N point D where N and D both denote
ints. You can assume that both numbers are both spelled out as single words, as in
"one point twelve" rather than "one point one two". (The grammar fragment has only "one",
"two", "three", and "four" anyway.) Submit
Step4: Question 3
Extend the grammar to support the multi-word expression the average of as
in the average of one and four.
Your solution is required to treat each word in this multi-word expression as its own lexical item.
Other than that, you have a lot of design freedom.
Your solution can be limited to the case where the conjunction consists of just two numbers.
Submit
Step5: Travel domain
Here's an illustration of how parsing and interpretation work in this domain
Step7: For these questions, we'll combine grammars with machine learning.
Here's now to train and evaluate a model using the grammar
that is included in TravelDomain along with a basic feature
function.
Step8: Question 4
With the default travel grammar, many of the errors on training examples occur because the origin
isn't marked by "from". You might have noticed that "directions New York to Philadelphia"
is not handled properly in our opening example. Other examples include
"transatlantic cruise southampton to tampa",
"fly boston to myrtle beach spirit airlines", and
"distance usa to peru". Your tasks
Step9: Question 5
Your extended grammar for question 4 likely did some harm to the space of parses we
consider. Consider how number of parses has changed. (If it's not intuitively clear why,
check out some of the parses to see what's happening!)
Your task
Step10: GeoQuery domain
Here are a few simple examples from the GeoQuery domain
Step11: And we can train models just as we did for the travel domain, though we have to be more
attentive to special features of semantic parsing in this domain. Here's a run with the
default scoring model, metrics, etc.
Step12: Question 6
Two deficiencies of the current grammar
Step13: Question 7
The success of the empty_denotation feature demonstrates the potential of denotation features. Can we go further? Experiment with a feature or features that characterize the size of the denotation (that is, the number of answers). This will involve extending geo_domain.features and running another assessment. Submit | Python Code:
from arithmetic import ArithmeticDomain
from parsing import parse_to_pretty_string
# Import the domain and make sure all is well:
math_domain = ArithmeticDomain()
# Core grammar:
math_grammar = math_domain.grammar()
# A few examples:
parses = math_grammar.parse_input("minus two plus three")
for parse in parses:
print('\nParse:\n', parse_to_pretty_string(parse, show_sem=True))
print('Denotation:', math_domain.execute(parse.semantics))
Explanation: CS224U Homework 5
This homework is distributed in three content-identical formats (html, py, ipynb) as part of the SippyCup codebase. All seven problems are required. You're encouraged to turn your work in as iPython HTML output or as a Python script. This work is due by the start of class on May 16.
Be sure to put this code, or run this notebook, inside the SippyCup codebase.
Arithmetic domain
Question 1
Question 2
Question 3
Travel domain
Question 4
Question 5
GeoQuery domain
Question 6
Question 7
Arithmetic domain
SippyCup includes a module arithmetic with a class ArithmeticDomain that brings together
the examples from unit 1. Here's an example of the sort of use you'll make of this domain
for these homework problems.
End of explanation
def display_examples(utterances, grammar=None, domain=None):
for utterance in utterances:
print("="*70)
print(utterance)
parses = grammar.parse_input(utterance)
for parse in parses:
print('\nParse:\n', parse_to_pretty_string(parse, show_sem=True))
print('Denotation:', domain.execute(parse.semantics))
Explanation: This is a convenience function we'll use for seeing what the grammar is doing:
End of explanation
from arithmetic import ArithmeticDomain
from parsing import Rule, add_rule
# Resort to the original grammar to avoid repeatedly adding the same
# rules to the grammar during debugging, which multiplies the number
# of parses without changing the set of parses produced:
math_domain = ArithmeticDomain()
math_grammar = math_domain.grammar()
# Here's where your work should go:
# Add rules to the grammar:
# Extend domain.ops appropriately:
# Make sure things are working:
display_examples(('three squared', 'minus three squared', 'four cubed'),
grammar=math_grammar, domain=math_domain)
Explanation: Question 1
Your task is to extend ArithmeticDomain to include the unary operators squared and cubed,
which form expressions like three squared and nine cubed. The following code will help
you get started on this. Submit: your completion of this code.
End of explanation
from arithmetic import ArithmeticDomain
from parsing import Rule, add_rule
# Clear out the grammar; remove this if you want your question 1
# extension to combine with these extensions:
math_domain = ArithmeticDomain()
math_grammar = math_domain.grammar()
# Remember to add these rules to the grammar!
integer_rules = [
Rule('$I', 'one', 1),
Rule('$I', 'two', 2),
Rule('$I', 'three', 3),
Rule('$I', 'four', 4) ]
tens_rules = [
Rule('$T', 'one', 1),
Rule('$T', 'two', 2),
Rule('$T', 'three', 3),
Rule('$T', 'four', 4) ]
# Add the above rules to math_grammar:
# Add rules to the grammar for using the above:
# Extend domain.ops:
# Make sure things are working:
display_examples(('four point two', 'minus four point one', 'two minus four point one'),
grammar=math_grammar, domain=math_domain)
Explanation: Question 2
Your task is to extend ArithmeticDomain to support numbers with decimals, so
that you can parse all expressions of the form N point D where N and D both denote
ints. You can assume that both numbers are both spelled out as single words, as in
"one point twelve" rather than "one point one two". (The grammar fragment has only "one",
"two", "three", and "four" anyway.) Submit: your completion of the following code.
Important: your grammar should not create spurious parses like (two times three) point four.
This means that you can't treat point like the other binary operators in your syntactic grammar.
This will require you to add special rules to handle the internal structure of these decimal numbers.
End of explanation
from arithmetic import ArithmeticDomain
from parsing import Rule, add_rule
import numpy as np
math_domain = ArithmeticDomain()
math_grammar = math_domain.grammar()
# Add rules to the grammar:
# Extend domain.ops:
# Make sure things are working:
display_examples(('the one', 'the average of one and four'),
grammar=math_grammar, domain=math_domain)
Explanation: Question 3
Extend the grammar to support the multi-word expression the average of as
in the average of one and four.
Your solution is required to treat each word in this multi-word expression as its own lexical item.
Other than that, you have a lot of design freedom.
Your solution can be limited to the case where the conjunction consists of just two numbers.
Submit: your completion of this starter code.
End of explanation
from travel import TravelDomain
travel_domain = TravelDomain()
travel_grammar = travel_domain.grammar()
display_examples(
("flight from Boston to San Francisco",
"directions from New York to Philadelphia",
"directions New York to Philadelphia"),
grammar=travel_grammar,
domain=travel_domain)
Explanation: Travel domain
Here's an illustration of how parsing and interpretation work in this domain:
End of explanation
from travel import TravelDomain
from scoring import Model
from experiment import train_test
from travel_examples import travel_train_examples, travel_test_examples
from collections import defaultdict
travel_domain = TravelDomain()
travel_grammar = travel_domain.grammar()
def basic_feature_function(parse):
Features for the rule used for the root node and its children
features = defaultdict(float)
features[str(parse.rule)] += 1.0
for child in parse.children:
features[str(child.rule)] += 1.0
return features
# This code evaluates the current grammar:
train_test(
model=Model(grammar=travel_grammar, feature_fn=basic_feature_function),
train_examples=travel_train_examples,
test_examples=travel_test_examples,
print_examples=False)
Explanation: For these questions, we'll combine grammars with machine learning.
Here's how to train and evaluate a model using the grammar
that is included in TravelDomain along with a basic feature
function.
End of explanation
from travel import TravelDomain
from parsing import Rule, add_rule
from scoring import Model
from experiment import train_test
from travel_examples import travel_train_examples, travel_test_examples
travel_domain = TravelDomain()
travel_grammar = travel_domain.grammar()
# Add your rule here:
# This code evaluates the new grammar:
train_test(
model=Model(grammar=travel_grammar, feature_fn=basic_feature_function),
train_examples=travel_train_examples,
test_examples=travel_test_examples,
print_examples=False)
Explanation: Question 4
With the default travel grammar, many of the errors on training examples occur because the origin
isn't marked by "from". You might have noticed that "directions New York to Philadelphia"
is not handled properly in our opening example. Other examples include
"transatlantic cruise southampton to tampa",
"fly boston to myrtle beach spirit airlines", and
"distance usa to peru". Your tasks: (i) extend the grammar with a single rule to handle examples
like these, and run another evaluation using this expanded grammar (submit your completion
of the following starter code); (ii) in 1–2 sentences,
summarize what happened to the post-training performance metrics when this rule was added.
End of explanation
from parsing import Parse
def expanded_feature_function(parse):
pass
# Evaluate the new grammar:
train_test(
model=Model(grammar=travel_grammar, feature_fn=expanded_feature_function),
train_examples=travel_train_examples,
test_examples=travel_test_examples,
print_examples=False)
Explanation: Question 5
Your extended grammar for question 4 likely did some harm to the space of parses we
consider. Consider how number of parses has changed. (If it's not intuitively clear why,
check out some of the parses to see what's happening!)
Your task: to try to make amends, expand the feature function to improve the
ability of the optimizer to distinguish good parses from bad. You can write your
own function and/or combine it with scoring functions that are available inside
SippyCup. You should be able to achieve a gain in post-training train and
test semantics accuracy. (Note: you should not spend hours continually improving
your score unless you are planning to develop this into a project. Any gain over
the previous run will suffice here.) Submit: your completion of this code.
End of explanation
from geoquery import GeoQueryDomain
geo_domain = GeoQueryDomain()
geo_grammar = geo_domain.grammar()
display_examples(
("what is the biggest city in california ?",
"how many people live in new york ?",
"where is rochester ?"),
grammar=geo_grammar,
domain=geo_domain)
Explanation: GeoQuery domain
Here are a few simple examples from the GeoQuery domain:
End of explanation
from geoquery import GeoQueryDomain
from scoring import Model
from experiment import train_test
geo_domain = GeoQueryDomain()
geo_grammar = geo_domain.grammar()
# We'll use this as our generic assessment interface for these questions:
def special_geo_evaluate(grammar=None, feature_fn=geo_domain.features):
# Build the model by hand so that we can see all the pieces:
geo_mod = Model(
grammar=grammar,
feature_fn=feature_fn,
weights=geo_domain.weights(),
executor=geo_domain.execute)
# This can be done with less fuss using experiment.train_test_for_domain,
# but we want full access to the model, metrics, etc.
train_test(
model=geo_mod,
train_examples=geo_domain.train_examples(),
test_examples=geo_domain.test_examples(),
metrics=geo_domain.metrics(),
training_metric=geo_domain.training_metric(),
seed=0,
print_examples=False)
special_geo_evaluate(grammar=geo_grammar)
Explanation: And we can train models just as we did for the travel domain, though we have to be more
attentive to special features of semantic parsing in this domain. Here's a run with the
default scoring model, metrics, etc.:
End of explanation
from geoquery import GeoQueryDomain
from parsing import Rule, add_rule
geo_domain = GeoQueryDomain()
geo_grammar = geo_domain.grammar()
# Your rules go here:
# Evaluation of the new grammar:
special_geo_evaluate(grammar=geo_grammar)
Explanation: Question 6
Two deficiencies of the current grammar:
The words "where" and "is" are treated as being of category $Optional, which means they are ignored. As a result, the grammar construes all questions of the form "where is X" as being about the identity of X!
Queries like "how many people live in Florida" are not handled correctly.
Your task: Add grammar rules that address these problems and assess impact of the changes
using the train_test based interface illustrated above.
Submit: your expanded version of the starter code below.
End of explanation
from geoquery import GeoQueryDomain
def feature_function(parse):
# Bring in all the default features:
f = geo_domain.features(parse)
# Extend dictionary f with your new denotation-count feature
return f
# Evaluation of the new grammar:
special_geo_evaluate(grammar=geo_grammar, feature_fn=feature_function)
Explanation: Question 7
The success of the empty_denotation feature demonstrates the potential of denotation features. Can we go further? Experiment with a feature or features that characterize the size of the denotation (that is, the number of answers). This will involve extending geo_domain.features and running another assessment. Submit: your completion of the
code below and 1–2 sentences saying how this feature seems to behave in the model.
End of explanation |
12,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec
Learning Objectives
Compile all steps into one function
Prepare training data for Word2Vec
Model and Training
Embedding lookup and analysis
Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note
Step1: Please check your tensorflow version using the cell below.
Step2: Vectorize an example sentence
Consider the following sentence
Step3: Create a vocabulary to save mappings from tokens to integer indices.
Step4: Create an inverse vocabulary to save mappings from integer indices to tokens.
Step5: Vectorize your sentence.
Step6: Generate skip-grams from one sentence
The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for Word2Vec. You can use the tf.keras.preprocessing.sequence.skipgrams to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).
Note
Step7: Take a look at few positive skip-grams.
Step8: Negative sampling for one skip-gram
The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns number of negative samples for a given target word in a window. You can call the funtion on one skip-grams's target word and pass the context word as true class to exclude it from being sampled.
Key point
Step9: Construct one training example
For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labelled as 1) and negative samples (labelled as 0) for each target word.
Step10: Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
Step11: A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)
Step12: Summary
This picture summarizes the procedure of generating training example from a sentence.
Lab Task 1
Step13: sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
Key point
Step14: Lab Task 2
Step15: Read text from the file and take a look at the first few lines.
Step16: Use the non empty lines to construct a tf.data.TextLineDataset object for next steps.
Step17: Vectorize sentences from the corpus
You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text Classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.
Step18: Call adapt on the text dataset to create vocabulary.
Step19: Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with get_vocabulary(). This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
Step20: The vectorize_layer can now be used to generate vectors for each element in the text_ds.
Step21: Obtain sequences from the dataset
You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note
Step22: Take a look at few examples from sequences.
Step23: Generate training examples from sequences
sequences is now a list of int encoded sentences. Just call the generate_training_data() function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. Length of target, contexts and labels should be same, representing the total number of training examples.
Step24: Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your Word2Vec model!
Step25: Add cache() and prefetch() to improve performance.
Step26: Lab Task 3
Step27: Define loss function and compile model
For simplicity, you can use tf.keras.losses.CategoricalCrossEntropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows
Step28: Also define a callback to log training statistics for tensorboard.
Step29: Train the model with dataset prepared above for some number of epochs.
Step30: Tensorboard now shows the Word2Vec model's accuracy and loss.
Step31: Run the following command in Cloud Shell
Step32: Create and save the vectors and metadata file.
Step33: Download the vectors.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector. | Python Code:
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
Explanation: Word2Vec
Learning Objectives
Compile all steps into one function
Prepare training data for Word2Vec
Model and Training
Embedding lookup and analysis
Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality. It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
Continuous Bag-of-Words Model which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
Continuous Skip-gram Model which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the TensorFlow Embedding Projector.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of (target_word, context_word) where context_word appears in the neighboring context of target_word.
Consider the following sentence of 8 words.
The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a target_word that can be considered context word. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of n implies n words on each side with a total window span of 2*n+1 words across a word.
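For instance, with window_size = 2 the target word road from the sentence above has the context words the, wide, shimmered and in, giving the positive skip-gram pairs (road, the), (road, wide), (road, shimmered) and (road, in).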
The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>, the objective can be written as the average log probability
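Written out (the equation itself is rendered as an image in the original notebook; this is the standard form from the papers cited above):
$$\frac{1}{T}\sum_{t=1}^{T} \; \sum_{-c \leq j \leq c,\, j \neq 0} \log p\left(w_{t+j} \mid w_{t}\right)$$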
where c is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.
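Again writing out the referenced (image) equation in its standard form:
$$p\left(w_{O} \mid w_{I}\right) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)}$$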
where v and v<sup>'</sup> are the target and context vector representations of words and W is the vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary words which is often large (10<sup>5</sup>-10<sup>7</sup>) terms.
The Noise Contrastive Estimation loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be simplified to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from num_ns negative samples drawn from noise distribution P<sub>n</sub>(w) of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and num_ns negative samples.
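In equation form (the standard negative sampling objective from Mikolov et al., with k playing the role of num_ns here):
$$\log \sigma\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\left[\log \sigma\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right]$$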
A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the window_size neighborhood of the target_word. For the example sentence, these are few potential negative samples (when window_size is 2).
(hot, shimmered)
(wide, hot)
(wide, sun)
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
Setup
End of explanation
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
Explanation: Please check your tensorflow version using the cell below.
End of explanation
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
Explanation: Vectorize an example sentence
Consider the following sentence:
The wide road shimmered in the hot sun.
Tokenize the sentence:
End of explanation
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab)
Explanation: Create a vocabulary to save mappings from tokens to integer indices.
End of explanation
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
Explanation: Create an inverse vocabulary to save mappings from integer indices to tokens.
End of explanation
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
Explanation: Vectorize your sentence.
End of explanation
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams))
Explanation: Generate skip-grams from one sentence
The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for Word2Vec. You can use the tf.keras.preprocessing.sequence.skipgrams to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).
Note: negative_samples is set to 0 here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
End of explanation
for target, context in positive_skip_grams[:5]:
print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
Explanation: Take a look at a few positive skip-grams.
End of explanation
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size]
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
Explanation: Negative sampling for one skip-gram
The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns number of negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as true class to exclude it from being sampled.
Key point: num_ns (number of negative samples per positive context word) between [5, 20] is shown to work best for smaller datasets, while num_ns between [2,5] suffices for larger datasets.
End of explanation
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
Explanation: Construct one training example
For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labelled as 1) and negative samples (labelled as 0) for each target word.
End of explanation
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
Explanation: Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
End of explanation
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
Explanation: A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)
End of explanation
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
Explanation: Summary
This picture summarizes the procedure of generating training example from a sentence.
Lab Task 1: Compile all steps into one function
Skip-gram Sampling table
A large dataset means a larger vocabulary with a higher number of more frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as the, is, on) don't add much useful information for the model to learn from. Mikolov et al. suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The tf.keras.preprocessing.sequence.skipgrams function accepts a sampling table argument to encode probabilities of sampling any token. You can use tf.keras.preprocessing.sequence.make_sampling_table to generate a word-frequency rank based probabilistic sampling table and pass it to the skipgrams function. Take a look at the sampling probabilities for a vocab_size of 10.
End of explanation
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
# Elements of each training example are appended to these lists.
targets, contexts, labels = [], [], []
# Build the sampling table for vocab_size tokens.
# TODO 1a
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)
# Iterate over all sequences (sentences) in dataset.
for sequence in tqdm.tqdm(sequences):
# Generate positive skip-gram pairs for a sequence (sentence).
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocab_size,
sampling_table=sampling_table,
window_size=window_size,
negative_samples=0)
# Iterate over each positive skip-gram pair to produce training examples
# with positive context word and negative samples.
# TODO 1b
for target_word, context_word in positive_skip_grams:
context_class = tf.expand_dims(
tf.constant([context_word], dtype="int64"), 1)
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class,
num_true=1,
num_sampled=num_ns,
unique=True,
range_max=vocab_size,
seed=SEED,
name="negative_sampling")
# Build context and label vectors (for one target word)
negative_sampling_candidates = tf.expand_dims(
negative_sampling_candidates, 1)
context = tf.concat([context_class, negative_sampling_candidates], 0)
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Append each element from the training example to global lists.
targets.append(target_word)
contexts.append(context)
labels.append(label)
return targets, contexts, labels
Explanation: sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
Key point: The tf.random.log_uniform_candidate_sampler already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using these distribution weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
End of explanation
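To make the Zipf-style sampling idea above concrete, here is a small illustrative sketch (an addition, not part of the original lab): it approximates word frequency by rank and applies the word2vec subsampling rule. The exact constants and the rank-to-frequency approximation are assumptions on my part, so expect the numbers to differ somewhat from make_sampling_table's output.
import numpy as np
# Rough Zipf-style frequency proxy for the 10 most common ranks, followed by the
# word2vec subsampling rule min(1, sqrt(f/s)/(f/s)) with sampling factor s.
sampling_factor = 1e-3
ranks = np.arange(1, 11)
approx_freqs = 1.0 / (ranks * np.log(ranks + 1))
keep_probs = np.minimum(1.0, np.sqrt(approx_freqs / sampling_factor) / (approx_freqs / sampling_factor))
print(keep_probs)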
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
End of explanation
with open(path_to_file) as f:
lines = f.read().splitlines()
for line in lines[:20]:
print(line)
Explanation: Read text from the file and take a look at the first few lines.
End of explanation
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
Explanation: Use the non-empty lines to construct a tf.data.TextLineDataset object for the next steps.
End of explanation
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
return tf.strings.regex_replace(lowercase,
'[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length length to pad all samples to same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
Explanation: Vectorize sentences from the corpus
You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text Classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.
End of explanation
vectorize_layer.adapt(text_ds.batch(1024))
Explanation: Call adapt on the text dataset to create vocabulary.
End of explanation
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
Explanation: Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with get_vocabulary(). This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
End of explanation
def vectorize_text(text):
text = tf.expand_dims(text, -1)
return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
Explanation: The vectorize_layer can now be used to generate vectors for each element in the text_ds.
End of explanation
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
Explanation: Obtain sequences from the dataset
You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the generate_training_data() defined earlier uses non-TF python/numpy functions, you could also use a tf.py_function or tf.numpy_function with tf.data.Dataset.map().
End of explanation
for seq in sequences[:5]:
print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
Explanation: Take a look at a few examples from sequences.
End of explanation
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
print(len(targets), len(contexts), len(labels))
Explanation: Generate training examples from sequences
sequences is now a list of int encoded sentences. Just call the generate_training_data() function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts and labels should be the same, representing the total number of training examples.
End of explanation
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
Explanation: Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your Word2Vec model!
End of explanation
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
Explanation: Add cache() and prefetch() to improve performance.
End of explanation
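As a quick check (an illustrative addition, not part of the lab), you can inspect the structure of the batched elements before building the model:
print(dataset.element_spec)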
class Word2Vec(Model):
def __init__(self, vocab_size, embedding_dim):
super(Word2Vec, self).__init__()
self.target_embedding = Embedding(vocab_size,
embedding_dim,
input_length=1,
name="w2v_embedding", )
self.context_embedding = Embedding(vocab_size,
embedding_dim,
input_length=num_ns+1)
self.dots = Dot(axes=(3,2))
self.flatten = Flatten()
def call(self, pair):
target, context = pair
we = self.target_embedding(target)
ce = self.context_embedding(context)
dots = self.dots([ce, we])
return self.flatten(dots)
Explanation: Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
Subclassed Word2Vec Model
Use the Keras Subclassing API to define your Word2Vec model with the following layers:
target_embedding: A tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are (vocab_size * embedding_dim).
context_embedding: Another tf.keras.layers.Embedding layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in target_embedding, i.e. (vocab_size * embedding_dim).
dots: A tf.keras.layers.Dot layer that computes the dot product of target and context embeddings from a training pair.
flatten: A tf.keras.layers.Flatten layer to flatten the results of dots layer into logits.
With the subclassed model, you can define the call() function that accepts (target, context) pairs, which can then be passed into their corresponding embedding layers. Reshape the context_embedding to perform a dot product with target_embedding and return the flattened result.
Key point: The target_embedding and context_embedding layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
End of explanation
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Define loss function and compile model
For simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
python
def custom_loss(x_logit, y_true):
return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the tf.keras.optimizers.Adam optimizer.
End of explanation
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
Explanation: Also define a callback to log training statistics for tensorboard.
End of explanation
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
Explanation: Train the model on the dataset prepared above for a number of epochs.
End of explanation
!tensorboard --bind_all --port=8081 --load_fast=false --logdir logs
Explanation: Tensorboard now shows the Word2Vec model's accuracy and loss.
End of explanation
# TODO 4a
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
Explanation: Run the following command in Cloud Shell:
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id>.
In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.
To quit the TensorBoard, click Kernel > Interrupt kernel.
Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.
End of explanation
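As a follow-up to the earlier key point about sharing or combining the two embedding layers, the sketch below (an illustrative assumption, not part of the original lab) averages the target and context embedding matrices into a single table; combined_weights is a name introduced here.
# Illustrative sketch: combine target and context embeddings by averaging them.
context_weights = word2vec.context_embedding.get_weights()[0]
combined_weights = (weights + context_weights) / 2.0  # shape: (vocab_size, embedding_dim)
print(combined_weights.shape)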
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
Explanation: Create and save the vectors and metadata file.
End of explanation
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
Explanation: Download the vectors.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector.
End of explanation |
12,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The orange slope is the most important one: you expect a slope of around -1 unless you work with cells that are badly synchronized or have a weird karyotype.
resolution = size of the bins
Step1: On the Y axis we have the number of reads per 500 kb bin
This reflects different copy numbers; for example, the last part of chromosome 2 has fewer copies than the rest, hence the lower number of reads.
Big peaks correspond to PCR duplicates | Python Code:
from pytadbit.mapping.analyze import plot_genomic_distribution
plot_genomic_distribution("results/HindIII/03_filtering/reads12.tsv", resolution=500000, show=True)
Explanation: The orange slope is the most important one: you expect a slope of around -1 unless you work with cells that are badly synchronized or have a weird karyotype.
resolution = size of the bins
End of explanation
from pytadbit.mapping.analyze import hic_map
hic_map("results/HindIII/03_filtering/reads12.tsv", resolution=1000000, show=True)
from pytadbit.mapping.analyze import insert_sizes
insert_sizes("results/HindIII/03_filtering/reads12.tsv", show=True, nreads=100000)
from pytadbit.mapping.filter import filter_reads
filter_reads("results/HindIII/03_filtering/reads12.tsv", max_molecule_length=750, min_dist_to_re=500)
masked = {1: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_self-circle.tsv',
'name': 'self-circle',
'reads': 37383},
2: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_dangling-end.tsv',
'name': 'dangling-end',
'reads': 660146},
3: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_error.tsv',
'name': 'error',
'reads': 37395},
4: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_extra_dangling-end.tsv',
'name': 'extra dangling-end',
'reads': 3773498},
5: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_close_from_RES.tsv',
'name': 'too close from RES',
'reads': 3277369},
6: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_short.tsv',
'name': 'too short',
'reads': 296853},
7: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_too_large.tsv',
'name': 'too large',
'reads': 1843},
8: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_over-represented.tsv',
'name': 'over-represented',
'reads': 411157},
9: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_duplicated.tsv',
'name': 'duplicated',
'reads': 324490},
10: {'fnam': 'results/HindIII/03_filtering/reads12.tsv_random_breaks.tsv',
'name': 'random breaks',
'reads': 968492}}
from pytadbit.mapping.filter import apply_filter
apply_filter ("results/HindIII/03_filtering/reads12.tsv",
"results/HindIII/03_filtering/reads12_valid.tsv",
masked, filters=[1, 2, 3, 4, 9, 10])
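# Illustrative addition (not in the original notebook): summarize how many read
# pairs each filter flagged, using the `masked` dictionary defined above, before
# deciding which filters to pass to apply_filter.
for idx, info in sorted(masked.items()):
    print("%2d %-22s %10d reads" % (idx, info['name'], info['reads']))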
Explanation: On the Y axis we have the number of reads per 500 kb bin
This reflects different copy numbers; for example, the last part of chromosome 2 has fewer copies than the rest, hence the lower number of reads.
Big peaks correspond to PCR duplicates
End of explanation |
12,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Init config
Select appropriate
Step1: Count the number of tweets per day for every news item, calculate cumulative diffusion
Step2: Plot diffusion for every day for all news together
Step3: Plot cumulative diffusion of all news together
Step4: Plot cumulative diffusion for every news headline
Step5: Average diffusion per day for all news
Step6: The same graph but in logarithmic scale
Step7: Calculate and plot standard deviation
Step8: Calculate and plot share of values inside one standard deviation for every day
Step9: Store average diffusion data on the hard drive for use by another Jupyter notebook
Step10: Plot average diffusion for both real and fake news on one graph
Step11: In logarithmic scale
Step12: Calculate average diffusion duration (number of days until diffusion is dead) | Python Code:
client = pymongo.MongoClient("46.101.236.181")
db = client.allfake
# get collection names
collections = sorted([collection for collection in db.collection_names()])
Explanation: Init config
Select appropriate:
- database server (line 1): give pymongo.MongoClient() an appropriate parameter, else it is localhost
- database (line 2): either client.databasename or client.['databasename']
End of explanation
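For reference, the note above says that pymongo.MongoClient() falls back to localhost when given no argument; a minimal local-database variant of the init cell (an illustrative sketch, not part of the original analysis) would be:
import pymongo
client = pymongo.MongoClient()               # no argument: defaults to localhost:27017
db = client["allfake"]                       # equivalent to client.allfake
collections = sorted(db.collection_names())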
day = {} # number of tweets per day per collection
diff = {} # cumulative diffusion per day per collection
for collection in collections:
# timeframe
relevant_from = db[collection].find().sort("timestamp", pymongo.ASCENDING).limit(1)[0]['timestamp']
relevant_till = db[collection].find().sort("timestamp", pymongo.DESCENDING).limit(1)[0]['timestamp']
i = 0
day[collection] = [] # number of tweets for every collection for every day
diff[collection] = [] # cummulative diffusion for every collection for every day
averagediff = [] # average diffusion speed for every day for all news
d = relevant_from
delta = datetime.timedelta(days=1)
while d <= relevant_till:
# tweets per day per collection
day[collection].append(db[collection].find({"timestamp":{"$gte": d, "$lt": d + delta}}).count())
# cummulative diffusion per day per collection
if i == 0:
diff[collection].append( day[collection][i] )
else:
diff[collection].append( diff[collection][i-1] + day[collection][i] )
d += delta
i += 1
Explanation: Count the number of tweets per day for every news item, calculate cumulative diffusion
End of explanation
# the longest duration of diffusion among all news headlines
max_days = max([len(day[coll]) for coll in \
[days_col for days_col in day] ])
summ_of_diffusions = [0] * max_days # summary diffusion for every day
# calculate summary diffusion for every day
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection, add its number of tweets to the number of this day
if d < len(day[c]):
summ_of_diffusions[d] += day[c][d]
plt.step(range(len(summ_of_diffusions)),summ_of_diffusions, 'g')
plt.xlabel('Day')
plt.ylabel('Number of tweets')
plt.title('Diffusion of all real news together')
plt.show()
Explanation: Plot diffusion for every day for all news together
End of explanation
summ_of_diffusions_cumulative = [0] * max_days #
summ_of_diffusions_cumulative[0] = summ_of_diffusions[0]
for d in range(1, max_days):
summ_of_diffusions_cumulative[d] += summ_of_diffusions_cumulative[d-1] + summ_of_diffusions[d]
plt.step(range(len(summ_of_diffusions_cumulative)),summ_of_diffusions_cumulative, 'g')
plt.xlabel('Day')
plt.ylabel('Cummulative number of tweets')
plt.title('Cummulative diffusion of all real news together')
plt.show()
Explanation: Plot cumulative diffusion of all news together
End of explanation
for collection in collections:
plt.step([d+1 for d in range(len(diff[collection]))], diff[collection])
plt.xlabel('Day')
plt.ylabel('Cummulative number of tweets')
plt.title('Cumulative diffusion of real news headlines')
plt.show()
Explanation: Plot cumulative diffusion for every news headline
End of explanation
averagediff = [0 for _ in range(max_days)] # average diffusion for every day
for collection in collections:
for i,d in enumerate(day[collection]):
averagediff[i] += d / len(collections)
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'g')
plt.title('Average diffusion of real news')
plt.show()
Explanation: Average diffusion per day for all news
End of explanation
plt.ylabel('Average number of tweets')
plt.xlabel('Day')
plt.yscale('log')
plt.step(range(1,len(averagediff)+1),averagediff, 'g')
plt.show()
Explanation: The same graph but in logarithmic scale
End of explanation
avgdiff_std = [0 for _ in range(max_days)] # standard deviation for every day for all collections
number_tweets = [[] for _ in range(max_days)] # number of tweets for every day for every collection
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection
if d < len(day[c]):
            # add the number of tweets for this day for this collection to number_tweets for this day
number_tweets[d].append(day[c][d])
# calculate standard deviation for this day
avgdiff_std[d] = np.std(number_tweets[d])
plt.ylabel('Standard deviation of the average number of tweets per day')
plt.xlabel('Day')
plt.step(range(1,len(avgdiff_std)+1),avgdiff_std, 'g')
plt.title('Standard deviation for real news average')
plt.show()
Explanation: Calculate and plot standard deviation
End of explanation
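A possible follow-up plot (an illustrative addition, not in the original notebook) overlays the mean plus/minus one standard deviation band on the average-diffusion curve:
days = range(1, len(averagediff) + 1)
plt.fill_between(days,
                 [m - s for m, s in zip(averagediff, avgdiff_std)],
                 [m + s for m, s in zip(averagediff, avgdiff_std)],
                 color='g', alpha=0.2)
plt.step(days, averagediff, 'g')
plt.xlabel('Day')
plt.ylabel('Number of tweets')
plt.title('Average diffusion of real news with one-standard-deviation band')
plt.show()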
inside_std = [0 for _ in range(max_days)] # number of values inside one standard deviation for every day
inside_std_share = [0 for _ in range(max_days)] # share of values inside one standard deviation for every day
for d in range(max_days):
for c in collections:
# set borders of mean plusminus one std
lowest = averagediff[d] - avgdiff_std[d]
highest = averagediff[d] + avgdiff_std[d]
        # if there is an entry for this day for this collection and its value is inside the borders
if d < len(day[c]) and (day[c][d] >= lowest and day[c][d] <= highest):
# increment number of values inside one std for this day
inside_std[d] += 1
# calculate the share of values inside one std for this day
inside_std_share[d] = inside_std[d] / float(len(number_tweets[d]))
plt.ylabel('Percent of values in 1 std from average')
plt.xlabel('Day')
plt.scatter(range(1,len(inside_std_share)+1),inside_std_share, c='g')
plt.title('Percentage of values inside the range\n of one standard deviation from mean for real news')
plt.show()
Explanation: Calculate and plot share of values inside one standard deviation for every day
End of explanation
averagediff_real = averagediff
%store averagediff_real
Explanation: Store average diffusion data on the hard drive for use by another Jupyter notebook
End of explanation
# from hard drive, load data for average diffusion of fake news
%store -r averagediff_fake
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'g', label="real news")
plt.step(range(1,len(averagediff_fake)+1),averagediff_fake, 'r', label="fake news")
plt.legend()
plt.title('Average diffusion for both types of news')
plt.show()
Explanation: Plot average diffusion for both real and fake news on one graph
End of explanation
plt.ylabel('Average number of tweets')
plt.xlabel('Day')
plt.yscale('log')
plt.step(range(1,len(averagediff_fake)+1),averagediff_fake, 'r', range(1,len(averagediff)+1),averagediff, 'g')
plt.show()
Explanation: In logarithmic scale
End of explanation
diffDurationAvg = 0; # average duration of diffusion
durations = [len(day[col]) for col in collections] # all durations
diffDurationAvg = np.mean(durations) # mean duration
diffDurationAvg_std = np.std(durations) # standard deviation for the mean
print "Average diffusion duration: %.2f days" % diffDurationAvg
print "Standard deviation: %.2f days" % diffDurationAvg_std
Explanation: Calculate average diffusion duration (number of days until diffusion is dead)
End of explanation |
12,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Datasets and Compute Options
Step3: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
Step4: Now we add compute options for the 'legacy' backend.
Step5: And set the two RV datasets to use the correct methods (for both compute options)
Step6: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step7: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
Step8: Let's also disable other special effects such as heating, gravity, and light-time effects.
Step9: Finally, let's compute all of our models
Step10: Plotting
Light Curve
Step11: Now let's plot the residuals between these two models
Step12: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
Step13: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
Step14: Numerical (flux-weighted) RVs | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually build the python bindings in the phoebe-py directory.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.7
b['requiv@secondary'] = 0.7
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')
Explanation: Adding Datasets and Compute Options
End of explanation
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')
Explanation: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
End of explanation
b.add_compute('legacy', compute='phoebe1', irrad_method='none')
Explanation: Now we add compute options for the 'legacy' backend.
End of explanation
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
Explanation: And set the two RV datasets to use the correct methods (for both compute options)
End of explanation
b.set_value_all('atm', 'extern_planckint')
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
b.set_value_all('gridsize', 30)
Explanation: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
End of explanation
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
Explanation: Let's also disable other special effects such as heating, gravity, and light-time effects.
End of explanation
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
Explanation: Finally, let's compute all of our models
End of explanation
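As a quick sanity check before plotting (an illustrative addition; b.models listing the model tags is my recollection of the bundle API, so treat it as an assumption):
print(b.models)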
colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}
afig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)
Explanation: Plotting
Light Curve
End of explanation
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-0.003, 0.003)
Explanation: Now let's plot the residuals between these two models
End of explanation
afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)
Explanation: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
End of explanation
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1.5e-12, 1.5e-12)
Explanation: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
End of explanation
afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
Explanation: Numerical (flux-weighted) RVs
End of explanation |
12,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
$$
\renewcommand{\like}{{\cal L}}
\renewcommand{\loglike}{{\ell}}
\renewcommand{\err}{{\cal E}}
\renewcommand{\dat}{{\cal D}}
\renewcommand{\hyp}{{\cal H}}
\renewcommand{\Ex}[2]{E_{#1}[#2]}
\renewcommand{\x}{{\mathbf x}}
\renewcommand{\v}[1]{{\mathbf #1}}
$$
Step1: A Motivating Example Using sklearn
Step2: First, we try a basic Logistic Regression
Step3: Tuning the Model
We use the following cv_score function to perform K-fold cross-validation and apply a scoring function to each test fold. In this incarnation we use accuracy score as the default scoring function.
Step4: Below is an example of using the cv_score function for a basic logistic regression model without regularization.
Step5: Black Box Grid Search in sklearn
Step6: A Walkthrough of the Math Behind Logistic Regression
Step7: Logistic Regression
Step8: Discriminative vs Generative Classifier | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
import sklearn.cross_validation
c0=sns.color_palette()[0]
c1=sns.color_palette()[1]
c2=sns.color_palette()[2]
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light,
cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold,
s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold,
alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light,
cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False,
colorscale=colorscale, cdiscrete=cdiscrete,
psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
Explanation: Classification
$$
\renewcommand{\like}{{\cal L}}
\renewcommand{\loglike}{{\ell}}
\renewcommand{\err}{{\cal E}}
\renewcommand{\dat}{{\cal D}}
\renewcommand{\hyp}{{\cal H}}
\renewcommand{\Ex}[2]{E_{#1}[#2]}
\renewcommand{\x}{{\mathbf x}}
\renewcommand{\v}[1]{{\mathbf #1}}
$$
End of explanation
dflog = pd.read_csv("data/01_heights_weights_genders.csv")
dflog.head()
# Checkup Exercise Set I:
# Create a scatter plot of Weight vs. Height
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.scatter(dflog.Weight, dflog.Height)
plt.xlabel("Weight")
plt.ylabel("Height")
plt.title("Relationship between Weight and Height")
# Checkup Exercise Set I:
# Color the points differently by Gender.
plt.scatter(dflog.Weight, dflog.Height, c=['Blue' if gender == 'Male' else 'Pink' for gender in dflog['Gender']])
Explanation: A Motivating Example Using sklearn: Heights and Weights
We'll use a dataset of heights and weights of males and females to hone our understanding of classifiers. We load the data into a dataframe and plot it.
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Split the data into a training and test set.
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender == "Male").values,random_state=5)
clf = LogisticRegression()
# Fit the model on the training data.
clf.fit(Xlr, ylr)
# Print the accuracy score of the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
Explanation: First, we try a basic Logistic Regression:
Split the data into a training and test (hold-out) set
Train on the training set, and test for accuracy on the testing set
End of explanation
from sklearn.model_selection import KFold  # provides the KFold(n_splits).split(X) API used below
from sklearn.metrics import accuracy_score
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
for train, test in KFold(nfold).split(x): # split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
Explanation: Tuning the Model
We use the following cv_score function to perform K-fold cross-validation and apply a scoring function to each test fold. In this incarnation we use accuracy score as the default scoring function.
End of explanation
# First, we try a basic Logistic Regression:
# Split the data into a training and test set
# Train on the training set, and test for accuracy on the testing set
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender=="Male").values,random_state=5)
clf = LogisticRegression()
clf.fit(Xlr,ylr)
#the grid of parameters to search over
Cs = [0.001, 0.1, 1, 10, 100]
# your turn
# Checkup Exercise Set II
# Find the best model parameters based only on training set.
# For each C:
# 1) Create a logistic regression model with that value of C
# 2) Find the average score for this model using the cv_score
# function only on the training set (Xlr, ylr)
# 3) Pick the C with the highest average score
results = []
max_score = 0
for c in Cs:
clf = LogisticRegression(C=c)
clf.fit(Xlr, ylr)
    score = cv_score(clf, Xlr, ylr)
print("For c value of %f score is: %f" % (c, score))
if (score > max_score):
max_score = score
best_c = c
print("Best c value is: %f with a score of: %f" % (best_c, max_score))
# your turn
# Checkup Exercise Set III
# 1) Use the C obtained from procedure above and train
# a Logistic Regression on the training data.
clf = LogisticRegression(C=best_c)
clf.fit(Xlr, ylr)
# 2) Calculate the accuracy on the test data.
accuracy = accuracy_score(clf.predict(Xtestlr), ytestlr)
print(accuracy)
Explanation: Below is an example of using the cv_score function for a basic logistic regression model without regularization.
End of explanation
# your turn
# Checkup Exercise Set IV
# 1) Use GridSearchCV to find the best model over the training set
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
# GridSearchCV(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score='raise')
param_grid = {'C': [0.001, 0.1, 1, 10, 100]}
clf = GridSearchCV(LogisticRegression(C=1), param_grid=param_grid)
clf.fit(Xlr, ylr)
print("Grid scores:")
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print("Accuracy is: %f" % accuracy_score(clf.predict(Xtestlr), ytestlr))
print()
print("Classification report:")
print()
print("The best model of training set based on GridSearchCV:")
print()
y_true, y_pred = ytestlr, clf.predict(Xtestlr)
print(classification_report(y_true, y_pred))
Explanation: Black Box Grid Search in sklearn
End of explanation
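A small illustrative follow-up (not part of the exercise): the fitted GridSearchCV object also exposes the selected hyperparameters and its best cross-validation score directly.
print("Best parameters: %r" % clf.best_params_)
print("Best cross-validation score: %0.3f" % clf.best_score_)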
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
gs = sklearn.grid_search.GridSearchCV(clf, param_grid=parameters, cv=n_folds)
gs.fit(Xtrain, ytrain)
print("BEST PARAMS", gs.best_params_)
best = gs.best_estimator_
return best
from sklearn.cross_validation import train_test_split
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
subdf=indf[featurenames]
if standardize:
subdfstd=(subdf - subdf.mean())/subdf.std()
else:
subdfstd=subdf
X=subdfstd.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
clf = cv_optimize(clf, parameters, Xtrain, ytrain)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print("Accuracy on training data: {:0.2f}".format(training_accuracy))
print("Accuracy on test data: {:0.2f}".format(test_accuracy))
return clf, Xtrain, ytrain, Xtest, ytest
Explanation: A Walkthrough of the Math Behind Logistic Regression
End of explanation
h = lambda z: 1. / (1 + np.exp(-z))
zs=np.arange(-5, 5, 0.1)
plt.plot(zs, h(zs), alpha=0.5);
dflog.head()
clf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(),
{"C": [0.01, 0.1, 1, 10, 100]},
dflog, ['Weight', 'Height'], 'Gender','Male')
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);
clf_l.predict_proba(Xtest_l)
Explanation: Logistic Regression: The Math
End of explanation
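For reference, the h defined above is the logistic (sigmoid) function, and the classifier models the class probability through it:
$$ h(z) = \frac{1}{1+e^{-z}}, \qquad P(y=1 \mid \v{x}) = h(\v{w} \cdot \v{x} + b) $$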
plt.figure()
ax = plt.gca()
points_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);
Explanation: Discriminative vs Generative Classifier
End of explanation |
12,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer learning
In the previous exercise we introduced the TinyImageNet-100-A dataset, and combined a handful of pretrained models on this dataset to improve our classification performance.
In this exercise we will explore several ways to adapt one of these same pretrained models to the TinyImageNet-100-B dataset, which does not share any images or object classes with TinyImageNet-100-A. We will see that we can use a pretrained classifier together with a small amount of training data from TinyImageNet-100-B to achieve reasonable performance on the TinyImageNet-100-B validation set.
Step1: Load data and model
You should already have downloaded the TinyImageNet-100-A and TinyImageNet-100-B datasets along with the pretrained models. Run the cell below to load (a subset of) the TinyImageNet-100-B dataset and one of the models that was pretrained on TinyImageNet-100-A.
TinyImageNet-100-B contains 50,000 training images in total (500 per class for all 100 classes) but for this exercise we will use only 5,000 training images (50 per class on average).
Step2: TinyImageNet-100-B classes
In the previous assignment we printed out a list of all classes in TinyImageNet-100-A. We can do the same on TinyImageNet-100-B; if you compare with the list in the previous exercise you will see that there is no overlap between the classes in TinyImageNet-100-A and TinyImageNet-100-B.
Step3: Visualize Examples
Similar to the previous exercise, we can visualize examples from the TinyImageNet-100-B dataset. The images are similar to TinyImageNet-100-A, but the images and classes in the two datasets are disjoint.
Step4: Extract features
ConvNets tend to learn generalizable high-level image features. For the five layer ConvNet architecture, we will use the (rectified) activations of the first fully-connected layer as our high-level image features.
Open the file cs231n/classifiers/convnet.py and modify the five_layer_convnet function to return features when the extract_features flag is True. This should be VERY simple.
Once you have done that, fill in the cell below, which should use the pretrained model in the model variable to extract features from all images in the training and validation sets.
Step5: kNN with ConvNet features
A simple way to implement transfer learning is to use a k-nearest neighbor classifier. Instead of computing the distance between images using their pixel values as we did in Assignment 1, we will say that the distance between a pair of images is equal to the L2 distance between their feature vectors extracted using our pretrained ConvNet.
Implement this idea in the cell below. You can use the KNearestNeighbor class in the file cs231n/classifiers/k_nearest_neighbor.py.
Step6: Visualize neighbors
Recall that the kNN classifier computes the distance between all of its training instances and all of its test instances. We can use this distance matrix to help understand what the ConvNet features care about; specifically, we can select several random images from the validation set and visualize their nearest neighbors in the training set.
You will see that many times the nearest neighbors are quite far away from each other in pixel space; for example two images that show the same object from different perspectives may appear nearby in ConvNet feature space.
Since the following cell selects random validation images, you can run it several times to get different results.
Step7: Softmax on ConvNet features
Another way to implement transfer learning is to train a linear classifier on top of the features extracted from our pretrained ConvNet.
In the cell below, train a softmax classifier on the features extracted from the training set of TinyImageNet-100-B and use this classifier to predict on the validation set for TinyImageNet-100-B. You can use the Softmax class in the file cs231n/classifiers/linear_classifier.py.
Step8: Fine-tuning
We can improve our classification results on TinyImageNet-100-B further by fine-tuning our ConvNet. In other words, we will train a new ConvNet with the same architecture as our pretrained model, and use the weights of the pretrained model as an initialization to our new model.
Usually when fine-tuning you would re-initialize the weights of the final affine layer randomly, but in this case we will initialize the weights of the final affine layer using the weights of the trained softmax classifier from above.
In the cell below, use fine-tuning to improve your classification performance on TinyImageNet-100-B. You should be able to outperform the softmax classifier from above using fewer than 5 epochs over the training data.
You will need to adjust the learning rate and regularization to achieve good fine-tuning results. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Transfer learning
In the previous exercise we introduced the TinyImageNet-100-A dataset, and combined a handful of pretrained models on this dataset to improve our classification performance.
In this exercise we will explore several ways to adapt one of these same pretrained models to the TinyImageNet-100-B dataset, which does not share any images or object classes with TinyImageNet-100-A. We will see that we can use a pretrained classifier together with a small amount of training data from TinyImageNet-100-B to achieve reasonable performance on the TinyImageNet-100-B validation set.
End of explanation
# Load the TinyImageNet-100-B dataset
from cs231n.data_utils import load_tiny_imagenet, load_models
tiny_imagenet_b = 'cs231n/datasets/tiny-imagenet-100-B'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_b)
# Zero-mean the data
mean_img = np.mean(X_train, axis=0)
X_train -= mean_img
X_val -= mean_img
X_test -= mean_img
# We will use a subset of the TinyImageNet-B training data
mask = np.random.choice(X_train.shape[0], size=5000, replace=False)
X_train = X_train[mask]
y_train = y_train[mask]
# Load a pretrained model; it is a five layer convnet.
models_dir = 'cs231n/datasets/tiny-100-A-pretrained'
model = load_models(models_dir)['model1']
Explanation: Load data and model
You should already have downloaded the TinyImageNet-100-A and TinyImageNet-100-B datasets along with the pretrained models. Run the cell below to load (a subset of) the TinyImageNet-100-B dataset and one of the models that was pretrained on TinyImageNet-100-A.
TinyImageNet-100-B contains 50,000 training images in total (500 per class for all 100 classes) but for this exercise we will use only 5,000 training images (50 per class on average).
End of explanation
for names in class_names:
print ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-B classes
In the previous assignment we printed out a list of all classes in TinyImageNet-100-A. We can do the same on TinyImageNet-100-B; if you compare with the list in the previous exercise you will see that there is no overlap between the classes in TinyImageNet-100-A and TinyImageNet-100-B.
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(y_train == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = X_train[train_idx] + mean_img
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(class_names[class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Similar to the previous exercise, we can visualize examples from the TinyImageNet-100-B dataset. The images are similar to TinyImageNet-100-A, but the images and classes in the two datasets are disjoint.
End of explanation
from cs231n.classifiers.convnet import five_layer_convnet
# These should store extracted features for the training and validation sets
# respectively.
#
# More concretely, X_train_feats should be an array of shape
# (X_train.shape[0], 512) where X_train_feats[i] is the 512-dimensional
# feature vector extracted from X_train[i] using model.
#
# Similarly X_val_feats should have shape (X_val.shape[0], 512) and
# X_val_feats[i] should be the 512-dimensional feature vector extracted from
# X_val[i] using model.
X_train_feats = None
X_val_feats = None
# Use our pre-trained model to extract features on the subsampled training set
# and the validation set.
################################################################################
# TODO: Use the pretrained model to extract features for the training and #
# validation sets for TinyImageNet-100-B. #
# #
# HINT: Similar to computing probabilities in the previous exercise, you #
# should split the training and validation sets into small batches to avoid #
# using absurd amounts of memory. #
################################################################################
X_train_feats = five_layer_convnet(X_train, model, y=None, reg=0.0,
extract_features=True)
X_val_feats = five_layer_convnet(X_val, model, y=None, reg=0.0,
extract_features=True)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Extract features
ConvNets tend to learn generalizable high-level image features. For the five layer ConvNet architecture, we will use the (rectified) activations of the first fully-connected layer as our high-level image features.
Open the file cs231n/classifiers/convnet.py and modify the five_layer_convnet function to return features when the extract_features flag is True. This should be VERY simple.
Once you have done that, fill in the cell below, which should use the pretrained model in the model variable to extract features from all images in the training and validation sets.
End of explanation
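The solution above pushes the full arrays through the network in one call; a batched variant along the lines of the HINT (an illustrative sketch — the batch size, the np.vstack accumulation, and the X_train_feats_batched name are choices made here) might look like:
batch_size = 1000
feats = []
for start in xrange(0, X_train.shape[0], batch_size):
    batch = X_train[start:start + batch_size]
    feats.append(five_layer_convnet(batch, model, y=None, reg=0.0,
                                    extract_features=True))
X_train_feats_batched = np.vstack(feats)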
from cs231n.classifiers.k_nearest_neighbor import KNearestNeighbor
# Predicted labels for X_val using a k-nearest-neighbor classifier trained on
# the features extracted from X_train. knn_y_val_pred[i] = c indicates that
# the kNN classifier predicts that X_val[i] has label c.
knn_y_val_pred = None
################################################################################
# TODO: Use a k-nearest neighbor classifier to compute knn_y_val_pred. #
# You may need to experiment with k to get the best performance. #
################################################################################
knn = KNearestNeighbor()
knn.train(X_train_feats, y_train)
knn_y_val_pred = knn.predict(X_val_feats, k=25)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
print 'Validation set accuracy: %f' % np.mean(knn_y_val_pred == y_val)
Explanation: kNN with ConvNet features
A simple way to implement transfer learning is to use a k-nearest neighbor classifier. Instead of computing the distance between images using their pixel values as we did in Assignment 1, we will say that the distance between a pair of images is equal to the L2 distance between their feature vectors extracted using our pretrained ConvNet.
Implement this idea in the cell below. You can use the KNearestNeighbor class in the file cs231n/classifiers/k_nearest_neighbor.py.
End of explanation
dists = knn.compute_distances_no_loops(X_val_feats)
num_imgs = 5
neighbors_to_show = 6
query_idxs = np.random.randint(X_val.shape[0], size=num_imgs)
next_subplot = 1
first_row = True
for query_idx in query_idxs:
query_img = X_val[query_idx] + mean_img
query_img = query_img.transpose(1, 2, 0).astype('uint8')
plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
plt.imshow(query_img)
plt.gca().axis('off')
if first_row:
plt.title('query')
next_subplot += 1
o = np.argsort(dists[query_idx])
for i in xrange(neighbors_to_show):
img = X_train[o[i]] + mean_img
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(num_imgs, neighbors_to_show + 1, next_subplot)
plt.imshow(img)
plt.gca().axis('off')
if first_row:
plt.title('neighbor %d' % (i + 1))
next_subplot += 1
first_row = False
Explanation: Visualize neighbors
Recall that the kNN classifier computes the distance between all of its training instances and all of its test instances. We can use this distance matrix to help understand what the ConvNet features care about; specifically, we can select several random images from the validation set and visualize their nearest neighbors in the training set.
You will see that many times the nearest neighbors are quite far away from each other in pixel space; for example two images that show the same object from different perspectives may appear nearby in ConvNet feature space.
Since the following cell selects random validation images, you can run it several times to get different results.
End of explanation
from cs231n.classifiers.linear_classifier import Softmax
softmax_y_train_pred = None
softmax_y_val_pred = None
################################################################################
# TODO: Train a softmax classifier to predict a TinyImageNet-100-B class from #
# features extracted from our pretrained ConvNet. Use this classifier to make #
# predictions for the TinyImageNet-100-B training and validation sets, and #
# store them in softmax_y_train_pred and softmax_y_val_pred. #
# #
# You may need to experiment with number of iterations, regularization, and #
# learning rate in order to get good performance. The softmax classifier #
# should achieve a higher validation accuracy than the kNN classifier. #
################################################################################
softmax = Softmax()
# NOTE: the input X of softmax classifier if an array of shape D x N
softmax.train(X_train_feats.T, y_train,
learning_rate=1e-2, reg=1e-4, num_iters=1000)
y_train_pred = softmax.predict(X_train_feats.T)
y_val_pred = softmax.predict(X_val_feats.T)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
print y_val_pred.shape, y_train_pred.shape
train_acc = np.mean(y_train == y_train_pred)
val_acc = np.mean(y_val_pred == y_val)
print train_acc, val_acc
Explanation: Softmax on ConvNet features
Another way to implement transfer learning is to train a linear classifier on top of the features extracted from our pretrained ConvNet.
In the cell below, train a softmax classifier on the features extracted from the training set of TinyImageNet-100-B and use this classifier to predict on the validation set for TinyImageNet-100-B. You can use the Softmax class in the file cs231n/classifiers/linear_classifier.py.
End of explanation
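One hedged way to act on the "experiment with number of iterations, regularization, and learning rate" hint is a small validation sweep reusing the Softmax interface from the cell above; the candidate grids below are illustrative guesses, not tuned values.
# Sketch only: pick softmax hyperparameters by validation accuracy.
# The candidate grids are illustrative guesses, not tuned values.
best_val_acc = -1
best_softmax = None
for lr in [1e-3, 1e-2, 1e-1]:
    for reg in [1e-5, 1e-4, 1e-3]:
        clf = Softmax()
        clf.train(X_train_feats.T, y_train, learning_rate=lr, reg=reg, num_iters=1000)
        acc = np.mean(clf.predict(X_val_feats.T) == y_val)
        if acc > best_val_acc:
            best_val_acc, best_softmax = acc, clf
print 'best softmax validation accuracy: %f' % best_val_acc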
from cs231n.classifier_trainer import ClassifierTrainer
# Make a copy of the pretrained model
model_copy = {k: v.copy() for k, v in model.iteritems()}
# Initialize the weights of the last affine layer using the trained weights from
# the softmax classifier above
model_copy['W5'] = softmax.W.T.copy().astype(model_copy['W5'].dtype)
model_copy['b5'] = np.zeros_like(model_copy['b5'])
# Fine-tune the model. You will need to adjust the training parameters to get good results.
trainer = ClassifierTrainer()
learning_rate = 1e-4
reg = 1e-1
dropout = 0.5
num_epochs = 2
finetuned_model = trainer.train(X_train, y_train, X_val, y_val,
model_copy, five_layer_convnet,
learning_rate=learning_rate, reg=reg, update='rmsprop',
dropout=dropout, num_epochs=num_epochs, verbose=True)[0]
Explanation: Fine-tuning
We can improve our classification results on TinyImageNet-100-B further by fine-tuning our ConvNet. In other words, we will train a new ConvNet with the same architecture as our pretrained model, and use the weights of the pretrained model as an initialization to our new model.
Usually when fine-tuning you would re-initialize the weights of the final affine layer randomly, but in this case we will initialize the weights of the final affine layer using the weights of the trained softmax classifier from above.
In the cell below, use fine-tuning to improve your classification performance on TinyImageNet-100-B. You should be able to outperform the softmax classifier from above using fewer than 5 epochs over the training data.
You will need to adjust the learning rate and regularization to achieve good fine-tuning results.
End of explanation |
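A hedged sketch of how that learning-rate / regularization search might be organized, reusing the trainer call from the cell above; the candidate values are illustrative, each run is kept to one epoch, and the position of the validation-accuracy history in the returned tuple is an assumption about ClassifierTrainer.
# Sketch only: coarse grid over fine-tuning hyperparameters.
# Candidate values are illustrative; each run uses a single short epoch.
results = {}
for lr in [1e-5, 1e-4, 1e-3]:
    for reg in [1e-2, 1e-1, 1e0]:
        candidate = {k: v.copy() for k, v in model.iteritems()}
        candidate['W5'] = softmax.W.T.copy().astype(candidate['W5'].dtype)
        candidate['b5'] = np.zeros_like(candidate['b5'])
        out = trainer.train(X_train, y_train, X_val, y_val,
                            candidate, five_layer_convnet,
                            learning_rate=lr, reg=reg, update='rmsprop',
                            dropout=0.5, num_epochs=1, verbose=False)
        # Assumes the validation-accuracy history is the last element returned.
        results[(lr, reg)] = out[-1][-1]
print sorted(results.items(), key=lambda kv: -kv[1])[:3]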
12,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-AERCHEM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
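Pulled together, the three document-metadata calls above might look like the following; the name, e-mail address and status value are purely hypothetical placeholders, not the real document metadata.
# Hypothetical example values only - replace with the real document metadata.
DOC.set_author("Jane Doe", "jane.doe@example.org")
DOC.set_contributor("John Smith", "john.smith@example.org")
DOC.set_publication_status(1)  # 1 = publish once the document is complete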
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
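For a free-text STRING property such as this one, a single set_value call with the descriptive text is enough; the wording below is a hypothetical placeholder, not a description of the actual model.
# Hypothetical placeholder text - the real overview should describe the model.
DOC.set_value("Land surface component providing fluxes of energy and water to the atmosphere.")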
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
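For an ENUM property with cardinality 0.N, repeating DOC.set_value once per selected choice appears to be the intended pattern (note the "PROPERTY VALUE(S)" header in the cell above); which choices apply to this model is not known here, so the two selections below are hypothetical.
# Hypothetical selection - repeat DOC.set_value once per applicable flux.
DOC.set_value("water")
DOC.set_value("energy")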
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
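Numeric properties take a bare value rather than a quoted string; the number below is a hypothetical placeholder, not the model's actual land time step.
# Hypothetical value in seconds - replace with the model's actual land time step.
DOC.set_value(1800)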
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, specify the dependencies of the snow albedo calculation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
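# Illustrative sketch using one of the valid boolean choices above (assumed value):
# DOC.set_value(True)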
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
12,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: get email of author
compare to list of known persons of interest
return boolean if author is person of interest
aggregate count over all emails to person
Step2: Beware of BUGS!!!
When Katie was working on the Enron POI identifier, she engineered a feature that identified when a given person was on the same email as a POI. So for example, if Ken Lay and Katie Malone are both recipients of the same email message, then Katie Malone should have her "shared receipt" feature incremented. If she shares lots of emails with POIs, maybe she's a POI herself.
Here's the problem
Step3: This is an iterative process
- start off with a pared-down version of the dataset
- run a decision tree on it
- get the accuracy, should be rather high
- get the important features defined by coefs over 0.2
- remove those features
- run again until very fews have 0.2 importance value | Python Code:
from __future__ import division
data_point = data_dict['METTS MARK']
frac = data_point["from_poi_to_this_person"] / data_point["to_messages"]
print frac
def computeFraction( poi_messages, all_messages ):
    """Given a number of messages to/from a POI (numerator)
    and the number of all messages to/from a person (denominator),
    return the fraction of messages to/from that person
    that are from/to a POI.
    """
### you fill in this code, so that it returns either
### the fraction of all messages to this person that come from POIs
### or
### the fraction of all messages from this person that are sent to POIs
### the same code can be used to compute either quantity
### beware of "NaN" when there is no known email address (and so
### no filled email features), and integer division!
### in case of poi_messages or all_messages having "NaN" value, return 0.
fraction = 0
    if poi_messages != 'NaN' and all_messages != 'NaN':
fraction = float(poi_messages) / float(all_messages)
return fraction
submit_dict = {}
for name in data_dict:
data_point = data_dict[name]
from_poi_to_this_person = data_point["from_poi_to_this_person"]
to_messages = data_point["to_messages"]
fraction_from_poi = computeFraction( from_poi_to_this_person, to_messages )
print'{:5}{:35}{:.2f}'.format('FROM ', name, fraction_from_poi)
data_point["fraction_from_poi"] = fraction_from_poi
from_this_person_to_poi = data_point["from_this_person_to_poi"]
from_messages = data_point["from_messages"]
fraction_to_poi = computeFraction( from_this_person_to_poi, from_messages )
#print fraction_to_poi
print'{:5}{:35}{:.2f}'.format('TO: ', name, fraction_to_poi)
submit_dict[name]={"from_poi_to_this_person":fraction_from_poi,
"from_this_person_to_poi":fraction_to_poi}
data_point["fraction_to_poi"] = fraction_to_poi
#####################
def submitDict():
return submit_dict
Explanation: get email of author
compare to list of known persons of interest
return boolean if author is person of interest
aggregate count over all emails to person (a minimal sketch follows below)
End of explanation
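A minimal sketch of the flagging idea described above (the function name, the example addresses, and the poi_email_list argument are illustrative assumptions, not the course's actual starter code):
def poiFlagEmail(author_email, poi_email_list):
    # True if the author of the email is a known person of interest
    return author_email in poi_email_list

poi_message_count = 0
for author_email in ["[email protected]", "[email protected]"]:  # hypothetical message authors
    if poiFlagEmail(author_email, ["[email protected]"]):
        poi_message_count += 1  # aggregate the count of POI-authored emails for this person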
sys.path.append(dataPath+'text_learning/')
words_file = "your_word_data.pkl"
authors_file = "your_email_authors.pkl"
word_data = pickle.load( open(words_file, "r"))
authors = pickle.load( open(authors_file, "r") )
### test_size is the percentage of events assigned to the test set (the
### remainder go into training)
### feature matrices changed to dense representations for compatibility with
### classifier functions in versions 0.15.2 and earlier
from sklearn import cross_validation
features_train, features_test, labels_train, labels_test = cross_validation.train_test_split(word_data,
authors, test_size=0.1, random_state=42)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
features_train = vectorizer.fit_transform(features_train)
features_test = vectorizer.transform(features_test).toarray()
### a classic way to overfit is to use a small number
### of data points and a large number of features;
### train on only 150 events to put ourselves in this regime
features_train = features_train[:150].toarray()
labels_train = labels_train[:150]
Explanation: Beware of BUGS!!!
When Katie was working on the Enron POI identifier, she engineered a feature that identified when a given person was on the same email as a POI. So for example, if Ken Lay and Katie Malone are both recipients of the same email message, then Katie Malone should have her "shared receipt" feature incremented. If she shares lots of emails with POIs, maybe she's a POI herself.
Here's the problem: there was a subtle bug, that Ken Lay's "shared receipt" counter would also be incremented when this happens. And of course, then Ken Lay always shares receipt with a POI, because he is a POI. So the "shared receipt" feature became extremely powerful in finding POIs, because it effectively was encoding the label for each person as a feature.
We found this first by being suspicious of a classifier that was always returning 100% accuracy. Then we removed features one at a time, and found that this feature was driving all the performance. Then, digging back through the feature code, we found the bug outlined above. We changed the code so that a person's "shared receipt" feature was only incremented if there was a different POI who received the email, reran the code, and tried again. The accuracy dropped to a more reasonable level.
We take a couple of lessons from this:
- Anyone can make mistakes--be skeptical of your results!
- 100% accuracy should generally make you suspicious. Extraordinary claims require extraordinary proof.
- If there's a feature that tracks your labels a little too closely, it's very likely a bug!
- If you're sure it's not a bug, you probably don't need machine learning--you can just use that feature alone to assign labels.
Feature Selection Mini Project
End of explanation
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf.fit(features_train, labels_train)
print "{}{:.2f}".format("Classifier accuracy: ", clf.score(features_test, labels_test))
import operator
featuresImportance = clf.feature_importances_
featuresSortedByScore = []
for feature in range(len(featuresImportance)):
if featuresImportance[feature] > 0.2:
featuresSortedByScore.append([feature, featuresImportance[feature]])
df = sorted(featuresSortedByScore, key=operator.itemgetter(1), reverse=True)
for i in range(len(df)):
print "{:5d}: {:f}".format(df[i][0], df[i][1])
for i in range(len(df)):
print vectorizer.get_feature_names()[df[i][0]]
Explanation: This is an iterative process
- start off with a pared-down version of the dataset
- run a decision tree on it
- get the accuracy, should be rather high
- get the important features defined by coefs over 0.2
- remove those features
- run again until very few features have an importance value over 0.2 (see the loop sketch below)
End of explanation |
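A rough sketch of that iteration written as a loop (a minimal sketch, assuming the variables defined in the code above; the 0.2 cut-off is the one stated in the list and the stopping rule is an assumption):
threshold = 0.2
found_powerful_feature = True
while found_powerful_feature:
    clf = tree.DecisionTreeClassifier()
    clf.fit(features_train, labels_train)
    # indices of suspiciously powerful features
    noisy = [i for i, imp in enumerate(clf.feature_importances_) if imp > threshold]
    found_powerful_feature = len(noisy) > 0
    if found_powerful_feature:
        keep = [i for i in range(features_train.shape[1]) if i not in set(noisy)]
        features_train = features_train[:, keep]
        features_test = features_test[:, keep]
Note that in the real mini-project you would normally remove the offending words from the text and re-vectorize instead, so that feature indices keep lining up with vectorizer.get_feature_names().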
12,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step5: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step8: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step10: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step11: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step12: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step13: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step14: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step15: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step16: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step17: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
12,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wiki2vec
Jupyter notebook for creating a Word2vec model from a Wikipedia dump. This model file can then be read into gensim's Word2Vec class. Feel free to edit this script as you see fit.
Dependencies
Python 3
Jupyter
Gensim
Steps
Download a Wikipedia dump by visiting
```
https
Step3: Train Word2vec on Wikipedia dump
Here is where we train the word2vec model on the given Wikipedia dump. Specifically we,
Read given Wikipedia dump with gensim
Write to temporary text file (will get deleted)
Train word2vec model
Save word2vec model
NB
Step4: Demo word2vec
Read in the saved word2vec model and perform some basic analysis on it. | Python Code:
WIKIPEDIA_DUMP_PATH = './data/wiki-corpuses/enwiki-latest-pages-articles.xml.bz2'
# Choose a path that the word2vec model should be saved to
# (during training), and read from afterwards.
WIKIPEDIA_W2V_PATH = './data/enwiki.model'
Explanation: Wiki2vec
Jupyter notebook for creating a Word2vec model from a Wikipedia dump. This model file can then be read into gensim's Word2Vec class. Feel free to edit this script as you see fit.
Dependencies
Python 3
Jupyter
Gensim
Steps
Download a Wikipedia dump by visiting
```
https://dumps.wikimedia.org/<locale>wiki/latest/<locale>wiki-latest-pages-articles.xml.bz2
e.g. https://dumps.wikimedia.org/itwiki/latest/itwiki-latest-pages-articles.xml.bz2
```
- Once downloaded assign the following paths below:
End of explanation
import sys
import os
import tempfile
import multiprocessing
import logging
from gensim.corpora import WikiCorpus
from gensim.models.word2vec import LineSentence
from gensim.models import Word2Vec
def write_wiki_corpus(wiki, output_file):
    """Write a WikiCorpus as plain text to file."""
i = 0
for text in wiki.get_texts():
        output_file.write(b' '.join(text) + b'\n')
i = i + 1
if (i % 10000 == 0):
print('\rSaved %d articles' % i, end='', flush=True)
print('\rFinished saving %d articles' % i, end='', flush=True)
def build_trained_model(text_file):
    """Reads a text file and returns a trained model."""
sentences = LineSentence(text_file)
model = Word2Vec(sentences, size=400, window=5, min_count=5,
workers=multiprocessing.cpu_count())
# Trim unneeded model memory to reduce RAM usage
model.init_sims(replace=True)
return model
logging_format = '%(asctime)s : %(levelname)s : %(message)s'
logging.basicConfig(format=logging_format, level=logging.INFO)
with tempfile.NamedTemporaryFile(suffix='.txt') as text_output_file:
# Create wiki corpus, and save text to temp file
wiki_corpus = WikiCorpus(WIKIPEDIA_DUMP_PATH, lemmatize=False, dictionary={})
write_wiki_corpus(wiki_corpus, text_output_file)
del wiki_corpus
# Train model on wiki corpus
model = build_trained_model(text_output_file)
model.save(WIKIPEDIA_W2V_PATH)
Explanation: Train Word2vec on Wikipedia dump
Here is where we train the word2vec model on the given Wikipedia dump. Specifically we,
Read given Wikipedia dump with gensim
Write to temporary text file (will get deleted)
Train word2vec model
Save word2vec model
NB: 1 Wikipedia article is fed into word2vec as a single sentence.
End of explanation
import random
%time
model = Word2Vec.load(WIKIPEDIA_W2V_PATH)
vocab = list(model.vocab.keys())
print('Vocabulary sample:', vocab[:5])
word = random.choice(vocab)
print('Similar words to:', word)
model.most_similar(word)
word1 = random.choice(vocab)
word2 = random.choice(vocab)
print('similarity(%s, %s) = %f' % (word1, word2, model.similarity(word1, word2)))
Explanation: Demo word2vec
Read in the saved word2vec model and perform some basic analysis on it.
End of explanation |
12,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 使用 tf.function 提高性能
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 定义一个辅助函数来演示可能遇到的错误类型:
Step3: 基础知识
用法
您定义的 Function 就像核心 TensorFlow 运算:您可以在 Eager 模式下执行,可以计算梯度,等等。
Step4: Function 中可以嵌套其他 Function。
Step5: Function 的执行速度比 Eager 代码快,尤其是对于包含很多简单运算的计算图。但是,对于包含一些复杂运算(如卷积)的计算图,速度提升不会太明显。
Step6: 跟踪
Python 的动态类型意味着您可以调用包含各种参数类型的函数,在各种场景下,Python 的行为可能有所不同。
但是,创建 TensorFlow 计算图需要静态 dtype 和形状维度。tf.function 通过包装一个 Python 函数来创建 Function 对象,弥补了这一缺陷。根据提供的输入,Function 为其选择相应的计算图,从而在必要时追溯 Python 函数。理解发生跟踪的原因和时机后,有效运用 tf.function 就会容易得多!
您可以通过调用包含不同类型参数的 Function 来切实观察这种多态行为。
Step7: 请注意,如果重复调用包含相同参数类型的 Function,TensorFlow 会重复使用之前跟踪的计算图,因为后面的调用生成的计算图将相同。
Step8: (以下更改存在于 TensorFlow Nightly 版本中,并且将在 TensorFlow 2.3 中提供。)
您可以使用 pretty_printed_concrete_signatures() 查看所有可用跟踪:
Step9: 目前,您已经了解 tf.function 通过 TensorFlow 的计算图跟踪逻辑创建缓存的动态调度层。对于术语的含义,更具体的解释如下:
tf.Graph 与语言无关,是对计算的原始可移植表示。
ConcreteFunction 是 tf.Graph 的 Eeager 执行包装器。
Function 管理 ConcreteFunction 的缓存,并为输入选择正确的缓存。
tf.function 包装 Python 函数,并返回一个 Function 对象。
获取具体函数
每次跟踪函数时都会创建一个新的具体函数。您可以使用 get_concrete_function 直接获取具体函数。
Step10: (以下更改存在于 TensorFlow Nightly 版本中,并且将在 TensorFlow 2.3 中提供。)
打印 ConcreteFunction 会显示其输入参数(及类型)和输出类型的摘要。
Step11: 您也可以直接检索具体函数的签名。
Step12: 对不兼容的类型使用具体跟踪会引发错误
Step13: 您可能会注意到,在具体函数的输入签名中对 Python 参数进行了特别处理。TensorFlow 2.3 之前的版本会将 Python 参数直接从具体函数的签名中删除。从 TensorFlow 2.3 开始,Python 参数会保留在签名中,但是会受到约束,只能获取在跟踪期间设置的值。
Step14: 获取计算图
每个具体函数都是 tf.Graph 的可调用包装器。虽然一般不需要检索实际 tf.Graph 对象,不过,您可以从任何具体函数轻松获得实际对象。
Step15: 调试
通常,在 Eager 模式下调试代码比在 tf.function 中简单。在使用 tf.function 进行装饰之前,进行装饰之前,您应该先确保代码可在 Eager 模式下无错误执行。为了帮助调试,您可以调用 tf.config.run_functions_eagerly(True) 来全局停用和重新启用 tf.function。
追溯仅在 tf.function 中出现的问题时,可参考下面的几点提示:
普通旧 Python print 调用仅在跟踪期间执行,可用于追溯(重新)跟踪函数的时间。
tf.print 调用每次都会执行,可用于追溯执行过程中产生的中间值。
利用 tf.debugging.enable_check_numerics 很容易追溯到 NaN 和 Inf 在何处创建。
pdb 可以帮助您理解跟踪的详细过程。(提醒:使用 PDB 调试时,AutoGraph 会自动转换 Python 源代码。)
跟踪语义
缓存键规则
通过从输入的参数和关键词参数计算缓存键,Function 可以确定是否重复使用跟踪的具体函数。
为 tf.Tensor 参数生成的键是其形状和 dtype。
从 TensorFlow 2.3 开始,为 tf.Variable 参数生成的键是其 id()。
为 Python 基元生成的键是其值。为嵌套 dict、 list、 tuple、 namedtuple 和 attr 生成的键是扁平化元祖。(由于这种扁平化处理,如果调用的具体函数的嵌套结构与跟踪期间使用的不同,则会导致 TypeError)。
对于所有其他 Python 类型,键基于对象 id(),以便为类的每个实例独立跟踪方法。
控制回溯
回溯可以确保 TensorFlow 为每组输入生成正确的计算图。但是,跟踪操作非常消耗资源!如果 Function 为每一次调用都回溯新的计算图,您会发现代码的执行速度远不如不使用 tf.function。
要控制跟踪行为,可以采用以下技巧:
在 tf.function 中指定 input_signature 来限制跟踪。
Step16: 在 tf.TensorSpec 中指定 [None] 维度可灵活运用跟踪重用。
由于 TensorFlow 根据其形状匹配张量,因此,对于可变大小输入,使用 None 维度作为通配符可以让 Function 重复使用跟踪。对于每个批次,如果有不同长度的序列或不同大小的计算图,则会出现可变大小输入(请参阅 Transformer 和 Deep Dream 教程了解示例)。
Step17: 将 Python 参数转换为张量以减少回溯。
通常,Python 参数用于控制超参数和计算图构造,例如 num_layers=10、training=True 或 nonlinearity='relu'。所以,如果 Python 参数改变,则有必要回溯计算图。
但是,Python 参数有可能并未用于控制计算图构造。在这些情况下,Python 值的改变可能触发非必要的回溯。例如,在此训练循环中,AutoGraph 会动态展开。尽管有多个跟踪,但生成的计算图实际上是相同的,所以没有必要进行回溯。
Step18: 如果需要强制执行回溯,可以创建一个新的 Function。单独的 Function 对象肯定不会共享跟踪记录。
Step19: Python 副作用
Python 副作用(如打印、追加到列表、改变全局变量)仅在第一次使用一组输入调用 Function 时才会发生。随后重新执行跟踪的 tf.Graph,而不执行 Python 代码。
一般经验法则是仅使用 Python 副作用来调试跟踪记录。另外,对于每一次调用,TensorFlow 运算(如 tf.Variable.assign、tf.print 和 tf.summary)是确保代码得到 TensorFlow 运行时跟踪并执行的最佳方法。
Step20: 很多 Python 功能(如生成器和迭代器)依赖 Python 运行时来跟踪状态。通常,虽然这些构造在 Eager 模式下可以正常工作,但由于跟踪行为,tf.function 中会发生许多意外情况:
举一个例子,推进迭代器状态是 Python 的一个副作用,因此只在跟踪过程中发生。
Step21: 某些迭代构造通过 AutoGraph 获得支持。有关概述,请参阅 AutoGraph 转换部分。
如果希望在每次调用 Function 时都执行 Python 代码,tf.py_function 可以作为退出舱口。tf.py_function 的缺点是不可移植,性能不高,并且在分布式(多 GPU、TPU)设置中效果不佳。另外,由于 tf.py_function 必须连接到计算图中,它会将所有输入/输出转换为张量。
tf.gather、tf.stack 和 tf.TensorArray 之类的 API 可帮助您在原生 TensorFlow 中实现常见循环模式。
Step22: 变量
在函数中创建新的 tf.Variable 时可能遇到错误。该错误是为了防止重复调用发生行为背离:在 Eager 模式下,每次调用函数时都会创建一个新变量,但是在 Function 中则不一定,这是因为重复使用了跟踪记录。
Step23: 您也可以在 Function 内部创建变量,不过只能在第一次执行该函数时创建这些变量。
Step24: 您可能遇到的另一个错误是变量被回收。与常规 Python 函数不同,具体函数只会保留对它们闭包时所在变量的弱引用,因此,您必须保留对任何变量的引用。
Step25: AutoGraph 转换
AutoGraph 是一个库,在 tf.function 中默认处于启用状态。它可以将 Python Eager 代码的子集转换为与计算图兼容的 TensorFlow 运算。这包括 if、for、while 等控制流。
tf.cond 和 tf.while_loop 等 TensorFlow 运算仍然可以运行,但是使用 Python 编写时,控制流通常更易于编写,代码也更易于理解。
Step26: 如果您有兴趣,可以检查 Autograph 生成的代码。
Step27: 条件语句
AutoGraph 会将某些 if <condition> 语句转换为等效的 tf.cond 调用。如果 <condition> 是张量,则会执行这种替换,否则会将 if 语句作为 Python 条件语句执行。
Python 条件语句在跟踪时执行,因此会将该条件语句的一个分支添加到计算图。如果不使用 AutoGraph,当存在依赖于数据的控制流时,此跟踪计算图将无法选择替代分支。
tf.cond 跟踪并将条件的两个分支添加到计算图,在执行时动态选择分支。跟踪可能产生意外的副作用;有关详细信息,请参阅 AutoGraph 跟踪作用。
Step28: 有关 AutoGraph 转换的 if 语句的其他限制,请参阅参考文档。
循环
AutoGraph 会将某些 for 和 while 语句转换为等效的 TensorFlow 循环运算,例如 tf.while_loop。如果不转换,则会将 for 或 while 循环作为 Python 循环执行。
以下情形会执行这种替换:
for x in y:如果 y 是一个张量,则转换为 tf.while_loop。在特殊情况下,如果 y 是 tf.data.Dataset,则会生成 tf.data.Dataset 运算的组合。
while <condition>:如果 <condition> 是张量,则转换为 tf.while_loop。
Python 循环在跟踪时执行,因而循环每迭代一次,都会将额外的运算添加到 tf.Graph。
TensorFlow 循环会跟踪循环体,并在执行时动态选择迭代的运行次数。循环体仅在生成的 tf.Graph 中出现一次。
有关 AutoGraph 转换的 for 和 while 语句的其他限制,请参阅参考文档。
在 Python 数据上循环
一个常见陷阱是在 tf.function 中的 Python/Numpy 数据上循环。此循环在跟踪过程中执行,因而循环每迭代一次,都会将模型的一个副本添加到 tf.Graph。
如果要在 tf.function 中包装整个训练循环,最安全的方法是将数据包装为 tf.data.Dataset,以便 AutoGraph 动态展开训练循环。
Step29: 在数据集中包装 Python/Numpy 数据时,要注意 tf.data.Dataset.from_generator 与 tf.data.Dataset.from_tensors。前者将数据保留在 Python 中,并通过 tf.py_function 获取,这可能会影响性能;后者将数据的副本捆绑成计算图中的一个大 tf.constant() 节点,这可能会消耗较多内存。
通过 TFRecordDataset/CsvDataset 等从文件中读取数据是最高效的数据使用方式,因为这样 TensorFlow 就可以自行管理数据的异步加载和预提取,不必利用 Python。要了解详细信息,请参阅 tf.data 指南。
累加循环值
一种常见模式是不断累加循环的中间值。通常,这可以通过将元素追加到 Python 列表或将条目添加到 Python 字典来实现。但是,由于存在 Python 副作用,在动态展开循环中,这些方法无法达到预期效果。要从动态展开循环累加结果,可以使用 tf.TensorArray 来实现。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: Better performance with tf.function
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://tensorflow.google.cn/guide/function" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">Download notebook</a></td>
</table>
In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability.
You can use tf.function to turn your programs into graphs. It is a transformation tool that creates Python-independent dataflow graphs out of your Python code. This helps you create performant and portable models, and it is required in order to use SavedModel.
This guide explains how tf.function works under the hood, so you can form a conceptual understanding and use it effectively.
The main takeaways and recommendations are:
Debug in eager mode first, then decorate with @tf.function.
Don't rely on Python side effects like object mutation or list appends.
tf.function works best with TensorFlow ops; NumPy and Python calls are converted to constants.
Setup
End of explanation
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
Explanation: Define a helper function to demonstrate the kinds of errors you might encounter:
End of explanation
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
Explanation: Basics
Usage
A Function you define works just like a core TensorFlow operation: you can execute it eagerly, compute gradients, and so on.
End of explanation
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
Explanation: You can nest Functions inside other Functions.
End of explanation
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
Explanation: Functions can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much of a speedup.
End of explanation
# Functions are polymorphic
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
Explanation: Tracing
Python's dynamic typing means you can call functions with a variety of argument types, and Python may behave differently in each scenario.
Yet TensorFlow graphs require static dtypes and shape dimensions. tf.function bridges this gap by wrapping a Python function to create a Function object. Based on the given inputs, the Function selects the appropriate graph, retracing the Python function as necessary. Once you understand why and when tracing happens, it is much easier to use tf.function effectively!
You can call a Function with arguments of different types to see this polymorphic behavior in action.
End of explanation
# This doesn't print 'Tracing with ...'
print(double(tf.constant("b")))
Explanation: Note that if you repeatedly call a Function with the same argument types, TensorFlow reuses the previously traced graph, since the later calls would generate an identical graph.
End of explanation
print(double.pretty_printed_concrete_signatures())
Explanation: (The following change is available in the TensorFlow nightly builds and will ship in TensorFlow 2.3.)
You can use pretty_printed_concrete_signatures() to see all of the available traces:
End of explanation
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
Explanation: So far, you've seen that tf.function creates a cached, dynamic dispatch layer over TensorFlow's graph tracing logic. To be more specific about the terminology:
A tf.Graph is the raw, language-agnostic, portable representation of your computation.
A ConcreteFunction is an eager-executing wrapper around a tf.Graph.
A Function manages a cache of ConcreteFunctions and picks the right one for your inputs.
tf.function wraps a Python function, returning a Function object.
Obtaining concrete functions
Every time a function is traced, a new concrete function is created. You can obtain a concrete function directly by using get_concrete_function.
End of explanation
print(double_strings)
Explanation: (The following change is available in TensorFlow Nightly and will be included in TensorFlow 2.3.)
Printing a ConcreteFunction displays a summary of its input arguments (with types) and its output type.
End of explanation
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
Explanation: You can also directly retrieve a concrete function's signature.
End of explanation
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
Explanation: Using a concrete trace with incompatible types throws an error.
End of explanation
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
Explanation: You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
End of explanation
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
Explanation: Obtaining graphs
Each concrete function is a callable wrapper around a tf.Graph. Although retrieving the actual tf.Graph object is not something you'll normally need to do, you can obtain it easily from any concrete function.
End of explanation
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(x % 2 == 0, x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# We specified a 1-D tensor in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
# We specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([1.0, 2.0]))
Explanation: Debugging
In general, debugging code is easier in eager mode than inside tf.function. You should ensure that your code executes error-free in eager mode before decorating it with tf.function. To assist in the debugging process, you can call tf.config.run_functions_eagerly(True) to globally disable and re-enable tf.function.
When tracking down issues that only appear within tf.function, here are some tips (a small sketch follows below):
Plain old Python print calls only execute during tracing, which helps you track down when your function gets (re)traced.
tf.print calls execute every time, and can help you track down intermediate values during execution.
tf.debugging.enable_check_numerics is an easy way to track down where NaNs and Inf are created.
pdb can help you understand what is going on during tracing. (Caveat: when debugging with pdb you will see AutoGraph-transformed source code.)
Tracing semantics
Cache key rules
A Function determines whether to reuse a traced concrete function by computing a cache key from an input's args and kwargs.
The key generated for a tf.Tensor argument is its shape and dtype.
Starting with TensorFlow 2.3, the key generated for a tf.Variable argument is its id().
The key generated for a Python primitive is its value. The key generated for nested dicts, lists, tuples, namedtuples, and attrs is the flattened tuple. (As a result of this flattening, calling a concrete function with a different nesting structure than the one used during tracing will result in a TypeError.)
For all other Python types, the keys are based on the object id(), so that methods are traced independently for each instance of a class.
Controlling retracing
Retracing ensures that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your Function retraces a new graph for every call, you'll find that your code executes more slowly than if you hadn't used tf.function.
To control the tracing behaviour, you can use the following techniques:
Specify input_signature in tf.function to limit tracing.
End of explanation
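A minimal sketch of the debugging toggle mentioned above (not a cell from the original guide; it assumes the imports from the Setup section):
def buggy_fn(x):
    return tf.gather(x, 100)  # out-of-range index; easier to step through eagerly

tf.config.run_functions_eagerly(True)   # temporarily run Functions eagerly, so pdb/print behave normally
try:
    tf.function(buggy_fn)(tf.constant([1, 2, 3]))
except tf.errors.InvalidArgumentError as e:
    print('Caught while running eagerly:', type(e).__name__)
tf.config.run_functions_eagerly(False)  # restore graph execution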
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
Explanation: Specify a [None] dimension in tf.TensorSpec to allow for flexibility in trace reuse.
Since TensorFlow matches tensors based on their shape, using a None dimension as a wildcard will allow Functions to reuse traces for variably-sized input. Variably-sized input can occur if you have sequences of different length, or images of different sizes for each batch (see the Transformer and Deep Dream tutorials for examples).
End of explanation
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
Explanation: Cast Python arguments to tensors to reduce retracing.
Often, Python arguments are used to control hyperparameters and graph construction — for example, num_layers=10, training=True or nonlinearity='relu'. So if the Python argument changes, it makes sense to retrace the graph.
However, it's possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graphs are actually identical, so retracing is unnecessary.
End of explanation
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
Explanation: If you need to force retracing, create a new Function. Separate Function objects are guaranteed not to share traces (a contrast sketch follows below).
End of explanation
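For contrast (sketch only, not in the original guide): keeping a single Function object around reuses its trace instead of retracing.
g = tf.function(f)
g()  # traces: prints 'Tracing!' and 'Executing'
g()  # reuses the trace: prints only 'Executing'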
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
Explanation: Python side effects
Python side effects like printing, appending to lists, and mutating globals only happen the first time you call a Function with a set of inputs. Afterwards, the traced tf.Graph is re-executed without executing the Python code.
The general rule of thumb is to only use Python side effects to debug your traces. Otherwise, TensorFlow ops like tf.Variable.assign, tf.print, and tf.summary are the best way to ensure your code will be traced and executed by the TensorFlow runtime with each call.
End of explanation
external_var = tf.Variable(0)
@tf.function
def buggy_consume_next(iterator):
external_var.assign_add(next(iterator))
tf.print("Value of external_var:", external_var)
iterator = iter([0, 1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
Explanation: Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, many unexpected things can happen inside a tf.function due to tracing behaviour (see the sketch below for a supported alternative):
To give one example, advancing iterator state is a Python side effect and therefore only happens during tracing.
End of explanation
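A minimal sketch of a supported alternative, assuming tf.data fits your use case: a tf.data iterator is advanced at execution time, so each call really does consume the next element.
ds_iterator = iter(tf.data.Dataset.range(4))

@tf.function
def consume_tf_next(it):
    return next(it)

print(consume_tf_next(ds_iterator))  # 0
print(consume_tf_next(ds_iterator))  # 1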
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def f(x):
tf.py_function(side_effect, inp=[x], Tout=[])
f(1)
f(1)
f(1)
# The list append happens all three times!
assert len(external_list) == 3
# The list contains tf.constant(1), not 1, because py_function casts everything to tensors.
assert external_list[0].numpy() == 1
Explanation: Some iteration constructs are supported through AutoGraph. See the section on AutoGraph transformations for an overview.
If you would like to execute Python code during each invocation of a Function, tf.py_function is an exit hatch. The drawbacks of tf.py_function are that it's not portable or particularly performant, and it does not work well in distributed (multi-GPU, TPU) setups. Also, since tf.py_function has to be wired into the graph, it casts all inputs/outputs to tensors.
APIs like tf.gather, tf.stack, and tf.TensorArray can help you implement common looping patterns in native TensorFlow (a small sketch follows below).
End of explanation
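An illustrative sketch of the native-TensorFlow accumulation pattern mentioned above (not a cell from the original guide):
@tf.function
def collect_squares(n):
    ta = tf.TensorArray(tf.int32, size=n)
    for i in tf.range(n):
        ta = ta.write(i, i * i)   # accumulation happens inside the graph on every call
    return ta.stack()

print(collect_squares(tf.constant(5)))  # [0 1 4 9 16]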
@tf.function
def f(x):
v = tf.Variable(1.0)
v.assign_add(x)
return v
with assert_raises(ValueError):
f(1.0)
Explanation: Variables
You may run into an error when creating a new tf.Variable inside a function. This error guards against behaviour divergence on repeated calls: in eager mode, a function creates a new variable with each call, but in a Function a new variable may not be created because the trace is reused. A sketch of the recommended pattern follows below.
End of explanation
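Sketch of the usual fix: create the variable once, outside the Function, and only mutate it inside.
v_outer = tf.Variable(1.0)

@tf.function
def add_to_outer(x):
    v_outer.assign_add(x)  # mutation is fine; no new Variable is created inside the trace
    return v_outer

print(add_to_outer(2.0))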
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
Explanation: You can also create variables inside a Function, as long as those variables are only created the first time the function is executed.
End of explanation
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
del external_var
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
Explanation: Another error you may encounter is a garbage-collected variable. Unlike normal Python functions, concrete functions only keep weak references to the variables they close over, so you must retain a reference to any variables (a sketch of keeping the reference alive follows below).
End of explanation
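A sketch of keeping the closed-over variable alive, here by attaching it to a tf.Module:
class Holder(tf.Module):
    def __init__(self):
        self.v = tf.Variable(3)

    @tf.function
    def __call__(self, x):
        return x * self.v

holder = Holder()
print(holder(4))  # self.v stays referenced, so the concrete function keeps working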
# Simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
Explanation: AutoGraph transformations
AutoGraph is a library that is on by default in tf.function, and transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow like if, for and while.
TensorFlow ops like tf.cond and tf.while_loop continue to work, but control flow is often easier to write and understand when written in Python.
End of explanation
print(tf.autograph.to_code(f.python_function))
Explanation: If you're curious, you can inspect the code AutoGraph generates.
End of explanation
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
Explanation: Conditionals
AutoGraph will convert some if <condition> statements into the equivalent tf.cond calls. This substitution is made if <condition> is a Tensor; otherwise, the if statement is executed as a Python conditional.
A Python conditional executes during tracing, so exactly one branch of the conditional is added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.
tf.cond traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; see AutoGraph tracing effects for more information. A minimal sketch follows below.
End of explanation
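A minimal sketch (illustrative only) of a data-dependent conditional that AutoGraph converts to tf.cond:
@tf.function
def keep_if_positive_sum(x):
    if tf.reduce_sum(x) > 0:      # tensor condition -> tf.cond, both branches traced
        return x
    else:
        return tf.zeros_like(x)

print(keep_if_positive_sum(tf.constant([3.0, -2.0])))   # keeps x
print(keep_if_positive_sum(tf.constant([-3.0, -2.0])))  # all zeros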
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
Explanation: See the reference documentation for additional restrictions on AutoGraph-converted if statements.
Loops
AutoGraph will convert some for and while statements into the equivalent TensorFlow looping ops, like tf.while_loop. If not converted, the for or while loop is executed as a Python loop.
This substitution is made in the following situations:
for x in y: if y is a Tensor, convert to tf.while_loop. In the special case where y is a tf.data.Dataset, a combination of tf.data.Dataset ops are generated.
while <condition>: if <condition> is a Tensor, convert to tf.while_loop.
A Python loop executes during tracing, adding additional ops to the tf.Graph for every iteration of the loop.
A TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated tf.Graph.
See the reference documentation for additional restrictions on AutoGraph-converted for and while statements.
Looping over Python data
A common pitfall is to loop over Python/NumPy data within a tf.function. This loop executes during the tracing process, adding a copy of your model to the tf.Graph for each iteration of the loop.
If you want to wrap the entire training loop in tf.function, the safest way to do this is to wrap your data as a tf.data.Dataset so that AutoGraph will dynamically unroll the training loop.
End of explanation
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
Explanation: When wrapping Python/NumPy data in a Dataset, be mindful of tf.data.Dataset.from_generator versus tf.data.Dataset.from_tensors. The former will keep the data in Python and fetch it via tf.py_function, which can have performance implications, whereas the latter will bundle a copy of the data as one large tf.constant() node in the graph, which can have memory implications.
Reading data from files via TFRecordDataset, CsvDataset, etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of data without having to involve Python. To learn more, see the tf.data guide. A short sketch of an in-memory Dataset pattern follows at the end of this section.
Accumulating values in a loop
A common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use tf.TensorArray to accumulate results from a dynamically unrolled loop.
End of explanation |
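A short sketch of the recommended pattern for small in-memory data (illustrative only): wrap it as a tf.data.Dataset so the loop body is traced once and iterated dynamically.
dataset = tf.data.Dataset.from_tensor_slices((tf.range(10), tf.range(10))).batch(2)

@tf.function
def summed_loss(ds):
    loss = tf.constant(0)
    for x, y in ds:                  # dataset loop: not unrolled at trace time
        loss += tf.reduce_sum(y - x)
    return loss

print(summed_loss(dataset))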
12,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Splitting a simulation
Included in this notebook
Step1: The optimum way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object. This makes the analysis much faster.
Step2: Store all trajectories completely in the data file
Step3: Add a single snapshot as a reference and create the appropriate stores
Step4: Store only shallow trajectories (empty snapshots) in the main file
fix CVs first, rest is fine
Step5: fill weak cache from stored cache. This should be fast and we can later
use the weak cache (as long as q exists) to fill the cache of the data file.
Step6: Now that we have cached the CV values we can save the CVs in the new store.
This will also set the disk cache to the new file and since the file is new
this one is empty.
Step7: if all cvs are really cached we can store snapshots now and the auto-complete will fill
the CV disk store automatically when snapshots are saved. This takes a little while.
Step8: Fill trajectory store only with trajectories and their snapshots. We are using lots of small snapshots and these are slow in comparison to large ones. So this will also take a minute or so.
Step9: Finally try storing all steps from the simulation. This should contain ALL you need.
Step10: And compare file sizes
Step11: now we do the trick and use the small data file instead of the full simulation and see if that works. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
Explanation: Splitting a simulation
Included in this notebook:
Split a full simulation file into trajectories and the rest
End of explanation
%%time
storage = paths.AnalysisStorage("mstis.nc")
st_split = paths.Storage('mstis_strip.nc', 'w')
# st_traj = paths.Storage('mstis_traj.nc', 'w')
# st_data = paths.Storage('mstis_data.nc', 'w')
st_split.fallback = storage
# st_data.fallback = storage
Explanation: The optimum way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object. This makes the analysis much faster.
End of explanation
# st_data.snapshots.save(storage.snapshots[0])
# st_traj.snapshots.save(storage.snapshots[0])
Explanation: Store all trajectories completely in the data file
End of explanation
st_split.snapshots.save(storage.snapshots[0])
Explanation: Add a single snapshot as a reference and create the appropriate stores
End of explanation
cvs = storage.cvs
q = storage.snapshots.all()
Explanation: Store only shallow trajectories (empty snapshots) in the main file
fix CVs first, rest is fine
End of explanation
%%time
_ = [cv(q) for cv in cvs]
Explanation: fill weak cache from stored cache. This should be fast and we can later
use the weak cache (as long as q exists) to fill the cache of the data file.
End of explanation
%%time
# this will also switch the storage cache to the new file
_ = map(st_split.cvs.save, storage.cvs)
# %%time
# # this will also switch the storage cache to the new file
# _ = map(st_data.cvs.save, storage.cvs)
Explanation: Now that we have cached the CV values we can save the CVs in the new store.
This will also set the disk cache to the new file and since the file is new
this one is empty.
End of explanation
len(st_split.snapshots)
%%time
_ = map(st_split.trajectories.mention, storage.trajectories)
print(len(st_split.snapshots))
# %%time
# _ = map(st_data.trajectories.mention, storage.trajectories)
Explanation: if all cvs are really cached we can store snapshots now and the auto-complete will fill
the CV disk store automatically when snapshots are saved. This takes a little while.
End of explanation
%%time
_ = map(st_traj.trajectories.save, storage.trajectories)
Explanation: Fill trajectory store only with trajectories and their snapshots. We are using lots of small snapshots and these are slow in comparison to large ones. So this will also take a minute or so.
End of explanation
%%time
_ = map(st_data.steps.save, storage.steps)
Explanation: Finally try storing all steps from the simulation. This should contain ALL you need.
End of explanation
print('Original file:', storage.file_size_str)
print('Data file:', st_data.file_size_str)
print('Traj file:', st_traj.file_size_str)
print('So we saved about %2.0f %%' % ((1.0 - st_data.file_size / float(storage.file_size)) * 100.0))
Explanation: And compare file sizes
End of explanation
st_data.close()
st_traj.close()
storage.close()
st_data.snapshots.only_mention = True
Explanation: now we do the trick and use the small data file instead of the full simulation and see if that works.
End of explanation |
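A possible follow-up sketch (not part of the original notebook; file names follow the ones used above, and the exact fallback behaviour depends on the openpathsampling version): reopen the small data file for analysis and fall back to the trajectory file for the full snapshots.
st_analysis = paths.AnalysisStorage('mstis_data.nc')
st_analysis.fallback = paths.Storage('mstis_traj.nc', 'r')  # 'r' mode assumed for read-only access
print(len(st_analysis.steps), 'steps available from the stripped file')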
12,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Activation Maximization on MNIST
Lets build the mnist model and train it for 5 epochs. It should get to about ~99% test accuracy.
Step1: Dense Layer Visualizations
To visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since gradient of output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph.
If this swapping is not done, the results might be suboptimal. We will start by swapping out 'softmax' for 'linear' and compare what happens if we dont do this at the end.
Lets start by visualizing input that maximizes the output of node 0. Hopefully this looks like a 0.
Step2: Hmm, it sort of looks like a 0, but not as clear as we hoped for. Activation maximization is notorious because regularization parameters needs to be tuned depending on the problem. Lets enumerate all the possible reasons why this didn't work very well.
The input to network is preprocessed to range (0, 1). We should specify input_range = (0., 1.) to constrain the input to this range.
The regularization parameter default weights might be dominating activation maximization loss weight. One way to debug this is to use verbose=True and examine individual loss values.
Lets do these step by step and see if we can improve it.
Debugging step 1
Step3: Much better but still seems noisy. Lets examining the losses with verbose=True and tuning the weights.
Debugging step 2
Step4: In this case, ActivationMax Loss is not bouncing bouncing around and converging? Perhaps we could get that loss to be lower by reducing weights of other losses that might be dominating the overall loss being minimized.
The simplest way to tune these weights is to first start with 0. weights for all regularization losses.
Step5: It does indeed go to much lower values, but the image looks less natural. Let's try varous range of total variation weights to enforce naturalness.
Step6: We can see how total variation loss is enforcing blobbiness. In this case the default value of tv_weight=10 seems to work very well. The point of this exercise was to show how weights can be tuned.
Lets visualize all other output categories and see what we get.
Step7: Pretty cool. Its amazing that we can even generate an input image via backprop!
Obviously you can tune the visualizations to look better by experimenting with image_modifiers, lp-norm weight etc. Basically, a regularizer is needed to enforce image naturalness prior to limit the input image search space. By this point, GANs should come to your mind. We could easily take a GAN trained on mnist and use discriminator loss as a regularizer. For using custom loss, you can use visualize_activation_with_losses API.
Feel free to submit a PR if you try the GAN regularizer
Step8: Compare this to the 1 generated above and you should be able to see the difference. Nifty indded!
Visualizations without swapping softmax
As alluded at the beginning of the tutorial, we want to compare and see what happens if we didnt swap out softmax for linear activation. | Python Code:
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Activation Maximization on MNIST
Lets build the mnist model and train it for 5 epochs. It should get to about ~99% test accuracy.
End of explanation
from vis.visualization import visualize_activation
from vis.utils import utils
from keras import activations
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
# This is the output node we want to maximize.
filter_idx = 0
img = visualize_activation(model, layer_idx, filter_indices=filter_idx)
plt.imshow(img[..., 0])
Explanation: Dense Layer Visualizations
To visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since gradient of output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph.
If this swapping is not done, the results might be suboptimal. We will start by swapping out 'softmax' for 'linear' and compare what happens if we dont do this at the end.
Lets start by visualizing input that maximizes the output of node 0. Hopefully this looks like a 0.
End of explanation
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, input_range=(0., 1.))
plt.imshow(img[..., 0])
Explanation: Hmm, it sort of looks like a 0, but not as clear as we hoped for. Activation maximization is notorious because regularization parameters needs to be tuned depending on the problem. Lets enumerate all the possible reasons why this didn't work very well.
The input to network is preprocessed to range (0, 1). We should specify input_range = (0., 1.) to constrain the input to this range.
The regularization parameter default weights might be dominating activation maximization loss weight. One way to debug this is to use verbose=True and examine individual loss values.
Lets do these step by step and see if we can improve it.
Debugging step 1: Specifying input_range
End of explanation
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, input_range=(0., 1.), verbose=True)
plt.imshow(img[..., 0])
Explanation: Much better, but it still seems noisy. Let's examine the losses with verbose=True and tune the weights.
Debugging step 2: Tuning regularization weights
One of the issues with activation maximization is that the input can go out of the training distribution space. Total variation and L-p norm are used to provide some hardcoded image priors for natural images. For example, total variation ensures that images are blobbier and not scattered. Unfortunately, sometimes these losses can dominate the main ActivationMaximization loss.
Lets see what individual losses are, with verbose=True
End of explanation
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, input_range=(0., 1.),
tv_weight=0., lp_norm_weight=0., verbose=True)
plt.imshow(img[..., 0])
Explanation: In this case, isn't the ActivationMax loss just bouncing around rather than converging? Perhaps we could get that loss to be lower by reducing the weights of the other losses that might be dominating the overall loss being minimized.
The simplest way to tune these weights is to first start with 0. weights for all regularization losses.
End of explanation
for tv_weight in [1e-3, 1e-2, 1e-1, 1, 10]:
# Lets turn off verbose output this time to avoid clutter and just see the output.
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, input_range=(0., 1.),
tv_weight=tv_weight, lp_norm_weight=0.)
plt.figure()
plt.imshow(img[..., 0])
Explanation: It does indeed go to much lower values, but the image looks less natural. Let's try various total variation weights to enforce naturalness.
End of explanation
for output_idx in np.arange(10):
# Lets turn off verbose output this time to avoid clutter and just see the output.
img = visualize_activation(model, layer_idx, filter_indices=output_idx, input_range=(0., 1.))
plt.figure()
plt.title('Networks perception of {}'.format(output_idx))
plt.imshow(img[..., 0])
Explanation: We can see how total variation loss is enforcing blobbiness. In this case the default value of tv_weight=10 seems to work very well. The point of this exercise was to show how weights can be tuned.
Lets visualize all other output categories and see what we get.
End of explanation
img = visualize_activation(model, layer_idx, filter_indices=[1, 7], input_range=(0., 1.))
plt.imshow(img[..., 0])
Explanation: Pretty cool. Its amazing that we can even generate an input image via backprop!
Obviously you can tune the visualizations to look better by experimenting with image_modifiers, lp-norm weight etc. Basically, a regularizer is needed to enforce image naturalness prior to limit the input image search space. By this point, GANs should come to your mind. We could easily take a GAN trained on mnist and use discriminator loss as a regularizer. For using custom loss, you can use visualize_activation_with_losses API.
Feel free to submit a PR if you try the GAN regularizer :)
Other fun stuff
The API to visualize_activation accepts filter_indices. This is generally meant for multi label classifiers, but nothing prevents us from having some fun.
By setting filter_indices=[1, 7], we can generate an input that the network thinks is both 1 and 7 simultaneously. Its like asking the network
Generate input image that you think is both 1 and a 7.
End of explanation
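A sketch of the custom-loss route mentioned above. It is based on the keras-vis losses/regularizers modules (ActivationMaximization, LPNorm, TotalVariation) and visualize_activation_with_losses; treat the exact signatures as assumptions and check the installed keras-vis version.
from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm
from vis.visualization import visualize_activation_with_losses

img_input = model.input
losses = [
    (ActivationMaximization(model.layers[layer_idx], 0), 2),
    (LPNorm(img_input), 10),
    (TotalVariation(img_input), 10),
    # a GAN discriminator score could be appended here as another (loss, weight) pair
]
custom_img = visualize_activation_with_losses(img_input, losses, input_range=(0., 1.))
plt.imshow(custom_img[..., 0])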
# Swap linear back with softmax
model.layers[layer_idx].activation = activations.softmax
model = utils.apply_modifications(model)
for output_idx in np.arange(10):
# Lets turn off verbose output this time to avoid clutter and just see the output.
img = visualize_activation(model, layer_idx, filter_indices=output_idx, input_range=(0., 1.))
plt.figure()
plt.title('Networks perception of {}'.format(output_idx))
plt.imshow(img[..., 0])
Explanation: Compare this to the 1 generated above and you should be able to see the difference. Nifty indeed!
Visualizations without swapping softmax
As alluded to at the beginning of the tutorial, we want to compare and see what happens if we didn't swap out softmax for linear activation.
End of explanation |
12,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ePSproc LF/AF function verification & tests
26/06/20 v2
19/06/20 v1
For LF and AF calculations, trying to get to the bottom of issues with magnitudes and/or phases and/or formalism differences with raw ePS matrix elements.
Formalism
Test cases
Step1: Test N2 case
Load data
Step2: Reference results from GetCro
Step3: Test LF calculations (CG version)
Without sym summation
Step4: With sym summation | Python Code:
# Imports
import numpy as np
import pandas as pd
import xarray as xr
# Special functions
# from scipy.special import sph_harm
import spherical_functions as sf
import quaternion
# Performance & benchmarking libraries
# from joblib import Memory
# import xyzpy as xyz
import numba as nb
# Timings with ttictoc or time
# https://github.com/hector-sab/ttictoc
# from ttictoc import TicToc
import time
# Package fns.
# For module testing, include path to module here
import sys
import os
if sys.platform == "win32":
modPath = r'D:\code\github\ePSproc' # Win test machine
else:
modPath = r'/home/femtolab/github/ePSproc/' # Linux test machine
sys.path.append(modPath)
import epsproc as ep
# TODO: tidy this up!
from epsproc.util import matEleSelector
from epsproc.geomFunc import geomCalc, geomUtils
from epsproc.geomFunc.lfblmGeom import lfblmXprod
Explanation: ePSproc LF/AF function verification & tests
26/06/20 v2
19/06/20 v1
For LF and AF calculations, trying to get to the bottom of issues with magnitudes and/or phases and/or formalism differences with raw ePS matrix elements.
Formalism
Test cases:
ePS matrix elements with formalism from [1], for LF cross-sections and $\beta_{2}$
ePSproc AF calculations, for LF cross-sections and $\beta_{2}$.
The AF calculations should reduce to the LF case for an isotropic ensemble, and both cases should match the "direct" ePS GetCro outputs (LF). Hopefully this should clear up any outstanding issues with normalisation, units, scale-factors, phase conventions etc. For details of the AF code, see the method dev notes.
(For MF verification, see the MFPADs and associated $\beta_{LM}$ notebooks, where the numerics are verified for the NO2 test case, although the total cross-sections may still have issues (for more discussion, see the Matlab code release software paper). The geometric tensor version of the MF calculations is also verified against the same test case.)
[1] Cross section and asymmetry parameter calculation for sulfur 1s photoionization of SF6, A. P. P. Natalense and R. R. Lucchese, J. Chem. Phys. 111, 5344 (1999), http://dx.doi.org/10.1063/1.479794
[2] Reid, Katharine L., and Jonathan G. Underwood. “Extracting Molecular Axis Alignment from Photoelectron Angular Distributions.” The Journal of Chemical Physics 112, no. 8 (2000): 3643. https://doi.org/10.1063/1.480517.
[3] Underwood, Jonathan G., and Katharine L. Reid. “Time-Resolved Photoelectron Angular Distributions as a Probe of Intramolecular Dynamics: Connecting the Molecular Frame and the Laboratory Frame.” The Journal of Chemical Physics 113, no. 3 (2000): 1067. https://doi.org/10.1063/1.481918.
[4] Stolow, Albert, and Jonathan G. Underwood. “Time-Resolved Photoelectron Spectroscopy of Non-Adiabatic Dynamics in Polyatomic Molecules.” In Advances in Chemical Physics, edited by Stuart A. Rice, 139:497–584. Advances in Chemical Physics. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2008. https://doi.org/10.1002/9780470259498.ch6.
Formalism: LF case with CG terms
As given in ref. [1]. This is now implemented in implemented in ePSproc.lfblmGeom. NOTE - that the $M$ term here is an MF projection term, and should be summed over for the final LF result.
The matrix elements $I_{\mathbf{k},\hat{n}}^{(L,V)}$ of Eqs. (8)
and (9) can be expanded in terms of the $X_{lh}^{p\mu}$ functions
of Eq. (7) as$^{14}$
\begin{equation}
I_{\mathbf{k},\hat{n}}^{(L,V)}=\left[\frac{4\pi}{3}\right]^{1/2}\sum_{p\mu lhv}I_{lhv}^{p\mu(L,V)}X_{lh}^{p\mu}(\hat{k})X_{1v}^{p_{v}\mu_{v}}(\hat{n}).
\end{equation}
{[}Note here the final term gives polarization (dipole) terms, with
$l=1$, $h=v$, corresponding to a photon with one unit of angular
momentum and projections $v=-1,0,1$, correlated with irreducible
representations $p_{v}\mu_{v}$.{]}
The differential cross section is given by
\begin{equation}
\frac{d\sigma^{L,V}}{d\Omega_{\mathbf{k}}}=\frac{\sigma^{L,V}}{4\pi}[1+\beta_{\mathbf{k}}^{L,V}P_{2}(\cos\theta)],
\end{equation}
where the asymmetry parameter can be written as$^{14}$
\begin{eqnarray}
\beta_{\mathbf{k}}^{L,V} & = & \frac{3}{5}\frac{1}{\sum_{p\mu lhv}|I_{\mathbf{k},\hat{n}}^{(L,V)}|^{2}}\sum_{\stackrel{p\mu lhvmm_{v}}{p'\mu'l'h'v'm'm'_{v}}}(-1)^{m'-m_{v}}I_{\mathbf{k},\hat{n}}^{(L,V)}\nonumber \\
 & \times & \left(I_{\mathbf{k},\hat{n}}^{(L,V)}\right)^{*}b_{lhm}^{p\mu}b_{l'h'm'}^{p'\mu'*}b_{1vm_{v}}^{p_{v}\mu_{v}}b_{1v'm'_{v}}^{p'_{v}\mu'_{v}*}\nonumber \\
 & \times & [(2l+1)(2l'+1)]^{1/2}(1100|20)(l'l00|20)\nonumber \\
 & \times & (11-m'_{v}m_{v}|2M')(l'l-m'm|2-M'),
\end{eqnarray}
and the $(l'lm'm|L'M')$ are the usual Clebsch--Gordan coefficients.
The total cross section is
\begin{equation}
\sigma^{L,V}=\frac{4\pi^{2}}{3c}E\sum_{p\mu lhv}|I_{\mathbf{k},\hat{n}}^{(L,V)}|^{2},
\end{equation}
where c is the speed of light.
AF formalism
The original (full) form for the AF equations, as implemented in ePSproc.afblm (NOTE - there are some corrections to be made here, which are not yet implemented in the base code, but are now in the geometric version):
\begin{eqnarray}
\beta_{L,-M}^{\mu_{i},\mu_{f}} & = & \sum_{l,m,\mu}\sum_{l',m',\mu'}(-1)^{M}(-1)^{m}(-1)^{(\mu'-\mu_{0})}\left(\frac{(2l+1)(2l'+1)(2L+1)}{4\pi}\right)^{1/2}\left(\begin{array}{ccc}
l & l' & L\
0 & 0 & 0
\end{array}\right)\left(\begin{array}{ccc}
l & l' & L\
-m & m' & -M
\end{array}\right)\nonumber \
& \times & I_{l,m,\mu}^{p_{i}\mu_{i},p_{f}\mu_{f}}(E)I_{l',m',\mu'}^{p_{i}\mu_{i},p_{f}\mu_{f}*}(E)\
& \times & \sum_{P,R,R'}(2P+1)(-1)^{(R'-R)}\left(\begin{array}{ccc}
1 & 1 & P\
\mu_{0} & -\mu_{0} & R
\end{array}\right)\left(\begin{array}{ccc}
1 & 1 & P\
\mu & -\mu' & R'
\end{array}\right)\
& \times & \sum_{K,Q,S}(2K+1)^{1/2}(-1)^{K+Q}\left(\begin{array}{ccc}
P & K & L\
R & -Q & -M
\end{array}\right)\left(\begin{array}{ccc}
P & K & L\
R' & -S & S-R'
\end{array}\right)A_{Q,S}^{K}(t)
\end{eqnarray}
Where $I_{l,m,\mu}^{p_{i}\mu_{i},p_{f}\mu_{f}}(E)$ are the energy-dependent dipole matrix elements, and $A_{Q,S}^{K}(t)$ define the alignment parameters.
In terms of the geometric parameters, this can be rewritten as:
\begin{eqnarray}
\beta_{L,-M}^{\mu_{i},\mu_{f}} & =(-1)^{M} & \sum_{P,R',R}{[P]^{\frac{1}{2}}}{E_{P-R}(\hat{e};\mu_{0})}\sum_{l,m,\mu}\sum_{l',m',\mu'}(-1)^{(\mu'-\mu_{0})}{\Lambda_{R'}(\mu,P,R')B_{L,-M}(l,l',m,m')}I_{l,m,\mu}^{p_{i}\mu_{i},p_{f}\mu_{f}}(E)I_{l',m',\mu'}^{p_{i}\mu_{i},p_{f}\mu_{f}*}(E)\sum_{K,Q,S}\Delta_{L,M}(K,Q,S)A_{Q,S}^{K}(t)\label{eq:BLM-tidy-prod-2}
\end{eqnarray}
See the method dev notebook for more details. Both methods gave the same results for N2 test cases, so are at least consistent, but do not currently match ePS GetCro outputs for the LF case.
Numerics
In both LF and AF cases, the numerics tested herein are based on the geometric tensor expansion code, which has been verified for the MF case as noted above (for PADs at a single energy).
A few additional notes on the implementations...
The matrix elements used are taken from the DumpIdy output segments of the ePS output file, which provide "phase corrected and properly normalized dynamical coefs".
The matrix elements output by ePS are assumed to correspond to $I_{lhv}^{p\mu(L,V)}$ as defined above.
The Scale Factor (SF) "to sqrt Mbarn" output with the matrix elements is assumed to correspond to the $\frac{4\pi^{2}}{3c}E$ term defined above, plus any other required numerical factors ($4\pi$ terms and similar).
The SF is energy dependent, but not continuum (or partial wave) dependent.
If correct, then using matrix elements * scale factor, should give correct results (as a function of $E$), while omitting the scale factor should still give correct PADs at any given $E$, but incorrect total cross-section and energy scaling.
This may be incorrect, and some other assumptions are tested herein.
The AF and LF case should match for an isotropic distribution, defined as $A^{0}_{0,0}=1$. Additional normalisation required here...?
A factor of $\sqrt{(2K+1)}/8\pi^2$ might be required for correct normalisation, although shouldn't matter in this case. (See eqn. 47 in [4].)
For the LF case, as defined above, conversion from Legendre-normalised $\beta$ to spherical harmonic normalised $\beta$ is required for comparison with the AF formalism, where $\beta^{Sph}_{L,0} = \sqrt{(2L+1)/4\pi}\beta^{Lg}$
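As a quick numerical helper for that conversion (sketch only, not part of the original notebook; it uses the numpy import from the setup cell):
def legendre_to_sph(beta_lg, L = 2):
    # beta^Sph_{L,0} = sqrt((2L+1)/(4*pi)) * beta^Lg_L, per the note above
    return np.sqrt((2*L + 1)/(4*np.pi)) * beta_lg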
Set up
End of explanation
# Load data from modPath\data
dataPath = os.path.join(modPath, 'data', 'photoionization')
dataFile = os.path.join(dataPath, 'n2_3sg_0.1-50.1eV_A2.inp.out') # Set for sample N2 data for testing
# Scan data file
dataSet = ep.readMatEle(fileIn = dataFile)
dataXS = ep.readMatEle(fileIn = dataFile, recordType = 'CrossSection') # XS info currently not set in NO2 sample file.
Explanation: Test N2 case
Load data
End of explanation
# Plot cross sections using Xarray functionality
dataXS[0].sel({'Type':'L', 'XC':'SIGMA'}).plot.line(x='Eke');
# Plot B2
dataXS[0].sel({'Type':'L', 'XC':'BETA'}).plot.line(x='Eke');
Explanation: Reference results from GetCro
End of explanation
# Set parameters
SFflag = False # Multiply matrix elements by SF?
symSum = False # Sum over symmetries?
phaseConvention = 'S'
thres = 1e-2
selDims = {'it':1, 'Type':'L'}
thresDims = 'Eke'
# Set terms for testing - NOTE ORDERING HERE may affect CG term!!!
dlistMatE = ['lp', 'l', 'L', 'mp', 'm', 'M'] # Match published terms
dlistP = ['p1', 'p2', 'L', 'mup', 'mu', 'M']
# dlistMatE = ['l', 'lp', 'L', 'm', 'mp', 'M'] # Standard terms
# dlistP = ['p1', 'p2', 'L', 'mu', 'mup', 'M']
# Set matrix elements
matE = dataSet[0].copy()
# Calculate betas
BetaNormXS, BetaNorm, BetaRaw, XSmatE = lfblmXprod(matE, symSum = symSum, SFflag = SFflag,
thres = thres, thresDims = thresDims, selDims = selDims,
phaseConvention = phaseConvention,
dlistMatE = dlistMatE, dlistP = dlistP)
# Here BetaNormXS includes the correct normalisation term as per the original formalism, and XSmatE is the sum of the squared matrix elements, as used for the normalisation (== cross section without correct scaling).
plotThres = None
ep.util.matEleSelector(XSmatE, thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke', col='Sym');
ep.util.matEleSelector(BetaNormXS, thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke', col='Sym');
# Summing over M gives the final LF terms, as defined above.
# The B0 term (==cross section) is not correctly scaled here.
# The B2 term matches the GetCro reference results.
ep.util.matEleSelector(BetaNormXS.unstack('LM').sum('M'), thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke', col='Sym');
Explanation: Test LF calculations (CG version)
Without sym summation
End of explanation
# Set parameters
SFflag = False # Multiply matrix elements by SF?
symSum = True # Sum over symmetries?
phaseConvention = 'S'
thres = 1e-2
selDims = {'it':1, 'Type':'L'}
thresDims = 'Eke'
# Set terms for testing - NOTE ORDERING HERE may affect CG term!!!
dlistMatE = ['lp', 'l', 'L', 'mp', 'm', 'M'] # Match published terms
dlistP = ['p1', 'p2', 'L', 'mup', 'mu', 'M']
# dlistMatE = ['l', 'lp', 'L', 'm', 'mp', 'M'] # Standard terms
# dlistP = ['p1', 'p2', 'L', 'mu', 'mup', 'M']
# Set matrix elements
matE = dataSet[0].copy()
# Calculate betas
BetaNormXS, BetaNorm, BetaRaw, XSmatE = lfblmXprod(matE, symSum = symSum, SFflag = SFflag,
thres = thres, thresDims = thresDims, selDims = selDims,
phaseConvention = phaseConvention,
dlistMatE = dlistMatE, dlistP = dlistP)
# Here BetaNormXS includes the correct normalisation term as per the original formalism, and XSmatE is the sum of the squared matrix elements, as used for the normalisation (== cross section without correct scaling).
plotThres = None
ep.util.matEleSelector(XSmatE, thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke');
ep.util.matEleSelector(BetaNormXS, thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke');
# Summing over M gives the final LF terms, as defined above.
# The B0 term (==cross section) is not correctly scaled here.
# The B2 term matches the GetCro reference results.
ep.util.matEleSelector(BetaNormXS.unstack('LM').sum('M'), thres = plotThres, dims='Eke', sq=True, drop=True).real.plot.line(x='Eke');
Explanation: With sym summation
End of explanation |
12,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotly maps
with Plotly's Python API library and Basemap
This notebook comes in response to <a href="https
Step1: From root
Step2: Import the plotly graph objects (in particular Contour) to help build our figure
Step3: Data with this notebook will be taken from a NetCDF file, so import netcdf class from the <a href="http
Step4: Finally, import the Matplotlib <a href="http
Step5: 1. Get the data!
The data is taken from <a href="http
Step6: The values lon start a 0 degrees and increase eastward to 360 degrees. So, the air array is centered about the Pacific Ocean. For a better-looking plot, shift the data so that it is centered about the 0 meridian
Step7: 2. Make Contour graph object
Very simply,
Step8: 3. Get the coastlines and country boundaries with Basemap
The Basemap module includes data for drawing coastlines and country boundaries onto world maps. Adding coastlines and/or country boundaries on a matplotlib figure is done with the .drawcoaslines() or .drawcountries() Basemap methods.
Next, we will retrieve the Basemap plotting data (or polygons) and convert them to longitude/latitude arrays (inspired by this stackoverflow <a href="http
Step9: Then,
Step10: 4. Make a figue object and plot!
Package the Contour trace with the coastline and country traces. Note that the Contour trace must be placed before the coastline and country traces in order to make all traces visible.
Step11: Layout options are set in a Layout object
Step12: Package data and layout in a Figure object and send it to plotly
Step13: See this graph in full screen <a href="https | Python Code:
import plotly
plotly.__version__
Explanation: Plotly maps
with Plotly's Python API library and Basemap
This notebook comes in response to <a href="https://twitter.com/rjallain/status/496767038782570496" target="_blank">this</a> Rhett Allain tweet.
Although Plotly does not feature built-in maps functionality (yet), this notebook demonstrates how to plotly-fy maps generated by Basemap.
<hr>
First, check which version of the Python API library is installed on your machine:
End of explanation
import plotly.plotly as py
Explanation: From root:
<img src="./block-diagram.svg" />
From a folder:
<img src="./assets/block-diagram.svg" />
Next, if you have a plotly account as well as a credentials file set up on your machine, signing in to Plotly's servers is done automatically while importing plotly.plotly.
End of explanation
from plotly.graph_objs import *
Explanation: Import the plotly graph objects (in particular Contour) to help build our figure:
End of explanation
import numpy as np
from scipy.io import netcdf
Explanation: Data with this notebook will be taken from a NetCDF file, so import netcdf class from the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.netcdf.netcdf_file.html" target="_blank">scipy.io</a> module, along with numpy:
End of explanation
from mpl_toolkits.basemap import Basemap
Explanation: Finally, import the Matplotlib <a href="http://matplotlib.org/basemap/" target="_blank">Basemap</a> Toolkit, its installation instructions can found <a href="http://matplotlib.org/basemap/users/installing.html" target="_blank">here</a>.
End of explanation
# Path the downloaded NetCDF file (different for each download)
f_path = '/home/etienne/Downloads/compday.Bo3cypJYyE.nc'
# Retrieve data from NetCDF file
with netcdf.netcdf_file(f_path, 'r') as f:
lon = f.variables['lon'][::] # copy as list
lat = f.variables['lat'][::-1] # invert the latitude vector -> South to North
air = f.variables['air'][0,::-1,:] # squeeze out the time dimension,
# invert latitude index
Explanation: 1. Get the data!
The data is taken from <a href="http://www.esrl.noaa.gov/psd/data/composites/day/" target="_blank">NOAA Earth System Research Laboratory</a>.
Unfortunately, this website does not provide a way to script the data request (e.g. with wget), so the file has to be downloaded manually. <br>
That said, the data used for this notebook can be downloaded in only a few clicks:
Select Air Temperature in Variables
Select Surface in Analysis level?
Select Jul | 1 and Jul | 31
Enter 2014 in the Enter Year of last day of range field
Select Anomaly in Plot type?
Select All in Region of globe
Click on Create Plot
Then on the following page, click on Get a copy of the netcdf data file used for the plot to download the NetCDF on your machine.
Note that the data represents the average daily surface air temperature anomaly (in deg. C) for July 2014 with respect to 1981-2010 climatology.
Now, import the NetCDF file into this IPython session. The following was inspired by this earthpy blog <a href="http://earthpy.org/interpolation_between_grids_with_basemap.html" target="_blank">post</a>.
End of explanation
# Shift 'lon' from [0,360] to [-180,180], make numpy array
tmp_lon = np.array([lon[n]-360 if l>=180 else lon[n]
for n,l in enumerate(lon)]) # => [0,180]U[-180,2.5]
i_east, = np.where(tmp_lon>=0) # indices of east lon
i_west, = np.where(tmp_lon<0) # indices of west lon
lon = np.hstack((tmp_lon[i_west], tmp_lon[i_east])) # stack the 2 halves
# Correspondingly, shift the 'air' array
tmp_air = np.array(air)
air = np.hstack((tmp_air[:,i_west], tmp_air[:,i_east]))
Explanation: The values lon start a 0 degrees and increase eastward to 360 degrees. So, the air array is centered about the Pacific Ocean. For a better-looking plot, shift the data so that it is centered about the 0 meridian:
End of explanation
trace1 = Contour(
z=air,
x=lon,
y=lat,
colorscale="RdBu",
zauto=False, # custom contour levels
zmin=-5, # first contour level
zmax=5 # last contour level => colorscale is centered about 0
)
Explanation: 2. Make Contour graph object
Very simply,
End of explanation
# Make shortcut to Basemap object,
# not specifying projection type for this example
m = Basemap()
# Make trace-generating function (return a Scatter object)
def make_scatter(x,y):
return Scatter(
x=x,
y=y,
mode='lines',
line=Line(color="black"),
name=' ' # no name on hover
)
# Functions converting coastline/country polygons to lon/lat traces
def polygons_to_traces(poly_paths, N_poly):
'''
pos arg 1. (poly_paths): paths to polygons
pos arg 2. (N_poly): number of polygon to convert
'''
traces = [] # init. plotting list
for i_poly in range(N_poly):
poly_path = poly_paths[i_poly]
# get the Basemap coordinates of each segment
coords_cc = np.array(
[(vertex[0],vertex[1])
for (vertex,code) in poly_path.iter_segments(simplify=False)]
)
# convert coordinates to lon/lat by 'inverting' the Basemap projection
lon_cc, lat_cc = m(coords_cc[:,0],coords_cc[:,1], inverse=True)
# add plot.ly plotting options
traces.append(make_scatter(lon_cc,lat_cc))
return traces
# Function generating coastline lon/lat traces
def get_coastline_traces():
poly_paths = m.drawcoastlines().get_paths() # coastline polygon paths
N_poly = 91 # use only the 91st biggest coastlines (i.e. no rivers)
return polygons_to_traces(poly_paths, N_poly)
# Function generating country lon/lat traces
def get_country_traces():
poly_paths = m.drawcountries().get_paths() # country polygon paths
N_poly = len(poly_paths) # use all countries
return polygons_to_traces(poly_paths, N_poly)
Explanation: 3. Get the coastlines and country boundaries with Basemap
The Basemap module includes data for drawing coastlines and country boundaries onto world maps. Adding coastlines and/or country boundaries on a matplotlib figure is done with the .drawcoastlines() or .drawcountries() Basemap methods.
Next, we will retrieve the Basemap plotting data (or polygons) and convert them to longitude/latitude arrays (inspired by this stackoverflow <a href="http://stackoverflow.com/questions/14280312/world-map-without-rivers-with-matplotlib-basemap" target="_blank">post</a>) and then package them into Plotly Scatter graph objects.
In other words, the goal is to plot each continuous coastline and country boundary line as one Plotly scatter line trace.
End of explanation
# Get list of of coastline and country lon/lat traces
traces_cc = get_coastline_traces()+get_country_traces()
Explanation: Then,
End of explanation
data = Data([trace1]+traces_cc)
Explanation: 4. Make a figure object and plot!
Package the Contour trace with the coastline and country traces. Note that the Contour trace must be placed before the coastline and country traces in order to make all traces visible.
End of explanation
title = u"Average daily surface air temperature anomalies [\u2103]<br> \
in July 2014 with respect to 1981-2010 climatology"
anno_text = "Data courtesy of \
<a href='http://www.esrl.noaa.gov/psd/data/composites/day/'>\
NOAA Earth System Research Laboratory</a>"
axis_style = dict(
zeroline=False,
showline=False,
showgrid=False,
ticks='',
showticklabels=False,
)
layout = Layout(
title=title,
showlegend=False,
hovermode="closest", # highlight closest point on hover
xaxis=XAxis(
axis_style,
range=[lon[0],lon[-1]] # restrict y-axis to range of lon
),
yaxis=YAxis(
axis_style,
),
annotations=Annotations([
Annotation(
text=anno_text,
xref='paper',
yref='paper',
x=0,
y=1,
yanchor='bottom',
showarrow=False
)
]),
autosize=False,
width=1000,
height=500,
)
Explanation: Layout options are set in a Layout object:
End of explanation
fig = Figure(data=data, layout=layout)
py.iplot(fig, filename="maps", width=1000)
Explanation: Package data and layout in a Figure object and send it to plotly:
End of explanation
from IPython.display import display, HTML
import urllib.request
url = 'https://raw.githubusercontent.com/plotly/python-user-guide/master/custom.css'
display(HTML(urllib.request.urlopen(url).read().decode('utf-8')))
Explanation: See this graph in full screen <a href="https://plot.ly/~etpinard/453" target="_blank">here</a>.
To learn more about Plotly's Python API
Refer to
our online documentation <a href="https://plot.ly/python/" target="_blank">page</a> or
our <a href="https://plot.ly/python/user-guide/" target="_blank">User Guide</a>.
<br>
<hr>
<br>
<div style="float:right; \">
<img alt="plotly logo" src="http://i.imgur.com/4vwuxdJ.png"
align=right style="float:right; margin-left: 5px; margin-top: -10px" />
</div>
<h4 style="margin-top:60px;"> Got Questions or Feedback? </h4>
About <a href="https://plot.ly" target="_blank">Plotly</a>
email: [email protected]
tweet:
<a href="https://twitter.com/plotlygraphs" target="_blank">@plotlygraphs</a>
<h4 style="margin-top:30px;">Notebook styling ideas</h4>
Big thanks to
<a href="http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Prologue/Prologue.ipynb" target="_blank">Cam Davidson-Pilon</a>
<a href="http://lorenabarba.com/blog/announcing-aeropython/#.U1ULXdX1LJ4.google_plusone_share" target="_blank">Lorena A. Barba</a>
<br>
End of explanation |
12,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions in Python (A Short Tutorial)
This is a tutorial showing how regular expressions are supported in Python.
The assumption is that the reader already has a grasp of the concept of
regular expressions as it is taught in lectures
on formal languages, for example in
Formal Languages and Their Application, but does not know how regular expressions are supported in Python.
In Python, regular expressions are not part of the core language but are rather implemented in the module re. This module is part of the Python standard library and therefore there is no need
to install this module. The full documentation of this module can be found at
https
Step1: Regular expressions are strings that describe <em style=\color
Step2: In the next example, the flag re.IGNORECASE is set and hence the function call returns a list of length 3.
Step3: To begin our definition of the set $\textrm{RegExp}$ of Python regular expressions, we first have to define
the set $\texttt{MetaChars}$ of all meta-characters
Step4: In the following example we have to use <em style="color
Step5: Concatenation
The next rule shows how regular expressions can be <em style="color
Step6: Choice
Regular expressions provide the operator | that can be used to choose between
<em style="color
Step7: Quantifiers
The most interesting regular expression operators are the <em style="color
Step8: If $r$ is a regular expressions, then $r$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that can be split into a list on $n$ substrings $s_1$,
$s_2$, $\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \in {1,\cdots,n}$.
Formally, we have
$$\mathcal{L}(r)
Step9: If $r$ is a regular expressions, then $r?$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that is matched by $r$. Formally we have
$$\mathcal{L}(r?)
Step10: If $r$ is a regular expressions and $m,n\in\mathbb{N}$ such that $m \leq n$, then $r{m,n}$ is a
regular expression. This regular expression matches any number $k$ of repetitions of $r$ such that $m \leq k \leq n$.
Formally, we have
$$\mathcal{L}(r{m,n}) =
\Bigl{ s \mid \exists k \in \mathbb{N}
Step11: Above, the regular expression r'a{2,3}' matches the string 'aaaa' only once since the first match consumes three occurrences of a and then there is only a single a left.
If $r$ is a regular expressions and $n\in\mathbb{N}$, then $r{n}$ is a regular expression. This regular expression matches exactly $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r{n}) = \mathcal{L}(r{n,n}).$$
Step12: If $r$ is a regular expressions and $n\in\mathbb{N}$, then $r{,n}$ is a regular expression. This regular expression matches up to $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r{,n}) = \mathcal{L}(r{0,n}).$$
Step13: If $r$ is a regular expressions and $n\in\mathbb{N}$, then $r{n,}$ is a regular expression. This regular expression matches $n$ or more repetitions of $r$. Formally, we have
$$\mathcal{L}(r{n,}) = \mathcal{L}(r{n}r*).$$
Step14: Non-Greedy Quantifiers
The quantifiers ?, +, *, {m,n}, {n}, {,n}, and {n,} are <em style="color
Step15: Character Classes
In order to match a set of characters we can use a <em style="color
Step16: Character classes can also contain <em style="color
Step17: Note that the next example looks quite similar but gives a different result
Step18: Here, the regular expression starts with the alternative [0-9], which matches any single digit.
So once a digit is found, the resulting substring is returned and the search starts again. Therefore, if this regular expression is used in findall, it will only return a list of single digits.
There are some predefined character classes
Step19: Character classes can be negated if the first character after the opening [ is the character ^.
For example, [^abc] matches any character that is different from a, b, or c.
Step20: The following regular expression uses the character class \b to isolate numbers. Note that we had to use parentheses since concatenation of regular expressions binds stronger than the choice operator |.
Step21: Grouping
If $r$ is a regular expression, then $(r)$ is a regular expression describing the same language as
$r$. There are two reasons for using parentheses
Step22: In general, given a digit $n$, the expression $\backslash n$ refers to the string matched in the $n$-th group of the regular expression.
The Dot
The regular expression . matches any character except the newline. For example, c.*?t matches any string that starts with the character c and ends with the character t and does not contain any newline. If we are using the non-greedy version of the quantifier *, we can find all such words in the string below.
Step23: The dot . does not have any special meaning when used inside a character range. Hence, the regular expression
[.] matches only the character ..
Named Groups
Referencing a group via the syntax \n where n is a natural number is both cumbersome and error-prone. Instead, we can use named groups.
The syntax to define a named group is
(?P<name>r)
where name is the name of the group and r is the regular expression. To refer to the string matched by this group we use the following syntax
Step24: Start and End of a Line
The regular expression ^ matches at the start of a string. If we set the flag re.MULTILINE, which we
will usually do when working with this regular expression containing the expression ^,
then ^ also matches at the beginning of each line,
i.e. it matches after every newline character.
Similarly, the regular expression $ matches at the end of a string. If we set the flag re.MULTILINE, then $ also matches at the end of each line,
i.e. it matches before every newline character.
Step25: Lookahead Assertions
Sometimes we need to look ahead in order to know whether we have found what we are looking for. Consider the case that you want to add up all numbers followed by a dollar symbol but you are not interested in any other numbers. In this case a
lookahead assertion comes in handy. The syntax of a lookahead assertion is
Step26: There are also <em style="color
Step27: Examples
In order to have some strings to play with, let us read the file alice.txt, which contains the book
Alice's Adventures in Wonderland written by
Lewis Carroll.
Step28: How many non-empty lines does this story have?
Step29: Next, let us check, whether this text is suitable for minors. In order to do so we search for all four
letter words that start with either d, f or s and end with k or t.
Step30: How many words are in this text and how many different words are used? | Python Code:
import re
Explanation: Regular Expressions in Python (A Short Tutorial)
This is a tutorial showing how regular expressions are supported in Python.
The assumption is that the reader already has a grasp of the concept of
regular expressions as it is taught in lectures
on formal languages, for example in
Formal Languages and Their Application, but does not know how regular expressions are supported in Python.
In Python, regular expressions are not part of the core language but are rather implemented in the module re. This module is part of the Python standard library and therefore there is no need
to install this module. The full documentation of this module can be found at
https://docs.python.org/3/library/re.html.
End of explanation
re.findall('a', 'abcabcABC')
Explanation: Regular expressions are strings that describe <em style="color:blue">languages</em>, where a
<em style="color:blue">language</em> is defined as a <em style="color:blue">set of strings</em>.
In the following, let us assume that $\Sigma$ is the set of all Unicode characters and $\Sigma^*$ is the set
of strings consisting of Unicode characters. We will define the set $\textrm{RegExp}$ of regular expressions inductively.
In order to define the meaning of a regular expression $r$ we define a function
$$ \mathcal{L}:\textrm{RegExp} \rightarrow 2^{\Sigma^*} $$
such that $\mathcal{L}(r)$ is the <em style="color:blue">language</em> specified by the regular expression $r$.
In order to demonstrate how regular expressions work we will use the function findall from the module
re. This function is called in the following way:
$$ \texttt{re.findall}(r, s, \textrm{flags}=0) $$
Here, the arguments are interpreted as follows:
- $r$ is a string that is interpreted as a regular expression,
- $s$ is a string. The regular expression $r$ specifies substrings of $s$ that we want to find.
- $\textrm{flags}$ is an optional argument of type int which is set to $0$ by default.
This argument is useful to set flags that might be used to alter the interpretation of the regular
expression $r$. For example, if the flag re.IGNORECASE is set, then the search performed by findall is not case sensitive.
The function findall returns a list of those non-overlapping substrings of the string $s$ that
match the regular expression $r$. In the following example, the regular expression $r$ searches
for the letter a and since the string $s$ contains the character a two times, findall returns a
list with two occurrences of a:
End of explanation
re.findall('a', 'abcabcABC', re.IGNORECASE)
Explanation: In the next example, the flag re.IGNORECASE is set and hence the function call returns a list of length 3.
End of explanation
re.findall('a', 'abaa')
Explanation: To begin our definition of the set $\textrm{RegExp}$ of Python regular expressions, we first have to define
the set $\texttt{MetaChars}$ of all meta-characters:
MetaChars := { '.', '^', '$', '*', '+', '?', '{', '}', '[', ']', '\', '|', '(', ')' }
These characters are used as <em style="color:blue">operator symbols</em> or as
part of operator symbols inside of regular expressions.
Now we can start our inductive definition of regular expressions:
- Any Unicode character $c$ such that $c \not\in \textrm{MetaChars}$ is a regular expression.
The regular expressions $c$ matches the character $c$, i.e. we have
$$ \mathcal{L}(c) = { c }. $$
- If $c$ is a meta character, i.e. we have $c \in \textrm{MetaChars}$, then the string $\backslash c$
is a regular expression matching the meta-character $c$, i.e. we have
$$ \mathcal{L}(\backslash c) = { c }. $$
End of explanation
re.findall(r'\+', '+-+')
re.findall('\\+', '+-+')
Explanation: In the following example we have to use <em style="color:blue">raw strings</em> in order to prevent
the backslash character from being mistaken as an <em style="color:blue">escape sequence</em>. A string is a
<em style="color:blue">raw string</em> if the opening quote character is preceded with the character
r.
End of explanation
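As a side note, and assuming the standard library behaves as documented, re.escape can build such escaped patterns automatically instead of writing the backslashes by hand:
# re.escape inserts a backslash before every regex meta-character.
pattern = re.escape('+')   # yields '\\+'
re.findall(pattern, '+-+')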
re.findall(r'the', 'The horse, the dog, and the cat.', flags=re.IGNORECASE)
Explanation: Concatenation
The next rule shows how regular expressions can be <em style="color:blue">concatenated</em>:
- If $r_1$ and $r_2$ are regular expressions, then $r_1r_2$ is a regular expression. This
regular expression matches any string $s$ that can be split into two substrings $s_1$ and $s_2$
such that $r_1$ matches $s_1$ and $r_2$ matches $s_2$. Formally, we have
$$\mathcal{L}(r_1r_2) :=
\bigl{ s_1s_2 \mid s_1 \in \mathcal{L}(r_1) \wedge s_2 \in \mathcal{L}(r_2) \bigr}.
$$
In the lecture notes we have used the notation $r_1 \cdot r_2$ instead of the Python notation $r_1r_2$.
Using concatenation of regular expressions, we can now find words.
End of explanation
re.findall(r'The|a', 'The horse, the dog, and a cat.', flags=re.IGNORECASE)
Explanation: Choice
Regular expressions provide the operator | that can be used to choose between
<em style="color:blue">alternatives:</em>
- If $r_1$ and $r_2$ are regular expressions, then $r_1|r_2$ is a regular expression. This
regular expression matches any string $s$ that is matched by either $r_1$ or $r_2$.
Formally, we have
$$\mathcal{L}(r_1|r_2) := \mathcal{L}(r_1) \cup \mathcal{L}(r_2). $$
In the lecture notes we have used the notation $r_1 + r_2$ instead of the Python notation $r_1|r_2$.
End of explanation
re.findall(r'a+', 'abaabaAaba.', flags=re.IGNORECASE)
Explanation: Quantifiers
The most interesting regular expression operators are the <em style="color:blue">quantifiers</em>.
The official documentation calls them <em style="color:blue">repetition qualifiers</em> but in this notebook
they are called quantifiers, since this is shorter. Syntactically, quantifiers are
<em style="color:blue">postfix operators</em>.
- If $r$ is a regular expression, then $r+$ is a regular expression. This
regular expression matches any string $s$ that can be split into a list of $n$ substrings $s_1$,
$s_2$, $\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \in {1,\cdots,n}$.
Formally, we have
$$\mathcal{L}(r+) :=
\Bigl{ s \Bigm| \exists n \in \mathbb{N}: \bigl(n \geq 1 \wedge
\exists s_1,\cdots,s_n : (s_1 \cdots s_n = s \wedge
\forall i \in {1,\cdots, n}: s_i \in \mathcal{L}(r)\bigr)
\Bigr}.
$$
Informally, $r+$ matches $r$ any positive number of times.
End of explanation
re.findall(r'a*', 'abaabaaaba')
Explanation: If $r$ is a regular expression, then $r*$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that can be split into a list of $n$ substrings $s_1$,
$s_2$, $\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \in {1,\cdots,n}$.
Formally, we have
$$\mathcal{L}(r*) := \bigl{ \texttt{''} \bigr} \cup
\Bigl{ s \Bigm| \exists n \in \mathbb{N}: \bigl(n \geq 1 \wedge
\exists s_1,\cdots,s_n : (s_1 \cdots s_n = s \wedge
\forall i \in {1,\cdots, n}: s_i \in \mathcal{L}(r)\bigr)
\Bigr}.
$$
Informally, $r*$ matches $r$ any number of times, including zero times. Therefore, in the following example the result also contains various empty strings. For example, in the string 'abaabaaaba' the regular expression a* will find an empty string at the beginning of each occurrence of the character 'b'. The final occurrence of the empty string is found at the end of the string:
End of explanation
re.findall(r'a?', 'abaa')
Explanation: If $r$ is a regular expression, then $r?$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that is matched by $r$. Formally we have
$$\mathcal{L}(r?) := \bigl{ \texttt{''} \bigr} \cup \mathcal{L}(r). $$
Informally, $r?$ matches $r$ at most once, but also zero times. Therefore, in the following example the result also contains two empty strings. One of these is found at the position of the character 'b', the second is found at the end of the string.
End of explanation
re.findall(r'a{2,3}', 'aaaa')
Explanation: If $r$ is a regular expression and $m,n\in\mathbb{N}$ such that $m \leq n$, then $r{m,n}$ is a
regular expression. This regular expression matches any number $k$ of repetitions of $r$ such that $m \leq k \leq n$.
Formally, we have
$$\mathcal{L}(r{m,n}) =
\Bigl{ s \mid \exists k \in \mathbb{N}: \bigl(m \leq k \leq n \wedge
\exists s_1,\cdots,s_k : (s_1 \cdots s_k = s \wedge
\forall i \in {1,\cdots, k}: s_i \in \mathcal{L}(r)\bigr)
\Bigr}.
$$
Informally, $r{m,n}$ matches $r$ at least $m$ times and at most $n$ times.
End of explanation
re.findall(r'a{2}', 'aabaaaba')
Explanation: Above, the regular expression r'a{2,3}' matches the string 'aaaa' only once since the first match consumes three occurrences of a and then there is only a single a left.
If $r$ is a regular expression and $n\in\mathbb{N}$, then $r{n}$ is a regular expression. This regular expression matches exactly $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r{n}) = \mathcal{L}(r{n,n}).$$
End of explanation
re.findall(r'a{,2}', 'aabaaabba')
Explanation: If $r$ is a regular expression and $n\in\mathbb{N}$, then $r{,n}$ is a regular expression. This regular expression matches up to $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r{,n}) = \mathcal{L}(r{0,n}).$$
End of explanation
re.findall(r'a{2,}', 'aabaaaba')
Explanation: If $r$ is a regular expression and $n\in\mathbb{N}$, then $r{n,}$ is a regular expression. This regular expression matches $n$ or more repetitions of $r$. Formally, we have
$$\mathcal{L}(r{n,}) = \mathcal{L}(r{n}r*).$$
End of explanation
re.findall(r'a{2,3}?', 'aaaa'), re.findall(r'a{2,3}', 'aaaa')
Explanation: Non-Greedy Quantifiers
The quantifiers ?, +, *, {m,n}, {n}, {,n}, and {n,} are <em style="color:blue">greedy</em>, i.e. they
match the longest possible substrings. Suffixing these operators with the character ? makes them
<em style="color:blue">non-greedy</em>. For example, the regular expression a{2,3}? matches either
two occurrences or three occurrences of the character a but will prefer to match only two characters. Hence, the regular expression a{2,3}? will find two matches in the string 'aaaa', while the regular expression a{2,3} only finds a single match.
End of explanation
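The same difference shows up with the non-greedy *? and +?. The following small sketch (an illustration added here, not from the original text) matches the shortest possible text between angle brackets:
# Greedy .* consumes as much as possible, non-greedy .*? as little as possible.
re.findall(r'<.*>', '<a><b>')    # ['<a><b>']
re.findall(r'<.*?>', '<a><b>')   # ['<a>', '<b>']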
re.findall(r'[abc]+', 'abcdcba')
Explanation: Character Classes
In order to match a set of characters we can use a <em style="color:blue">character class</em>.
If $c_1$, $\cdots$, $c_n$ are Unicode characters, then $[c_1\cdots c_n]$ is a regular expression that
matches any of the characters from the set ${c_1,\cdots,c_n}$:
$$ \mathcal{L}\bigl([c_1\cdots c_n]\bigr) := { c_1, \cdots, c_n } $$
End of explanation
re.findall(r'[1-9][0-9]*|0', '11 abc 12 2345 007 42 0')
Explanation: Character classes can also contain <em style="color:blue">ranges</em>. Syntactically, a range has the form
$c_1\texttt{-}c_2$, where $c_1$ and $c_2$ are Unicode characters.
For example, the regular expression [0-9] contains the range 0-9 and matches any decimal digit. To find all natural numbers embedded in a string we could use the regular expression [1-9][0-9]*|0, which matches either a string that starts with a non-zero digit followed by any number of digits, or the single digit 0.
End of explanation
re.findall(r'[0-9]|[1-9][0-9]*', '11 abc 12 2345 007 42 0')
Explanation: Note that the next example looks quite similar but gives a different result:
End of explanation
re.findall(r'[\dabc]+', '11 abc12 1a2 2b3c4d5')
Explanation: Here, the regular expression starts with the alternative [0-9], which matches any single digit.
So once a digit is found, the resulting substring is returned and the search starts again. Therefore, if this regular expression is used in findall, it will only return a list of single digits.
There are some predefined character classes:
- \d matches any digit.
- \D matches any non-digit character.
- \s matches any whitespace character.
- \S matches any non-whitespace character.
- \w matches any alphanumeric character.
If we were to use only <font style="font-variant: small-caps">Ascii</font> characters, this would
be equivalent to the character class [0-9a-zA-Z_].
- \W matches any non-alphanumeric character.
- \b matches at a word boundary. The string that is matched is the empty string.
- \B matches at any place that is not a word boundary.
Again, the string that is matched is the empty string.
These escape sequences can also be used inside of square brackets.
End of explanation
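As an additional illustration (not part of the original tutorial), \b and \B behave as follows:
# \b matches at word boundaries, \B everywhere else; both match the empty string.
re.findall(r'\bcat\b', 'cat catalog concat cat')   # ['cat', 'cat']
re.findall(r'\Bcat\B', 'concatenate')              # ['cat']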
re.findall(r'[^abc]+', 'axyzbuvwchij')
re.findall(r'\b\w+\b', 'This is some text where we want to extract the words.')
Explanation: Character classes can be negated if the first character after the opening [ is the character ^.
For example, [^abc] matches any character that is different from a, b, or c.
End of explanation
re.findall(r'\b(0|[1-9][0-9]*)\b', '11 abc 12 2345 007 42 0')
Explanation: The following regular expression uses the character class \b to isolate numbers. Note that we had to use parentheses since concatenation of regular expressions binds stronger than the choice operator |.
End of explanation
re.findall(r'(\d+)\s+\1', '12 12 23 23 17 18')
Explanation: Grouping
If $r$ is a regular expression, then $(r)$ is a regular expression describing the same language as
$r$. There are two reasons for using parentheses:
- Parentheses can be used to override the precedence of an operator.
This concept is the same as in programming languages. For example, the regular expression ab+
matches the character a followed by any positive number of occurrences of the character b because
the precedence of a quantifiers is higher than the precedence of concatenation of regular expressions.
However, (ab)+ matches the strings ab, abab, ababab, and so on.
- Parentheses can be used for <em style="color:blue">back-references</em> because inside
a regular expression we can refer to the substring matched by a regular expression enclosed in a pair of
parentheses using the syntax $\backslash n$ where $n \in {1,\cdots,9}$.
Here, $\backslash n$ refers to the $n^{\mathrm{th}}$ parenthesized <em style="color:blue">group</em> in the regular
expression, where a group is defined as any part of the regular expression enclosed in parentheses.
Counting starts with the left parentheses, For example, the regular expression
(a(b|c)*d)?ef(gh)+
has three groups:
1. (a(b|c)*d) is the first group,
2. (b|c) is the second group, and
3. (gh) is the third group.
For example, if we want to recognize a string that starts with a number followed by some white space and then
followed by the <b>same</b> number, we can use the regular expression (\d+)\s+\1.
End of explanation
re.findall(r'c.*?t', 'ct cat caat could we look at that!')
Explanation: In general, given a digit $n$, the expression $\backslash n$ refers to the string matched in the $n$-th group of the regular expression.
The Dot
The regular expression . matches any character except the newline. For example, c.*?t matches any string that starts with the character c and ends with the character t and does not contain any newline. If we are using the non-greedy version of the quantifier *, we can find all such words in the string below.
End of explanation
re.findall(r'((?P<quote>[\'"])\w*(?P=quote))', 'abc "uvw" and \'xyz\'')
Explanation: The dot . does not have any special meaning when used inside a character range. Hence, the regular expression
[.] matches only the character ..
Named Groups
Referencing a group via the syntax \n where n is a natural number is both cumbersome and error-prone. Instead, we can use named groups.
The syntax to define a named group is
(?P<name>r)
where name is the name of the group and r is the regular expression. To refer to the string matched by this group we use the following syntax:
(?P=name)
For example, below we try to find a string of alphanumeric characters that is either contained in single quotes or in double quotes. The regular
expression [\'"] matches either a single or a double quote. By referring to the regular expression that has been named
quote we ensure that an opening single quote is matched by a closing single quote and an opening double quote is matched by a
closing double quote.
End of explanation
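Named groups are also convenient when working with match objects directly; here is a small sketch (the date format is just an assumed example):
# match.group accepts group names as well as group numbers.
m = re.search(r'(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})', 'released on 2021-07-15')
m.group('year'), m.group('month'), m.group('day')   # ('2021', '07', '15')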
data = \
'''
This is a text containing five lines, two of which are empty.
This is the second non-empty line,
and this is the third non-empty line.
'''
re.findall(r'^.*$', data, flags=re.MULTILINE)
Explanation: Start and End of a Line
The regular expression ^ matches at the start of a string. If we set the flag re.MULTILINE, which we
will usually do when working with this regular expression containing the expression ^,
then ^ also matches at the beginning of each line,
i.e. it matches after every newline character.
Similarly, the regular expression $ matches at the end of a string. If we set the flag re.MULTILINE, then $ also matches at the end of each line,
i.e. it matches before every newline character.
End of explanation
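For instance, going slightly beyond the original example, ^ can be used to grab the first word of every non-empty line of the string data defined above:
# With re.MULTILINE, ^ anchors at the start of every line.
re.findall(r'^\w+', data, flags=re.MULTILINE)   # ['This', 'This', 'and']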
text = 'Here is 1$, here are 21€, and there are 42 $.'
L = re.findall(r'([0-9]+)(?=\s*\$)', text)
print(f'L = {L}')
sum(int(x) for x in L)
Explanation: Lookahead Assertions
Sometimes we need to look ahead in order to know whether we have found what we are looking for. Consider the case that you want to add up all numbers followed by a dollar symbol but you are not interested in any other numbers. In this case a
lookahead assertion comes in handy. The syntax of a lookahead assertion is:
$$ r_1 (\texttt{?=}r_2) $$
Here $r_1$ and $r_2$ are regular expressions and ?= is the <em style="color:blue">lookahead operator</em>. $r_1$ is the regular expression you are searching for while $r_2$ is the regular expression describing the lookahead. Note that this lookahead is not matched. It is only checked whether $r_1$ is followed by $r_2$ but only the text matching $r_1$ is matched. Syntactically, the
lookahead $r_2$ has to be preceded by the lookahead operator and both have to be surrounded by parentheses.
In the following example we are looking for all numbers that are followed by dollar symbols and we sum these numbers up.
End of explanation
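Python's re module also supports lookbehind assertions, (?<=r) and (?<!r), provided the lookbehind pattern has a fixed width. A small sketch, added here for completeness:
# Positive lookbehind: match numbers that are directly preceded by a dollar symbol.
re.findall(r'(?<=\$)[0-9]+', 'prices: $42, 7 apples, $8')   # ['42', '8']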
text = 'Here is 1$, here are 21 €, and there are 42 $.'
L = re.findall(r'[0-9]+(?![0-9]*\s*\$)', text)
print(f'L = {L}')
sum(int(x) for x in L)
Explanation: There are also <em style="color:blue">negative lookahead assertion</em>. The syntax is:
$$ r_1 (\texttt{?!}r_2) $$
Here $r_1$ and $r_2$ are regular expressions and ?! is the <em style="color:blue">negative lookahead operator</em>.
The expression above checks for all occurrences of $r_1$ that are <b>not</b> followed by $r_2$.
In the following examples we sum up all numbers that are <u>not</u> followed by a dollar symbol.
Note that the lookahead expression has to ensure that there are no additional digits. In general, negative lookahead is very tricky and I recommend against using it.
End of explanation
with open('alice.txt', 'r') as f:
text = f.read()
print(text[:1020])
Explanation: Examples
In order to have some strings to play with, let us read the file alice.txt, which contains the book
Alice's Adventures in Wonderland written by
Lewis Carroll.
End of explanation
len(re.findall(r'^.*\S.*?$', text, flags=re.MULTILINE))
Explanation: How many non-empty lines does this story have?
End of explanation
set(re.findall(r'\b[dfs]\w{2}[kt]\b', text, flags=re.IGNORECASE))
Explanation: Next, let us check, whether this text is suitable for minors. In order to do so we search for all four
letter words that start with either d, f or s and end with k or t.
End of explanation
L = re.findall(r'\b\w+\b', text.lower())
S = set(L)
print(f'There are {len(L)} words in this book and {len(S)} different words.')
Explanation: How many words are in this text and how many different words are used?
End of explanation |
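If one also wanted to see which words occur most often, collections.Counter offers a convenient way to tabulate the list L computed above (a small optional extension):
from collections import Counter

# The five most common words in the book, together with their counts.
Counter(L).most_common(5)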
12,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in Keras.
2. Build a simple RNN model and a multi-layer RNN model in Keras.
In this lab we will start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in Keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in Keras.
We will be exploring a lot of different model types in this notebook.
Step1: Note
Step2: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
Step4: The function clean_data below does three things
Step7: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
Step11: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
Step12: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
Step14: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
Step15: Baseline
Before we begin modeling in Keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
Step16: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
Step17: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
Step18: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with Keras. We'll experiment with a two layer DNN here but feel free to try a more complex model or add any other additional techniques to try an improve your performance.
Step19: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolutional can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1d in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in Keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
Step20: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in Keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
Step21: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in Keras.
2. Build a simple RNN model and a multi-layer RNN model in Keras.
In this lab we will start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in Keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in Keras.
We will be exploring a lot of different model types in this notebook.
End of explanation
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
%env
PROJECT = PROJECT
BUCKET = BUCKET
REGION = REGION
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, DenseFeatures,
Conv1D, MaxPool1D,
Reshape, RNN,
LSTM, GRU, Bidirectional)
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
# To plot pretty figures
%matplotlib inline
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# For reproducible results.
from numpy.random import seed
seed(1)
tf.random.set_seed(2)
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Load necessary libraries and set up environment variables
End of explanation
%%time
bq = bigquery.Client(project=PROJECT)
bq_query = '''
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
LIMIT
100
'''
Explanation: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
End of explanation
def clean_data(input_df):
    """Cleans data to prepare for training.

    Args:
        input_df: Pandas dataframe.
    Returns:
        Pandas dataframe.
    """
df = input_df.copy()
# Remove inf/na values.
real_valued_rows = ~(df == np.inf).max(axis=1)
df = df[real_valued_rows].dropna()
# TF doesn't accept datetimes in DataFrame.
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
# TF requires numeric label.
df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0,
'STAY': 1,
'UP': 2}[x])
return df
Explanation: The function clean_data below does three things:
1. First, we'll remove any inf or NA values
2. Next, we parse the Date field to read it as a string.
3. Lastly, we convert the label direction into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.
End of explanation
STOCK_HISTORY_COLUMN = 'close_values_prior_260'
COL_NAMES = ['day_' + str(day) for day in range(0, 260)]
LABEL = 'direction_numeric'
def _scale_features(df):
    """z-scale feature columns of Pandas dataframe.

    Args:
        df: Pandas dataframe.
    Returns:
        Pandas dataframe with each column standardized according to the
        values in that column.
    """
avg = df.mean()
std = df.std()
return (df - avg) / std
def create_features(df, label_name):
    """Create modeling features and label from Pandas dataframe.

    Args:
        df: Pandas dataframe.
        label_name: str, the column name of the label.
    Returns:
        Pandas dataframe.
    """
# Expand 1 column containing a list of close prices to 260 columns.
time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)
# Rename columns.
time_series_features.columns = COL_NAMES
time_series_features = _scale_features(time_series_features)
# Concat time series features with static features and label.
label_column = df[LABEL]
return pd.concat([time_series_features,
label_column], axis=1)
Explanation: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
End of explanation
def _create_split(phase):
    """Create string to produce train/valid/test splits for a SQL query.

    Args:
        phase: str, either TRAIN, VALID, or TEST.
    Returns:
        String.
    """
floor, ceiling = '2002-11-01', '2010-07-01'
if phase == 'VALID':
floor, ceiling = '2010-07-01', '2011-09-01'
elif phase == 'TEST':
floor, ceiling = '2011-09-01', '2012-11-30'
return '''
WHERE Date >= '{0}'
AND Date < '{1}'
'''.format(floor, ceiling)
def create_query(phase):
    """Create SQL query to create train/valid/test splits on subsample.

    Args:
        phase: str, either TRAIN, VALID, or TEST.
    Returns:
        String.
    """
    basequery = """
    #standardSQL
    SELECT
        symbol,
        Date,
        direction,
        close_values_prior_260
    FROM
        `stock_market.eps_percent_change_sp500`
    """
    return basequery + _create_split(phase)
Explanation: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
End of explanation
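The notebook later reads ../stock-train.csv and ../stock-valid.csv, so the queries above were presumably materialized to disk at some point. A possible sketch of that export step, assuming the helpers defined earlier and the BigQuery client bq (the exact file names and paths are assumptions):
# Hypothetical export: run each split query, preprocess, and save locally.
for phase in ['TRAIN', 'VALID', 'TEST']:
    df = bq.query(create_query(phase)).to_dataframe()
    df = create_features(clean_data(df), LABEL)
    df.to_csv('../stock-{}.csv'.format(phase.lower()), index=False)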
N_TIME_STEPS = 260
N_LABELS = 3
Xtrain = pd.read_csv('../stock-train.csv')
Xvalid = pd.read_csv('../stock-valid.csv')
ytrain = Xtrain.pop(LABEL)
yvalid = Xvalid.pop(LABEL)
ytrain_categorical = to_categorical(ytrain.values)
yvalid_categorical = to_categorical(yvalid.values)
Explanation: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
End of explanation
def plot_curves(train_data, val_data, label='Accuracy'):
    """Plot training and validation metrics on single axis.

    Args:
        train_data: list, metrics obtained from training data.
        val_data: list, metrics obtained from validation data.
        label: str, title and label for plot.
    Returns:
        Matplotlib plot.
    """
plt.plot(np.arange(len(train_data)) + 0.5,
train_data,
"b.-", label="Training " + label)
plt.plot(np.arange(len(val_data)) + 1,
val_data, "r.-",
label="Validation " + label)
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel(label)
plt.grid(True)
Explanation: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
End of explanation
sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]
Explanation: Baseline
Before we begin modeling in Keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
End of explanation
model = Sequential()
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
Explanation: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
End of explanation
np.mean(history.history['val_accuracy'][-5:])
Explanation: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
End of explanation
dnn_hidden_units = [16, 8]
model = Sequential()
for layer in dnn_hidden_units:
model.add(Dense(units=layer,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with Keras. We'll experiment with a two layer DNN here but feel free to try a more complex model or add any other additional techniques to try an improve your performance.
End of explanation
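Since validation accuracy levels off quickly, one optional refinement (not part of the original lab) is to stop training automatically once the validation metric stops improving, using the callbacks already imported above:
# Optional: halt training early and keep the best weights seen so far.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=3, restore_best_weights=True)
# Pass callbacks=[early_stopping] to model.fit(...) in any of the models below.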
model = Sequential()
# Convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(Conv1D(filters=5,
kernel_size=5,
strides=2,
padding="valid",
input_shape=[None, 1]))
model.add(MaxPool1D(pool_size=2,
strides=None,
padding='valid'))
# Flatten the result and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS//4,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.01),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolutional can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1d in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in Keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
End of explanation
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(LSTM(N_TIME_STEPS // 8,
activation='relu',
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
# Create the model.
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=40,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in Keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
End of explanation
rnn_hidden_units = [N_TIME_STEPS // 16,
N_TIME_STEPS // 32]
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units[:-1]:
model.add(GRU(units=layer,
activation='relu',
return_sequences=True))
model.add(GRU(units=rnn_hidden_units[-1],
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=50,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
End of explanation |
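After choosing a model based on validation accuracy, a final check on the held-out test set could look like the sketch below; the test CSV path mirrors the train/validation files and is an assumption:
# Hypothetical final evaluation on the test split.
Xtest = pd.read_csv('../stock-test.csv')
ytest_categorical = to_categorical(Xtest.pop(LABEL).values)
model.evaluate(x=Xtest.values, y=ytest_categorical)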
12,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Neural Structured Learning Authors
Step1: 合成グラフを使ってセンチメント分類を実施するためのグラフ正則化
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 依存関係とインポート
Step3: IMDB データセット
The IMDB dataset contains the text of 50,000 movie reviews from the Internet Movie Database, split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
In this tutorial, we will use a preprocessed version of the IMDB dataset.
Download the preprocessed IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded).
Step4: The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. Rare words are discarded to keep the size of the vocabulary manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer of either 0 or 1, where 0 is a negative review and 1 is a positive review.
Step5: The text of reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like.
Step6: Movie reviews may be different lengths. The following code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
Step7: Convert the integers back to words
It may be useful to know how to convert integers back to the corresponding text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping.
Step8: Now we can use the decode_review function to display the text of the first review.
Step9: Graph construction
Graph construction involves creating embeddings for text samples and then using a similarity function to compare the embeddings.
Before proceeding further, we first create a directory to store artifacts created by this tutorial.
Step14: Create sample embeddings
We will use pretrained Swivel embeddings to create embeddings in the tf.train.Example format for each sample in the input. We will store the resulting embeddings in the TFRecord format along with the ID of each sample, which is important for matching sample embeddings with their corresponding nodes in the graph later.
Step15: Build a graph
Now that we have the sample embeddings, we will use them to build a similarity graph: nodes in this graph will correspond to samples and edges will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building library to build a graph based on sample embeddings. It uses cosine similarity as the similarity measure to compare embeddings and build edges between them, and it allows us to specify a similarity threshold to discard dissimilar edges from the final graph. In the following example, using 0.99 as the similarity threshold and 12345 as the random seed, we end up with a graph that has 429,415 bi-directional edges. Here we're using the graph builder's support for locality-sensitive hashing (LSH) to speed up graph building; see the build_graph_from_config API documentation for details.
Step16: Each bi-directional edge is represented by two directed edges in the output TSV file, so the file contains 429,415 * 2 = 858,830 total lines.
Step18: Note
Step19: Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the augmented training data for Neural Structured Learning. The NSL framework provides a library to combine the graph and the sample features to produce the final training data for graph regularization. The resulting training data includes the original sample features as well as the features of their corresponding neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors per sample to augment the training data with graph neighbors.
Step20: Base model
We are now ready to build a base model without graph regularization. To build this model, we can either use the embeddings that were used in building the graph, or learn new embeddings jointly with the classification task. For the purpose of this notebook, we will do the latter.
グローバル変数
Step22: ハイパーパラメータ
HParams のインスタンスを使用して、トレーニングと評価に使用する様々なハイパーパラメータと定数をインクルードします。それぞれについての簡単な説明を以下に示します。
num_classes
Step26: データを準備する
整数の配列で表現されたレビューをニューラルネットワークにフィードする前にテンソルに変換する必要があります。この変換は、以下の 2 つの方法で行われます。
配列を、ワンホットエンコーディングと同様に、単語の出現を示す 0 と 1 のベクトルに変換します。たとえば、シーケンス [3, 5] は、1 を示す 3 と 5 を除き、すべてゼロの 10000 次元のベクトルになります。次に、これをネットワークの最初のレイヤーである、浮動小数点のベクトルデータを処理できる Dense レイヤーにします。ただし、このアプローチはメモリを集中的に使用するため、num_words * num_reviews サイズの行列が必要です。
または、配列の長さが同じになるように配列にパディングを行い、形状 max_length * num_reviews の整数テンソルを作成することができます。この形状を処理できる埋め込みレイヤーをネットワークの最初のレイヤーとして使用します。
このチュートリアルでは、後者のアプローチを使用します。
映画レビューの長さは同じである必要があるため、以下に定義される pad_sequence 関数を使用して、長さを標準化します。
Step29: モデルを構築する
ニューラルネットワークは、レイヤーをスタックして作成されており、これには、2 つの主なアーキテクチャ上の決定が必要です。
モデルにはいくつのレイヤーを使用するか。
各レイヤーにはいくつの非表示ユニットを使用するか。
この例では、入力データは単語のインデックスの配列で構成されています。予測するラベルは 0 または 1 です。
このチュートリアルでは、基本モデルとして双方向 LSTM を使用します。
Step30: レイヤーは分類器を構成するため効果的に一列に積み重ねられます。
最初のレイヤーは、整数でエンコーディングされた語彙を取る Input レイヤーです。
次のレイヤーは、整数でエンコーディングされた語彙を受け取って、埋め込みベクトルで各単語インデックスをルックアップする Embedding レイヤーです。これらのベクトルはモデルのトレーニングの過程で学習されます。ベクトルは出力配列に次元を追加します。生成される次元は、(batch, sequence, embedding) です。
次に、双方向 LSTM レイヤーがサンプルごとに固定長の出力ベクトルを返します。
この固定長の出力ベクトルは、64 個の非表示ユニットを持つ全結合(Dense)レイヤーに受け渡されます。
最後のレイヤーは、単一の出力ノードに密に接続されています。sigmoid 活性化関数を使用し、この値は、確率または信頼水準を表す 0 と 1 の間の浮動小数となります。
非表示ユニット
上記のモデルには、Embedding を除き、入力と出力の間に 2 つの中間または「非表示」レイヤーがあります。出力数(ユニット、ノード、またはニューロン)はレイヤーの表現空間の次元で、言い換えると、内部表現を学習する際にネットワークに許可された自由度です。
モデルにより大きい非表示ユニット数(より高次元の表現空間)がある場合や、レイヤー数が増えるほど、ネットワークはよく複雑な表現を学習できますが、ネットワークの計算コストが高まり、不要なパターンが学習される可能性があります。これらのパターンはトレーニングデータのパフォーマンスを改善しても、テストデータのパフォーマンスは改善しません。この現象は「過適合」と呼ばれています。
損失関数とオプティマイザ
モデルをトレーニングするには、損失関数とオプティマイザが必要です。これは二項分類問題であり、モデルは確率(シグモイド活性を持つ単一ユニットレイヤー)を出力するため、binary_crossentropy 損失関数を使用します。
Step31: 検証セットを作成する
トレーニングの際に、モデルが遭遇したことのないデータで、モデルの精度を確認します。この場合、元のトレーニングデータの一部を分割し、検証セットを作成します。(ここでテストセットを使用しないのは、トレーニングデータのみを使用してモデルの開発とチューニングを行い、その後でテストデータを一度だけ使用して精度を評価することが目標であるためです)。
このチュートリアルでは、最初のトレーニングサンプルのおよそ 10%(25000 の 10%)をトレーニングのラベル付きデータとして取り、残りを検証データとしています。最初のトレーニングデータとテストデータの割合は 50
Step32: モデルをトレーニングする
モデルをミニバッチでトレーニングします。トレーニング中に、検証セットでのモデルの損失と精度を監視します。
Step33: モデルを評価する
モデルがどのように実行するかを確認しましょう。損失(誤差を表す数値で、低いほど良です)と精度の 2 つの値が返されます。
Step34: 精度と損失の経時的なグラフを作成する
model.fit() は、トレーニング中に発生したすべての情報を収めたディクショナリを含む History オブジェクトを返します。
Step35: トレーニングと検証中に監視されている各メトリックに対して 1 つずつ、計 4 つのエントリがあります。このエントリを使用して、トレーニングと検証の損失とトレーニングと検証の精度を比較したグラフを作成することができます。
Step36: トレーニングの損失がエポックごとに下降し、トレーニングの精度がエポックごとに上昇していることに注目してください。これは、勾配下降最適化を使用しているときに見られる現象で、イテレーションごとに希望する量を最小化します。
グラフの正則化
上記で構築した基本モデルを使用して、グラフの正則化を試す準備が整いました。Neural Structured Learning フレームワークが提供する GraphRegularization ラッパークラスを使用して基本(bi-LSTM)モデルをラップし、グラフの正則化を含めます。グラフ正則化のトレーニングと評価の残りのステップは、基本モデルのトレーニングと評価と同じです。
グラフ正則化モデルを作成する
グラフ正則化の増分効果を評価するために、基本モデルの新しいインスタンスを作成します。これは、model がすでに数回のイテレーションでトレーニングされており、このトレーニング済みのモデルを再利用してグラフ正則化モデルを作成しても、modelの公平な比較にならないためです。
Step37: モデルをトレーニングする
Step38: モデルを評価する
Step39: 精度と損失の経時的なグラフを作成する
Step40: ディクショナリには、トレーニング損失、トレーニング精度、トレーニンググラフ損失、検証損失、および検証精度の 5 つのエントリがあります。これらをまとめてプロットし、比較に使用することができます。グラフ損失はトレーニング中にのみ計算されることに注意してください。
Step41: 半教師あり学習の性能
半教師あり学習、さらに具体的に言えば、このチュートリアルの文脈でのグラフ正則化は、トレーニングデータの量が少ない場合に非常に強力です。トレーニングデータの不足分は、トレーニングサンプル間の類似度を利用して補完されます。これは、従来の教師あり学習では実現できません。
supervision ratio(教師率)を、トレーニング、検証、およびテストサンプルを含むサンプル総数に対するトレーニングサンプルの比率として定義します。このノートブックでは、基本モデルとグラフ正則化モデルの両方のトレーニングに 0.05 の教師率(ラベル付きデータの 5%)を使用しました。教師率がモデルの精度に与える影響を以下のセルで説明します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Neural Structured Learning Authors
End of explanation
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
Explanation: Graph regularization for sentiment classification using synthesized graphs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
<td><a href="https://tfhub.dev/"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TFハブモデルを参照してください</a></td>
</table>
Overview
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.
We demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:
Create embeddings for each text sample in the input. This can be done using pre-trained models such as word2vec, Swivel, BERT, etc.
Build a graph based on these embeddings by using a similarity metric such as the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples.
Generate training data from the above synthesized graph and sample features. The resulting training data contains neighbor features in addition to the original node features.
Create a neural network as a base model using the Keras Sequential, Functional, or Subclass API.
Wrap the base model with the GraphRegularization wrapper class, which is provided by the NSL framework, to create a new graph Keras model. This new model includes a graph regularization loss as the regularization term in its training objective.
Train and evaluate the graph Keras model.
Note: We expect that it will take about 1 hour to go through this tutorial.
Requirements
Install the Neural Structured Learning package.
Install tensorflow-hub.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
import tensorflow_hub as hub
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Explanation: Dependencies and imports
End of explanation
imdb = tf.keras.datasets.imdb
(pp_train_data, pp_train_labels), (pp_test_data, pp_test_labels) = (
imdb.load_data(num_words=10000))
Explanation: IMDB dataset
The IMDB dataset contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
In this tutorial, we will use a preprocessed version of the IMDB dataset.
Download the preprocessed IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded).
End of explanation
print('Training entries: {}, labels: {}'.format(
len(pp_train_data), len(pp_train_labels)))
training_samples_count = len(pp_train_data)
Explanation: The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. Rare words are discarded to keep the size of the vocabulary manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer of either 0 or 1, where 0 is a negative review and 1 is a positive review.
End of explanation
print(pp_train_data[0])
Explanation: The text of reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like.
End of explanation
len(pp_train_data[0]), len(pp_train_data[1])
Explanation: Movie reviews may be different lengths. The following code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
End of explanation
def build_reverse_word_index():
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index['<PAD>'] = 0
word_index['<START>'] = 1
word_index['<UNK>'] = 2 # unknown
word_index['<UNUSED>'] = 3
return dict((value, key) for (key, value) in word_index.items())
reverse_word_index = build_reverse_word_index()
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
Explanation: Convert the integers back to words
It may be useful to know how to convert integers back to the corresponding text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping.
End of explanation
decode_review(pp_train_data[0])
Explanation: Now we can use the decode_review function to display the text of the first review.
End of explanation
!mkdir -p /tmp/imdb
Explanation: Graph construction
Graph construction involves creating embeddings for text samples and then using a similarity function to compare the embeddings.
Before proceeding further, we first create a directory to store the artifacts created by this tutorial.
End of explanation
pretrained_embedding = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1'
hub_layer = hub.KerasLayer(
pretrained_embedding, input_shape=[], dtype=tf.string, trainable=True)
def _int64_feature(value):
  """Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
  """Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
  """Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(word_vector, record_id):
  """Create tf.Example containing the sample's embedding and its ID."""
text = decode_review(word_vector)
# Shape = [batch_size,].
sentence_embedding = hub_layer(tf.reshape(text, shape=[-1,]))
# Flatten the sentence embedding back to 1-D.
sentence_embedding = tf.reshape(sentence_embedding, shape=[-1])
features = {
'id': _bytes_feature(str(record_id)),
'embedding': _float_feature(sentence_embedding.numpy())
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_embeddings(word_vectors, output_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for word_vector in word_vectors:
example = create_embedding_example(word_vector, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(pp_train_data, '/tmp/imdb/embeddings.tfr', 0)
Explanation: Create sample embeddings
We will use pretrained Swivel embeddings to create embeddings in the tf.train.Example format for each sample in the input. We will store the resulting embeddings in the TFRecord format along with the ID of each sample. This is an important step: it allows us to match sample embeddings with their corresponding nodes in the graph later.
End of explanation
graph_builder_config = nsl.configs.GraphBuilderConfig(
similarity_threshold=0.99, lsh_splits=32, lsh_rounds=15, random_seed=12345)
nsl.tools.build_graph_from_config(['/tmp/imdb/embeddings.tfr'],
'/tmp/imdb/graph_99.tsv',
graph_builder_config)
Explanation: Build a graph
Now that we have the sample embeddings, we will use them to build a similarity graph: nodes in this graph will correspond to samples and edges will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building library that builds a graph based on sample embeddings. It uses cosine similarity as the similarity measure to compare embeddings and build edges between them. It also allows us to specify a similarity threshold, which can be used to discard dissimilar edges from the final graph. In the following example, using 0.99 as the similarity threshold and 12345 as the random seed, we end up with a graph that has 429,415 bi-directional edges. Here we're using the graph builder's support for locality-sensitive hashing (LSH) to speed up graph building. For details on the graph builder's LSH support, see the build_graph_from_config API documentation.
End of explanation
!wc -l /tmp/imdb/graph_99.tsv
Explanation: Each bi-directional edge is represented by two directed edges in the output TSV file, so the file contains 429,415 * 2 = 858,830 lines in total.
End of explanation
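To get a feel for the graph file, one could peek at a few edges; each row should hold a source ID, a target ID, and an edge weight (this inspection step is an optional addition, and the exact column layout is an assumption about the graph builder's output):
# Each line of the TSV is expected to be: source_id <tab> target_id <tab> similarity weight.
!head -3 /tmp/imdb/graph_99.tsv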
def create_example(word_vector, label, record_id):
  """Create tf.Example containing the sample's word vector, label, and ID."""
features = {
'id': _bytes_feature(str(record_id)),
'words': _int64_feature(np.asarray(word_vector)),
'label': _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(word_vectors, labels, record_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for word_vector, label in zip(word_vectors, labels):
example = create_example(word_vector, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (word vectors and labels) for training and test
# data in TFRecord format.
next_record_id = create_records(pp_train_data, pp_train_labels,
'/tmp/imdb/train_data.tfr', 0)
create_records(pp_test_data, pp_test_labels, '/tmp/imdb/test_data.tfr',
next_record_id)
Explanation: Note: The quality of the graph and, by extension, the quality of the embeddings are very important for graph regularization. While we have used Swivel embeddings in this notebook, using BERT embeddings for instance will likely capture review semantics more accurately. We encourage users to try different embeddings suited to their needs.
Sample features
We create sample features for this problem in the tf.train.Example format and persist them in the TFRecord format. Each sample includes the following three features:
id: The node ID of the sample.
words: An int64 list containing word IDs.
label: A singleton int64 identifying the target class of the review.
End of explanation
nsl.tools.pack_nbrs(
'/tmp/imdb/train_data.tfr',
'',
'/tmp/imdb/graph_99.tsv',
'/tmp/imdb/nsl_train_data.tfr',
add_undirected_edges=True,
max_nbrs=3)
Explanation: Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the augmented training data for Neural Structured Learning. The NSL framework provides a library to combine the graph and the sample features to produce the final training data for graph regularization. The resulting training data includes the original sample features as well as the features of their corresponding neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors per sample to augment the training data with graph neighbors.
End of explanation
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
Explanation: Base model
We are now ready to build a base model without graph regularization. To build this model, we can either use the embeddings that were used in building the graph, or learn new embeddings jointly with the classification task. For the purpose of this notebook, we will do the latter.
Global variables
End of explanation
class HParams(object):
  """Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 2
self.max_seq_length = 256
self.vocab_size = 10000
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 2
### model architecture
self.num_embedding_dims = 16
self.num_lstm_dims = 64
self.num_fc_units = 64
### training parameters
self.train_epochs = 10
self.batch_size = 128
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
Explanation: Hyperparameters
We will use an instance of HParams to hold the various hyperparameters and constants used for training and evaluation. We briefly describe each of them below:
num_classes: There are 2 classes -- positive and negative.
max_seq_length: This is the maximum number of words considered from each movie review in this example.
vocab_size: This is the size of the vocabulary considered for this example.
distance_type: This is the distance metric used to regularize a sample with its neighbors.
graph_regularization_multiplier: This controls the relative weight of the graph regularization term in the overall loss function.
num_neighbors: The number of neighbors used for graph regularization. This value has to be less than or equal to the max_nbrs argument used above when invoking nsl.tools.pack_nbrs.
num_fc_units: The number of units in the fully connected layer of the neural network.
train_epochs: The number of training epochs.
<code>batch_size</code>: Batch size used for training and evaluation.
eval_steps: The number of batches to process before deeming evaluation complete. If set to None, all instances in the test set are evaluated.
End of explanation
def make_dataset(file_path, training=False):
  """Creates a `tf.data.TFRecordDataset`.

  Args:
    file_path: Name of the file in the `.tfrecord` format containing
      `tf.train.Example` objects.
    training: Boolean indicating if we are in training mode.

  Returns:
    An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
    objects.
  """
def pad_sequence(sequence, max_seq_length):
    """Pads the input sequence (a `tf.SparseTensor`) to `max_seq_length`."""
pad_size = tf.maximum([0], max_seq_length - tf.shape(sequence)[0])
padded = tf.concat(
[sequence.values,
tf.fill((pad_size), tf.cast(0, sequence.dtype))],
axis=0)
# The input sequence may be larger than max_seq_length. Truncate down if
# necessary.
return tf.slice(padded, [0], [max_seq_length])
def parse_example(example_proto):
    """Extracts relevant fields from the `example_proto`.

    Args:
      example_proto: An instance of `tf.train.Example`.

    Returns:
      A pair whose first value is a dictionary containing relevant features
      and whose second value contains the ground truth labels.
    """
# The 'words' feature is a variable length word ID vector.
feature_spec = {
'words': tf.io.VarLenFeature(tf.int64),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.VarLenFeature(tf.int64)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
# Since the 'words' feature is a variable length word vector, we pad it to a
# constant maximum length based on HPARAMS.max_seq_length
features['words'] = pad_sequence(features['words'], HPARAMS.max_seq_length)
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
features[nbr_feature_key] = pad_sequence(features[nbr_feature_key],
HPARAMS.max_seq_length)
labels = features.pop('label')
return features, labels
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('/tmp/imdb/nsl_train_data.tfr', True)
test_dataset = make_dataset('/tmp/imdb/test_data.tfr')
Explanation: Prepare the data
The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in two ways:
Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make this the first layer in the network, a Dense layer, which can handle floating point vector data. This approach is memory intensive, though, requiring a matrix of size num_words * num_reviews.
Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an embedding layer capable of handling this shape as the first layer in the network.
In this tutorial, we use the second approach.
Since the movie reviews must be the same length, we use the pad_sequence function defined below to standardize the lengths.
End of explanation
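# --- Added illustration (not part of the original tutorial) ---
# The pad_sequence helper above works on tf.SparseTensors inside the input
# pipeline. For intuition only, here is the same idea on hypothetical toy
# sequences: pad short reviews with zeros and truncate long ones to a fixed length.
import tensorflow as tf
toy_reviews = [[3, 5, 11], [7, 2, 9, 4, 8, 6]]  # made-up word-id sequences
print(tf.keras.preprocessing.sequence.pad_sequences(
    toy_reviews, maxlen=5, padding='post', truncating='post', value=0))
# -> [[ 3  5 11  0  0]
#     [ 7  2  9  4  8]]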
# This function exists as an alternative to the bi-LSTM model used in this
# notebook.
def make_feed_forward_model():
Builds a simple 2 layer feed forward neural network.
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(embedding_layer)
dense_layer = tf.keras.layers.Dense(16, activation='relu')(pooling_layer)
outputs = tf.keras.layers.Dense(1)(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def make_bilstm_model():
Builds a bi-directional LSTM model.
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size,
HPARAMS.num_embedding_dims)(
inputs)
lstm_layer = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(HPARAMS.num_lstm_dims))(
embedding_layer)
dense_layer = tf.keras.layers.Dense(
HPARAMS.num_fc_units, activation='relu')(
lstm_layer)
outputs = tf.keras.layers.Dense(1)(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
# Feel free to use an architecture of your choice.
model = make_bilstm_model()
model.summary()
Explanation: Build the model
A neural network is created by stacking layers, which requires two main architectural decisions:
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1.
We use a bidirectional LSTM as the base model in this tutorial.
End of explanation
model.compile(
optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: The layers are effectively stacked sequentially to build the classifier:
The first layer is an Input layer which takes the integer-encoded vocabulary.
The next layer is an Embedding layer, which takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are (batch, sequence, embedding).
Next, a bidirectional LSTM layer returns a fixed-length output vector for each example.
This fixed-length output vector is piped through a fully connected (Dense) layer with 64 hidden units.
The last layer is densely connected with a single output node. Using the sigmoid activation function, this value is a float between 0 and 1, representing a probability or confidence level.
Hidden units
The above model has two intermediate or "hidden" layers between the input and the output, excluding the Embedding layer. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer, in other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns, that is, patterns that improve performance on the training data but not on the test data. This is called overfitting.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we use the binary_crossentropy loss function.
End of explanation
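# --- Added illustration (not part of the original tutorial) ---
# What BinaryCrossentropy(from_logits=True) computes, in a minimal sketch with
# made-up numbers: squash the raw logit with a sigmoid, then apply the log loss.
import tensorflow as tf
logits = tf.constant([[2.0], [-1.0]])   # raw outputs of a final Dense(1) layer
labels = tf.constant([[1.0], [0.0]])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)(labels, logits)
p = tf.sigmoid(logits)
manual = -tf.reduce_mean(labels * tf.math.log(p) + (1 - labels) * tf.math.log(1 - p))
print(float(bce), float(manual))  # the two values should agree up to numerical details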
validation_fraction = 0.9
validation_size = int(validation_fraction *
int(training_samples_count / HPARAMS.batch_size))
print(validation_size)
validation_dataset = train_dataset.take(validation_size)
train_dataset = train_dataset.skip(validation_size)
Explanation: Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. In this case, we create a validation set by setting apart a fraction of the original training data. (Why not use the test set now? Our goal is to develop and tune our model using only the training data, and then use the test data just once to evaluate our accuracy.)
In this tutorial, we take roughly 10% of the initial training samples (10% of 25000) as the labeled data for training and the remainder as validation data. Since the initial train/test split was 50/50 (25000 samples each), the effective train/validation/test split we now have is 5/45/50.
Note that 'train_dataset' has already been batched and shuffled.
End of explanation
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
Explanation: Train the model
Train the model in mini-batches. While training, monitor the model's loss and accuracy on the validation set.
End of explanation
results = model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(results)
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number which represents our error; lower values are better) and accuracy.
End of explanation
history_dict = history.history
history_dict.keys()
Explanation: Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training.
End of explanation
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
Explanation: There are four entries: one for each metric that was monitored during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy.
End of explanation
# Build a new base LSTM model.
base_reg_model = make_bilstm_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization, which should minimize the desired quantity on every iteration.
Graph regularization
We are now ready to try out graph regularization using the base model built above. We will use the GraphRegularization wrapper class provided by the Neural Structured Learning framework to wrap the base (bi-LSTM) model and include graph regularization. The rest of the steps for training and evaluating the graph-regularized model are the same as those for the base model.
Create a graph-regularized model
To assess the incremental benefit of graph regularization, we create a new base model instance. This is because model has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model would not be a fair comparison for model.
End of explanation
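# --- Added illustration (not part of the original tutorial) ---
# Conceptually, the GraphRegularization wrapper minimizes the supervised loss
# plus a weighted penalty on the distance between a sample and its graph
# neighbors. This is only a simplified numpy sketch of that idea with made-up
# numbers, not the NSL implementation.
import numpy as np
def toy_graph_regularized_loss(supervised_loss, sample_vec, neighbor_vecs,
                               neighbor_weights, multiplier=0.1):
  neighbor_penalty = sum(w * np.sum((sample_vec - nbr) ** 2)
                         for nbr, w in zip(neighbor_vecs, neighbor_weights))
  return supervised_loss + multiplier * neighbor_penalty
print(toy_graph_regularized_loss(0.7, np.array([0.2, 0.9]),
                                 [np.array([0.1, 1.0]), np.array([0.4, 0.7])],
                                 [0.8, 0.5]))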
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
Explanation: Train the model
End of explanation
graph_reg_results = graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(graph_reg_results)
Explanation: Evaluate the model
End of explanation
graph_reg_history_dict = graph_reg_history.history
graph_reg_history_dict.keys()
Explanation: Create a graph of accuracy and loss over time
End of explanation
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
Explanation: There are five entries in the dictionary: training loss, training accuracy, training graph loss, validation loss, and validation accuracy. We can plot them all together for comparison. Note that the graph loss is only computed during training.
End of explanation
# Accuracy values for both the Bi-LSTM model and the feed forward NN model have
# been precomputed for the following supervision ratios.
supervision_ratios = [0.3, 0.15, 0.05, 0.03, 0.02, 0.01, 0.005]
model_tags = ['Bi-LSTM model', 'Feed Forward NN model']
base_model_accs = [[84, 84, 83, 80, 65, 52, 50], [87, 86, 76, 74, 67, 52, 51]]
graph_reg_model_accs = [[84, 84, 83, 83, 65, 63, 50],
[87, 86, 80, 75, 67, 52, 50]]
plt.clf() # clear figure
fig, axes = plt.subplots(1, 2)
fig.set_size_inches((12, 5))
for ax, model_tag, base_model_acc, graph_reg_model_acc in zip(
axes, model_tags, base_model_accs, graph_reg_model_accs):
# "-r^" is for solid red line with triangle markers.
ax.plot(base_model_acc, '-r^', label='Base model')
# "-gD" is for solid green line with diamond markers.
ax.plot(graph_reg_model_acc, '-gD', label='Graph-regularized model')
ax.set_title(model_tag)
ax.set_xlabel('Supervision ratio')
ax.set_ylabel('Accuracy(%)')
ax.set_ylim((25, 100))
ax.set_xticks(range(len(supervision_ratios)))
ax.set_xticklabels(supervision_ratios)
ax.legend(loc='best')
plt.show()
Explanation: The power of semi-supervised learning
Semi-supervised learning, and more specifically graph regularization in the context of this tutorial, can be really powerful when the amount of training data is small. The lack of training data is compensated by leveraging similarity among the training samples, which is not possible in traditional supervised learning.
We define the supervision ratio as the ratio of training samples to the total number of samples, which includes training, validation, and test samples. In this notebook, we have used a supervision ratio of 0.05 (i.e., 5% of the labeled data) for training both the base model and the graph-regularized model. We illustrate the impact of the supervision ratio on model accuracy in the cell below.
End of explanation |
12,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
The numpy library
Objectives
Get to know the numpy library and its application to numerical computing.
Learn the differences in usage between numpy.matrix and numpy.array.
0.1 Instructions
The instructions for installing and using an ipython notebook can be found at the following link.
After downloading and opening this notebook, remember
Step2: Contents
Overview of Numpy and Scipy
The Numpy library
Arrays vs Matrices
Axis
Basic functions.
Input and Output
Tips
1. Overview of numpy and scipy
What is the difference between numpy and scipy?
In an ideal world, NumPy would contain nothing but the array data type and the most basic operations
Step3: Important
The Ipython notebook is interactive and supports tab completion to offer suggestions or show help (not only for numpy, but for any python code).
Try the following examples
Step4: 2. The Numpy library
2.1 Array vs Matrix
By default, the vast majority of numpy and scipy functions assume that they will be passed an object of type array.
We will look at the differences between array and matrix objects, but remember to use array whenever possible.
Matrix
A numpy matrix behaves exactly as we would expect from a matrix
Step5: 2.1 Array vs Matrix
Array
A numpy array is simply a multidimensional "container".
Pros
Step6: Challenge 1
Step7: 2.2 Indexing and Slicing
Arrays are indexed in the "traditional" way.
For a one-dimensional array
Step8: Observation
Note that when we take slices (subsections) of an array we always obtain an array that is smaller than the original.
This notation is extremely convenient, since it lets us manipulate the array without needing to know its size and write numerical formulas compactly.
For example, implementing a numerical derivative is as simple as follows.
Step9: Challenge 2
Step10: 2. The Numpy library
2.2 Basic Functions
Some basic functions that are convenient to know are the following
Step11: 2. The Numpy library
2.2 Basic Functions
Some basic functions that are convenient to know are the following
Step12: Challenge 3
Complete the following code
Step13: Challenge 4
Implement the trapezoidal integration rule
Step14: 2. The Numpy library
2.5 Inputs and Outputs
Numpy can read data into an array with the loadtxt function. There are several optional arguments, but the most important ones are
Step15: 2. The Numpy library
2.5 Inputs and Outputs
Numpy makes it easy to save data with the savetxt function
Step16: Let's check that the file was written correctly. We will switch from python to bash to use the terminal commands
Step17: Challenge 5
Read the file data/cherry.txt
Scale the matrix so that all units are in meters or cubic meters.
Save the matrix to a new file data/cherry_mks.txt, with an appropriate header and 2 decimal places of precision for the floats (but not in scientific notation).
Step18: 2. The Numpy library
2.6 Data selection
There are 2 ways to select data in an array A
Step19: 2.6 Indices
Note that indices may be repeated, so the resulting array can have more elements than the original array.
For a 2d array, two arrays must be passed: the first for the rows and the second for the columns.
Step20: Challenge 6
The power of a wind turbine, for $k$ a constant related to the geometry and the efficiency, $\rho$ the air density, $r$ the turbine radius in meters and $v$ the wind speed in meters per second, is given by | Python Code:
IPython Notebook v4.0 para python 3.0
Librerías adicionales: numpy, matplotlib
Contenido bajo licencia CC-BY 4.0. Código bajo licencia MIT.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
# Configuración para recargar módulos y librerías dinámicamente
%reload_ext autoreload
%autoreload 2
# Configuración para graficos en línea
%matplotlib inline
# Configuración de estilo
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
The numpy library
Objectives
Get to know the numpy library and its application to numerical computing.
Learn the differences in usage between numpy.matrix and numpy.array.
0.1 Instructions
The instructions for installing and using an ipython notebook can be found at the following link.
After downloading and opening this notebook, remember:
* Work through the problems sequentially.
* Save constantly with Ctrl-S to avoid surprises.
* In the code cells, replace FIX_ME with the corresponding code.
* Run each code cell using Ctrl-Enter
0.2 Licensing and Setup
Run the following cell with Ctrl-S.
End of explanation
import numpy as np
print np.version.version # Si alguna vez tienen problemas, verifiquen su version de numpy
Explanation: Contents
Overview of Numpy and Scipy
The Numpy library
Arrays vs Matrices
Axis
Basic functions.
Input and Output
Tips
1. Overview of numpy and scipy
What is the difference between numpy and scipy?
In an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera. All numerical code would reside in SciPy. However, one of NumPy's important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.
Link stackoverflow
Since python is an open-source language, there are thousands of packages available, created by individuals or communities. They may live in a repository such as github or bitbucket, or be available in the official python repository: pypi. In the beginning, when there was no official scientific computing library, several candidates proposed solutions:
* numpy: had an excellent representation of vectors, matrices and arrays, implemented in C and easily called from python
* scipy: proposed linking to existing high-performance scientific computing libraries written in C or fortran, so they could be run quickly from python.
Both projects kept growing in complexity and scope, and instead of competing they decided to split tasks and join forces to offer a scientific computing platform that could completely replace other programs.
numpy: covers everything related to the data structures (dense and sparse arrays, matrices, special constructors, reading of regular data, etc.), but not the operations themselves. For historical and compatibility reasons it has some algorithms, but it is actually more consistent to use the algorithms in scipy.
scipy: covers the numerical implementation of various scientific algorithms: linear algebra, statistics, ordinary differential equations, interpolation, integration, optimization, signal analysis, among others.
IMPORTANT REMARK:
numpy matrices and arrays must contain variables of a single data type: only integers, only floats, only complex numbers, only booleans or only strings. This uniformity of the data is what makes it possible to speed up the computations with low-level C implementations.
2. The Numpy library
We will always import the numpy library as follows:
import numpy as np
All numpy functions and modules are then within reach, only 3 characters away:
np.array([1,4,9,16])
np.linspace(0.,1.,100)
Avoid at all costs using the following:
from numpy import *
End of explanation
# Presionar tabulacción con el cursor despues de np.arr
np.arr
# Presionar Ctr-Enter para obtener la documentacion de la funcion np.array usando "?"
np.array?
# Presionar Ctr-Enter
%who
x = 10
%who
Explanation: Important
The Ipython notebook is interactive and supports tab completion to offer suggestions or show help (not only for numpy, but for any python code).
Try the following examples:
End of explanation
# Operaciones con np.matrix
A = np.matrix([[1,2],[3,4]])
B = np.matrix([[1, 1],[0,1]], dtype=float)
x = np.matrix([[1],[2]])
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "A*B =\n", A*B
print "A*x =\n", A*x
print "A*A = A^2 =\n", A**2
print "x.T*A =\n", x.T * A
Explanation: 2. The Numpy library
2.1 Array vs Matrix
By default, the vast majority of numpy and scipy functions assume that they will be passed an object of type array.
We will look at the differences between array and matrix objects, but remember to use array whenever possible.
Matrix
A numpy matrix behaves exactly as we would expect from a matrix:
Pros:
Multiplication uses the * sign, as one would expect.
It feels natural if all we are going to do is linear algebra.
Cons:
All matrices must be fully conformable to operate correctly.
Elementwise operations are harder to define/access.
They are exclusively 2D: a row vector or a column vector is still 2D.
End of explanation
# Operaciones con np.matrix
A = np.array([[1,2],[3,4]])
B = np.array([[1, 1],[0,1]], dtype=float)
x = np.array([1,2]) # No hay necesidad de definir como fila o columna!
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "AoB = (multiplicacion elementwise) \n", A*B
print "A*B = (multiplicacion matricial, v1) \n", np.dot(A,B)
print "A*B = (multiplicacion matricial, v2) \n", A.dot(B)
print "A*A = A^2 = (potencia matricial)\n", np.linalg.matrix_power(A,2)
print "AoA = (potencia elementwise)\n", A**2
print "A*x =\n", np.dot(A,x)
print "x.T*A =\n", np.dot(x,A) # No es necesario transponer.
Explanation: 2.1 Array vs Matrix
Array
A numpy array is simply a multidimensional "container".
Pros:
It is multidimensional: 1D, 2D, 3D, ...
It is consistent: all operations are element-wise unless a specific function is used.
Cons:
Matrix multiplication uses the dot() function
End of explanation
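# --- Added note (not part of the original tutorial) ---
# In Python 3.5+ with a recent NumPy, the @ operator performs matrix
# multiplication directly on arrays, which removes much of the motivation
# for np.matrix. Minimal sketch (requires Python 3):
import numpy as np
A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 1], [0, 1]], dtype=float)
print(A @ B)                 # same result as np.dot(A, B)
print(A @ np.array([1, 2]))  # matrix-vector product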
# 1: Utilizando matrix
A = np.matrix([]) # FIX ME
B = np.matrix([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
# 2: Utilizando arrays
A = np.array([]) # FIX ME
B = np.array([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
Explanation: Challenge 1: matrix vs array
Let
$$
A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1\end{pmatrix}
$$
and
$$
B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{pmatrix}
$$
Create the matrices using np.matrix and multiply them in the matrix sense. Print the result.
Create the matrices using np.array and multiply them in the matrix sense. Print the result.
End of explanation
x = np.arange(9) # "Vector" con valores del 0 al 8
print "x = ", x
print "x[:] = ", x[:]
print "x[5:] = ", x[5:]
print "x[:8] = ", x[:8]
print "x[:-1] = ", x[:-1]
print "x[1:-1] = ", x[1:-1]
print "x[1:-1:2] = ", x[1:-1:2]
A = x.reshape(3,3) # Arreglo con valores del 0 al 8, en 3 filas y 3 columnas.
print "\n"
print "A = \n", A
print "primera fila de A\n", A[0,:]
print "ultima columna de A\n", A[:,-1]
print "submatriz de A\n", A[:2,:2]
Explanation: 2.2 Indexing and Slicing
Arrays are indexed in the "traditional" way.
For a one-dimensional array: there is only one index. It is neither a row nor a column!
For a two-dimensional array: the first component selects rows, the second component selects columns. The notation therefore respects the traditional matrix convention.
For a three-dimensional array: the first component selects rows, the second component selects columns, and the third component the next dimension.
<img src="images/anatomyarray.png" alt="" height="100px" align="left"/>
Regarding element indices, they start at zero, as in C. It is also possible to use negative indices, which by convention assign -1 to the last element, -2 to the second to last element, and so on.
For example, if a = [2,3,5,7,11,13,17,19], then a[0] is the value 2 and a[1] is the value 3, while a[-1] is the value 19 and a[-2] is the value 17.
In addition, python has the "slicing notation":
* a[start:end] : items from index start up to end-1
* a[start:] : items from index start to the end of the array
* a[:end] : items from the beginning up to index end-1
* a[:] : all the items of the array (a new copy)
* a[start:end:step] : items from start up to but not including end, with step step
End of explanation
def f(x):
return 1 + x**2
x = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) # O utilizar np.linspace!
y = f(x) # Tan facil como llamar f sobre x
dydx = ( y[1:] - y[:-1] ) / ( x[1:] - x[:-1] )
x_aux = 0.5*(x[1:] + x[:-1])
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, '-s', label="f")
plt.plot(x_aux, dydx, '-s', label="df/dx")
plt.legend(loc="upper left")
plt.show()
Explanation: Observation
Note that when we take slices (subsections) of an array we always obtain an array that is smaller than the original.
This notation is extremely convenient, since it lets us manipulate the array without needing to know its size and write numerical formulas compactly.
For example, implementing a numerical derivative is as simple as follows.
End of explanation
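# --- Added illustration (not part of the original tutorial) ---
# NumPy also ships helpers for the same slicing-based differences used above:
# np.diff gives forward differences and np.gradient gives centered estimates
# evaluated on the original grid.
import numpy as np
x = np.linspace(0.0, 1.0, 11)
y = 1 + x**2
dydx_slices = (y[1:] - y[:-1]) / (x[1:] - x[:-1])  # manual version, as above
dydx_diff = np.diff(y) / np.diff(x)                # identical result
dydx_grad = np.gradient(y, x)                      # centered differences, same length as x
print(np.allclose(dydx_slices, dydx_diff))         # True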
def g(x):
return 1 + x**2 + np.sin(x)
x = np.linspace(0,1,10)
y = g(x)
d2ydx2 = 0 * x # FIX ME
x_aux = 0*d2ydx2 # FIX ME
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, label="f")
plt.plot(x_aux, d2ydx2, label="d2f/dx2")
plt.legend(loc="upper left")
plt.show()
Explanation: Challenge 2: Numerical differentiation
Implement the computation of the second derivative, which can be obtained with centered finite differences as
$$ \frac{d^2 f(x_i)}{dx^2} = \frac{1}{\Delta x^2} \Big( f(x_{i+1}) -2 f(x_{i}) + f(x_{i-1}) \Big)$$
End of explanation
# arrays 1d
A = np.ones(3)
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros(3)
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(1,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# Si queremos forzar la misma forma que A y B
C = np.eye(1,3).flatten() # o np.eye(1,3)[0,:]
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# square arrays
A = np.ones((3,3))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((3,3))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(3) # Or np.eye(3,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# fat 2d array
A = np.ones((2,5))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((2,5))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(2,5)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
Explanation: 2. The Numpy library
2.2 Basic Functions
Some basic functions that are convenient to know are the following:
* shape: Returns the dimensions of the array. It is always a tuple.
* len: Returns the number of elements of the first dimension of the array. It is always an integer.
* ones: Creates an array with the given dimensions, initialized with values 1. By default a 1D array.
* zeros: Creates an array with the given dimensions, initialized with values 0. By default a 1D array.
* eye: Creates an array with the given dimensions, initialized with 1 on the diagonal. By default a 2D array.
End of explanation
x = np.linspace(0., 1., 6)
A = x.reshape(3,2)
print "x = \n", x
print "A = \n", A
print "np.diag(x) = \n", np.diag(x)
print "np.diag(B) = \n", np.diag(A)
print ""
print "A.sum() = ", A.sum()
print "A.sum(axis=0) = ", A.sum(axis=0)
print "A.sum(axis=1) = ", A.sum(axis=1)
print ""
print "A.mean() = ", A.mean()
print "A.mean(axis=0) = ", A.mean(axis=0)
print "A.mean(axis=1) = ", A.mean(axis=1)
print ""
print "A.std() = ", A.std()
print "A.std(axis=0) = ", A.std(axis=0)
print "A.std(axis=1) = ", A.std(axis=1)
Explanation: 2. The Numpy library
2.2 Basic Functions
Some basic functions that are convenient to know are the following:
* reshape: Converts the array to a new shape. The number of elements must stay the same.
* linspace: Returns an array with linearly spaced values.
* diag(x): If x is 1D, returns a 2D array with the values on the diagonal. If x is 2D, returns the values on the diagonal.
* sum: Sums the values of the array. It can be done globally or along an axis.
* mean: Computes the average of the values of the array. It can be done globally or along an axis.
* std: Computes the standard deviation of the values of the array. It can be done globally or along an axis.
End of explanation
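# --- Added illustration (not part of the original tutorial) ---
# A small extra example to help read the axis argument: the axis you pass is
# the one that gets collapsed, and keepdims=True keeps it with length 1,
# which is convenient for broadcasting.
import numpy as np
A = np.arange(6).reshape(2, 3)
print(A.sum(axis=0))                 # one sum per column -> shape (3,)
print(A.sum(axis=1))                 # one sum per row    -> shape (2,)
print(A.sum(axis=1, keepdims=True))  # shape (2, 1)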
A = np.outer(np.arange(3),np.arange(3))
print A
# FIX ME
# FIX ME
# FIX ME
# FIX ME
# FIX ME
Explanation: Challenge 3
Complete the following code:
* You are given a square array A
* Compute an array B as the element-wise multiplication of A by itself.
* Compute an array C as the matrix multiplication of A and B.
* Print the resulting matrix C.
* Compute the sum, mean and standard deviation of the values on the diagonal of C.
* Print the values computed above.
End of explanation
def mi_funcion(x):
f = 1 + x + x**3 + x**5 + np.sin(x)
return f
N = 5
x = np.linspace(-1,1,N)
y = mi_funcion(x)
# FIX ME
I = 0 # FIX ME
# FIX ME
print "Area bajo la curva: %.3f" %I
# Ilustración gráfica
x_aux = np.linspace(x.min(),x.max(),N**2)
fig = plt.figure(figsize=(12,8))
fig.gca().fill_between(x, 0, y, alpha=0.25)
plt.plot(x_aux, mi_funcion(x_aux), 'k')
plt.plot(x, y, 'r.-')
plt.show()
Explanation: Challenge 4
Implement the trapezoidal integration rule
End of explanation
# Ejemplo de lectura de datos
data = np.loadtxt("data/cherry.txt")
print data.shape
print data
# Ejemplo de lectura de datos, saltandose 11 lineas y truncando a enteros
data_int = np.loadtxt("data/cherry.txt", skiprows=11).astype(int)
print data_int.shape
print data_int
Explanation: 2. The Numpy library
2.5 Inputs and Outputs
Numpy can read data into an array with the loadtxt function. There are several optional arguments, but the most important ones are:
* skiprows: allows skipping lines when reading.
* dtype: declares the data type of the resulting array
End of explanation
# Guardando el archivo con un header en español
encabezado = "Diametro Altura Volumen (Valores truncados a numeros enteros)"
np.savetxt("data/cherry_int.txt", data_int, fmt="%d", header=encabezado)
Explanation: 2. The Numpy library
2.5 Inputs and Outputs
Numpy makes it easy to save data with the savetxt function: we always have to provide the file name and the array to save.
There are several optional arguments, but the most important ones are:
* header: Line to write as a header for the data
* fmt: Format used to store the data (%d for integers, %.5f for floats with 5 decimals, %.3E for scientific notation with 3 decimals, etc.).
End of explanation
%%bash
cat data/cherry_int.txt
Explanation: Let's check that the file was written correctly. We will switch from python to bash to use the terminal commands:
End of explanation
# Leer datos
#FIX_ME#
# Convertir a mks
#FIX_ME#
# Guardar en nuevo archivo
#FIX_ME#
Explanation: Challenge 5
Read the file data/cherry.txt
Scale the matrix so that all units are in meters or cubic meters.
Save the matrix to a new file data/cherry_mks.txt, with an appropriate header and 2 decimal places of precision for the floats (but not in scientific notation).
End of explanation
x = np.linspace(0,42,10)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
mask_x_1 = x>10
print "mask_x_1 = ", mask_x_1
print "x[mask_x_1] = ", x[mask_x_1]
print "x[mask_x_1].shape = ", x[mask_x_1].shape
print "\n"
mask_x_2 = x > x.mean()
print "mask_x_2 = ", mask_x_2
print "x[mask_x_2] = ", x[mask_x_2]
print "x[mask_x_2].shape = ", x[mask_x_2].shape
A = np.linspace(10,20,12).reshape(3,4)
print "\n"
print "A = ", A
print "A.shape = ", A.shape
print "\n"
mask_A_1 = A>13
print "mask_A_1 = ", mask_A_1
print "A[mask_A_1] = ", A[mask_A_1]
print "A[mask_A_1].shape = ", A[mask_A_1].shape
print "\n"
mask_A_2 = A > 0.5*(A.min()+A.max())
print "mask_A_2 = ", mask_A_2
print "A[mask_A_2] = ", A[mask_A_2]
print "A[mask_A_2].shape = ", A[mask_A_2].shape
T = np.linspace(-100,100,24).reshape(2,3,4)
print "\n"
print "T = ", T
print "T.shape = ", T.shape
print "\n"
mask_T_1 = T>=0
print "mask_T_1 = ", mask_T_1
print "T[mask_T_1] = ", T[mask_T_1]
print "T[mask_T_1].shape = ", T[mask_T_1].shape
print "\n"
mask_T_2 = 1 - T + 2*T**2 < 0.1*T**3
print "mask_T_2 = ", mask_T_2
print "T[mask_T_2] = ", T[mask_T_2]
print "T[mask_T_2].shape = ", T[mask_T_2].shape
Explanation: 2. The Numpy library
2.6 Data selection
There are 2 ways to select data in an array A:
* Use data masks, which are arrays with the same dimensions as the array A but of boolean type. All the elements where the mask array is True will be selected.
* Use an array of integer values. The values of the array indicate which elements are to be kept.
2.6 Masks
Note that the returned array is always one-dimensional, since it is not possible to guarantee that the original dimensionality of the array can be preserved.
End of explanation
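# --- Added illustration (not part of the original tutorial) ---
# Boolean masks can be combined element-wise with &, | and ~ (note the
# parentheses, since & binds tighter than the comparisons), and np.where
# returns the indices where a mask is True.
import numpy as np
x = np.linspace(0, 10, 11)
mask = (x > 2) & (x < 7)
print(x[mask])            # [ 3.  4.  5.  6.]
print(np.where(mask)[0])  # indices [3 4 5 6]
print(x[~mask])           # the complement of the selection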
x = np.linspace(10,20,11)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
ind_x_1 = np.array([1,2,3,5,7])
print "ind_x_1 = ", ind_x_1
print "x[ind_x_1] = ", x[ind_x_1]
print "x[ind_x_1].shape = ", x[ind_x_1].shape
print "\n"
ind_x_2 = np.array([0,0,1,2,3,4,5,6,7,-3,-2,-1,-1])
print "ind_x_2 = ", ind_x_2
print "x[ind_x_2] = ", x[ind_x_2]
print "x[ind_x_2].shape = ", x[ind_x_2].shape
A = np.linspace(-90,90,10).reshape(2,5)
print "A = ", A
print "A.shape = ", A.shape
print "\n"
ind_row_A_1 = np.array([0,0,0,1,1])
ind_col_A_1 = np.array([0,2,4,1,3])
print "ind_row_A_1 = ", ind_row_A_1
print "ind_col_A_1 = ", ind_col_A_1
print "A[ind_row_A_1,ind_col_A_1] = ", A[ind_row_A_1,ind_col_A_1]
print "A[ind_row_A_1,ind_col_A_1].shape = ", A[ind_row_A_1,ind_col_A_1].shape
print "\n"
ind_row_A_2 = 1
ind_col_A_2 = np.array([0,1,3])
print "ind_row_A_2 = ", ind_row_A_2
print "ind_col_A_2 = ", ind_col_A_2
print "A[ind_row_A_2,ind_col_A_2] = ", A[ind_row_A_2,ind_col_A_2]
print "A[ind_row_A_2,ind_col_A_2].shape = ", A[ind_row_A_2,ind_col_A_2].shape
Explanation: 2.6 Indices
Note that indices may be repeated, so the resulting array can have more elements than the original array.
For a 2d array, two arrays must be passed: the first for the rows and the second for the columns.
End of explanation
import numpy as np
k = 0.8
rho = 1.2 #
r_m = np.array([ 25., 25., 25., 25., 25., 25., 20., 20., 20., 20., 20.])
v_kmh = np.array([10.4, 12.6, 9.7, 7.2, 12.3, 10.8, 12.9, 13.0, 8.6, 12.6, 11.2]) # En kilometros por hora
P = 0
n_activos = 0
P_mean = 0.0
P_total = 0.0
print "Existen %d aerogeneradores activos del total de %d" %(n_activos, r.shape[0])
print "La potencia promedio de los aeorgeneradores es {0:.2f} ".format(P_mean)
print "La potencia promedio de los aeorgeneradores es " + str(P_total)
Explanation: Challenge 6
The power of a wind turbine, for $k$ a constant related to the geometry and the efficiency, $\rho$ the air density, $r$ the turbine radius in meters and $v$ the wind speed in meters per second, is given by:
$$ P = \begin{cases} k \, \rho \, r^2 \, v^3, & 3 \leq v \leq 25\\ 0, & \text{otherwise}\end{cases}$$
A value of $k=0.8$ and an air density of $\rho = 1.2$ [$kg/m^3$] are typically used.
Compute the number of active wind turbines, the average power and the total power generated by the 11 turbines of the Canela 1 wind farm.
The values of the turbine radius (in meters) and the wind speed (in kilometers per hour) are given below as arrays in the numerical code.
End of explanation |
12,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The perceptron - limitations
<div>Table of contents</div>
<div id="toc"></div>
Step1: As in the previous simulation we implement a very simple network. It will only have two input units plus a bias unit, as in the figure. We will give two groups of input patterns to the network. This time the two groups are not linearly separable. We will see how the perceptron fails to categorize this kind of data.
Training
Initializing data and parameters
We create input data with pseudo-random number generation.
We will start from two sets of points, the centroid groups, and create all the patterns in each group (belonging/not belonging) by adding noise to each centroid. We take the first set from a circle of radius = 2, while the second group is composed of points near the origin. Thus the second group of patterns is surrounded by the first group.
Step2: Let's plot the input points. Red points belong to the class to be learned, while blue ones do not belong to it. You can see how the blue points form a cloud that is surrounded by a ring of red points.
Step3: Spreading of the network during training
Here starts the core part, iterating the timesteps. We also divide the training phase in epochs. Each epoch is a single presentation of the whole input pattern series. The sum of squared errors will be grouped by epochs.
Step4: Plotting the results of training
We plot the final decision boundary together with the history of the squared errors through the epochs.
As you see below, the network cannot find a line that divides the points belonging to the class from those not belonging to it. The perceptron can only define a linear boundary, so it cannot find any correct solution. The background of the "Decision boundary" plot is all gray. This is due to the fact that the network keeps exploring all the possible inclinations of the boundary. You can see in the error plot how the error curve does not converge to zero. The network cannot find a minimum in this case.
Step5: Testing
Initializing data and parameters
We now create a new dataset to test the network by generating a cloud of random points all over the space of inputs
Step6: Classifying the input patterns
We read each pattern and collect the answer of the network
Step7: Plotting the results of test
We can plot all test patterns using the output of the network to color them. Red and blue dots correspond to patterns belonging or not belonging to the class. You can see that the network divided all inputs into two groups with a linear separation. These two groups do not correspond at all to the desired division!!!
Step8: <br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
The next cell is just for styling | Python Code:
%matplotlib inline
from pylab import *
from utils import *
Explanation: The perceptron - limitations
<div>Table of contents</div>
<div id="toc"></div>
End of explanation
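# --- Added aside (not part of the original notebook) ---
# The failure shown below is a property of the linear decision boundary, not of
# the learning rule: in a hand-crafted radial feature r2 = p1**2 + p2**2 the
# outer ring and the inner cloud become separable by a simple threshold.
# Minimal sketch with made-up points near the two centroid groups used below.
import numpy as np
ring_points = np.array([[2.0, 0.0], [0.0, -2.0], [1.4, 1.4]])       # near radius 2
inner_points = np.array([[0.05, 0.0], [-0.02, 0.03], [0.0, 0.01]])  # near the origin
r2_ring = np.sum(ring_points**2, axis=1)
r2_inner = np.sum(inner_points**2, axis=1)
print(r2_ring.min())   # smallest squared radius in the ring group (~3.9)
print(r2_inner.max())  # largest squared radius in the inner group (~0.003)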
#-------------------------------------------------
# Training
# Constants
# Number of input elements
n = 2
# Learning rate
eta = 0.0001
# number of training patterns
n_patterns = 2000
# Number of repetitions of
# the pattern series
epochs = 30
# Number of timesteps
stime = n_patterns*epochs
# Variables
# we define a set of 20 angles
angles = linspace(-pi, pi,20)
# the first group of centroids is a set of points
# lying on a circle of radius=2
centroids1 = [ [2*cos(x), 2*sin(x)] for x in angles ]
# the second group of centroids is a set of points
# lying on a circle of radius=0.001
centroids2 = [ [0.001*cos(x), 0.001*sin(x)] for x in angles ]
# generate training data (function build_dataset in utils.py)
data = build_dataset(n_patterns, centroids1 = centroids1, centroids2=centroids2 )
# Each row of P is an input pattern
P = data[:,:2]
# Each element of o is the desired output
# relative to an input pattern
o = data[:,2]
# Initialize weights
w = zeros(n+1)
# Initialize the weight history storage
dw = zeros([n+1,stime])
# Initialize the error history storage
squared_errors = zeros(epochs)
Explanation: As in the previous simulation we implement a very simple network. It will only have two input units plus a bias unit, as in the figure. We will give two groups of input patterns to the network. This time the two groups are not linearly separable. We will see how the perceptron fails to categorize this kind of data.
Training
Initializing data and parameters
We create input data with pseudo-random number generation.
We will start from two sets of points, the centroid groups, and create all the patterns in each group (belonging/not belonging) by adding noise to each centroid. We take the first set from a circle of radius = 2, while the second group is composed of points near the origin. Thus the second group of patterns is surrounded by the first group.
End of explanation
# limits
upper_bound = P.max(0) + 0.2*(P.max(0)-P.min(0))
lower_bound = P.min(0) - 0.2*(P.max(0)-P.min(0))
# Create the figure
fig = figure(figsize=(4,4))
scatter(*P[(n_patterns/2):,:].T, s = 20, c = '#ff8888' )
scatter(*P[:(n_patterns/2),:].T, s = 20, c = '#8888ff' )
xlim( [lower_bound[0], upper_bound[0]] )
ylim( [lower_bound[1], upper_bound[1]] )
show()
Explanation: Let's plot the input points. Red points belong to the class to be learned, while blue ones do not belong to it. You can see how the blue points form a cloud that is surrounded by a ring of red points.
End of explanation
# Create a list of pattern indices.
# We will reshuffle it at each
# repetition of the series
pattern_indices = arange(n_patterns)
# counter of repetitions
# of the series of patterns
epoch = -1
for t in xrange(stime) :
# Reiterate the input pattern
# sequence through timesteps
# Reshuffle at the end
# of the series
if t%n_patterns == 0:
shuffle(pattern_indices)
epoch += 1
# Current pattern
k = pattern_indices[t%n_patterns]
# MAIN STEP CALCULATIONS
# Bias-plus-input vector
x = hstack([1, P[k]])
# Weighted sum - !!dot product!!
net = dot(w, x)
# Activation
y = step(net)
# Learning
w += eta*(o[k] - y)*x
# Store current weights
dw[:,t] = w
# Current error
squared_errors[epoch] += 0.5*(o[k] - y)**2
Explanation: Spreading of the network during training
Here starts the core part, iterating the timesteps. We also divide the training phase in epochs. Each epoch is a single presentation of the whole input pattern series. The sum of squared errors will be grouped by epochs.
End of explanation
# Create the figure
fig = figure(figsize=(10,4))
ax = fig.add_subplot(121)
ax.set_title('Decision boundary')
# Chose the x-axis coords of the
# two points to plot the decision
# boundary line
x1 = array([lower_bound[0],upper_bound[0]])
# Calculate the y-axis coords of the
# two points to plot the decision
# boundary line as it changes
for t in xrange(stime) :
# Show evert 10th timestep
if t%10 == 0:
if dw[2,t] != 0 :
# Evaluate x2 based on current weights
x2 = -(dw[1,t]*x1 + dw[0,t])/dw[2,t]
# Plot the changes in the boundary line during learning
ax.plot(x1,x2, c='#cccccc', linewidth = 1, zorder = 1)
# Evaluate x2 ibased on final weights
x2 = -(w[1]*x1 + w[0])/w[2]
# Plot the learned boundary line
plot(x1,x2, c= '#000000', linewidth = 2, zorder = 1)
# Plot in red points belonging to the class
scatter(*P[(n_patterns/2):,:].T, s = 50, c = '#ff8888', zorder = 2 )
# Plot in blue points not belonging to the class
scatter(*P[:(n_patterns/2),:].T, s = 50, c = '#8888ff', zorder = 2 )
# Limits and labels of the plot
xlim( [lower_bound[0], upper_bound[0]] )
ylim( [lower_bound[1], upper_bound[1]] )
xlabel("$p_1$", size = 'xx-large')
ylabel("$p_2$", size = 'xx-large')
# Plot squared errors
ax = fig.add_subplot(122)
ax.set_title('Error')
ax.plot(squared_errors)
# Labels and ticks of the plot
xlabel("epochs", size = 'xx-large')
ylabel("SSE", size = 'xx-large')
xticks(range(epochs))
show()
Explanation: Plotting the results of training
We plot the final decision boundary together with the history of the squared errors through the epochs.
As you see below, the network cannot find a line that divides the points belonging to the class from those not belonging to it. The perceptron can only define a linear boundary, so it cannot find any correct solution. The background of the "Decision boundary" plot is all gray. This is due to the fact that the network keeps exploring all the possible inclinations of the boundary. You can see in the error plot how the error curve does not converge to zero. The network cannot find a minimum in this case.
End of explanation
#-------------------------------------------------
# Test
# Number of test patterns
n_patterns = 50000
# Generating test data - we use a single repeated centroid
# so we have a single population of points expanding across
# the decision boundary line
test_centroid = lower_bound +(upper_bound-lower_bound)/2.0
# Generating test data - build_dataset function from utils.py.
# We change the standard deviation
data = build_dataset(n_patterns,
centroids1 = [ test_centroid ],
centroids2 = [ test_centroid ],
std_deviation = 2.6 )
# Each row of P is a test pattern
P = data[:,:2]
y = zeros(n_patterns)
Explanation: Testing
Initializing data and parameters
We now create a new dataset to test the network by generating a cloud of random points all over the space of inputs:
End of explanation
# iterate tests
for t in xrange(n_patterns) :
# Bias-plus-input vector
x = hstack([1, P[t]])
# Weighted sum - !!dot product!!
net = dot(w, x)
# Activation
y[t] = step(net)
Explanation: Classifying the input patterns
We read each pattern and collect the answer of the network
End of explanation
# Create the figure
fig = figure(figsize=(5,4))
title('Tests - average error = {}'.format(mean(squared_errors).round(4)))
# Show points
ax = scatter(*P.T, s = 2, c = y, edgecolors='none', zorder = 2, cmap = cm.coolwarm )
#limits
xlim( [lower_bound[0], upper_bound[0]] )
ylim( [lower_bound[1], upper_bound[1]] )
xlabel("$p_1$", size = 'xx-large')
ylabel("$p_2$", size = 'xx-large')
show()
Explanation: Plotting the results of test
We can plot all test patterns using the output of the network to color them. Red and blue dots correspond to patterns belonging or not belonging to the class. You can see that the network divided all inputs into two groups with a linear separation. These two groups do not correspond at all to the desired division!!!
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../style/ipybn.css", "r").read()
return HTML(styles)
css_styling()
Explanation: <br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
The next cell is just for styling
End of explanation |
12,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homogeneous Gas
Here is a notebook for homogeneous gas model.
Here we are talking about a homogeneous gas bulk of neutrinos with single energy. The EoM is
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E}B +\lambda L +\sqrt 2 G_F \int_0^\infty dE' ( \rho_{E'} - \bar \rho_{E'} ) ,\rho_E \right]
$$
while the EoM for antineutrinos is
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B +\lambda L +\sqrt 2 G_F \int_0^\infty dE' ( \rho_{E'} - \bar \rho_{E'} ) ,\bar\rho_E \right]
$$
Initial
Step1: Using Mathematica, I can find the 4*2 equations
Step2: I am going to substitute all density matrix elements using their corresponding network expressions.
So first of all, I need the network expression for the unknown functions.
A function is written as
$$ y_i= 1+t_i v_k f(t_i w_k+u_k) ,$$
while its derivative is
$$v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k .$$
Now I can write down the equations using these two forms.
Step3: Minimization
Here is the minimization
Step4: Functions
Find the solutions to each element.
Step5: Practice | Python Code:
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
%load_ext snakeviz
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import timeit
import pandas as pd
import plotly.plotly as py
from plotly.graph_objs import *
import plotly.tools as tls
# hbar=1.054571726*10**(-34)
hbar=1
delm2E=1
lamb=1 ## lambda for neutrinos
lambb=1 ## lambda for anti neutrinos
gF=1
nd=1 ## number density
ndb=1 ## number density
omega=1
omegab=1
## Here are some matrices to be used
elM = np.array([[1,0],[0,0]])
bM = 1/2*np.array( [ [ - 0.38729833462,0.31622776601] , [0.31622776601,0.38729833462] ] )
## sqareroot of 2
sqrt2=np.sqrt(2)
Explanation: Homogeneous Gas
Here is a notebook for homogeneous gas model.
Here we are talking about a homogeneous gas bulk of neutrinos with single energy. The EoM is
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E}B +\lambda L +\sqrt 2 G_F \int_0^\infty dE' ( \rho_{E'} - \bar \rho_{E'} ) ,\rho_E \right]
$$
while the EoM for antineutrinos is
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B +\lambda L +\sqrt 2 G_F \int_0^\infty dE' ( \rho_{E'} - \bar \rho_{E'} ) ,\bar\rho_E \right]
$$
Initial:
Homogeneous, Isotropic, Monoenergetic $\nu_e$ and $\bar\nu_e$
The equations becomes
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E} B +\lambda L +\sqrt 2 G_F ( \rho_{E} - \bar \rho_{E} ) ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B +\lambda_b L +\sqrt 2 G_F ( \rho_{E} - \bar \rho_{E} ) ,\bar\rho_E \right]
$$
Define $\omega=\frac{\delta m^2}{2E}$, $\bar\omega = \frac{\delta m^2}{-2E}$, $\mu=\sqrt{2}G_F n_\nu$
$$
i \partial_t \rho_E = \left[ \omega B +\lambda L +\mu ( \rho_{E} - \bar \rho_{E} ) ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[\bar\omega B +\bar\lambda L +\mu ( \rho_{E} - \bar \rho_{E} ) ,\bar\rho_E \right]
$$
where
$$
B = \frac{1}{2} \begin{pmatrix}
-\cos 2\theta_v & \sin 2\theta_v \
\sin 2\theta_v & \cos 2\theta_v
\end{pmatrix} =
\begin{pmatrix}
-0.38729833462 & 0.31622776601\
0.31622776601 & 0.38729833462
\end{pmatrix}
$$
$$
L = \begin{pmatrix}
1 & 0 \
0 & 0
\end{pmatrix}
$$
Initial condition
$$
\rho(t=0) = \begin{pmatrix}
1 & 0 \
0 & 0
\end{pmatrix}
$$
$$
\bar\rho(t=0) =\begin{pmatrix}
1 & 0 \
0 & 0
\end{pmatrix}
$$
define the following quantities
hbar$=\hbar$
delm2E$= \delta m^2/2E$
lamb $= \lambda$, lambb $= \bar\lambda$
gF $= G_F$
mu $=\mu$
omega $=\omega$, omegab $=\bar\omega$
Numerical
End of explanation
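## --- Added sanity check (not part of the original notebook) ---
## Both equations of motion are commutators with a Hermitian Hamiltonian, so the
## trace of each density matrix is conserved: Tr([H, rho]) = 0. A minimal check
## with arbitrary Hermitian H and rho (made-up numbers):
import numpy as np
H_test = np.array([[0.3, 0.1 + 0.2j], [0.1 - 0.2j, -0.3]])
rho_test = np.array([[0.7, 0.1j], [-0.1j, 0.3]])
comm_test = np.dot(H_test, rho_test) - np.dot(rho_test, H_test)
print(np.trace(comm_test))  # ~0 up to floating point error, so d/dt Tr(rho) = 0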
#r11prime(t)
## The matrix eqn for neutrinos. Symplify the equation to the form A.X=0. Here I am only writing down the LHS.
## Eqn for r11'
# 1/2*( r21(t)*( bM12*delm2E - 2*sqrt2*gF*rb12(t) ) + r12(t) * ( -bM21*delm2E + 2*sqrt2*gF*rb21(t) ) - 1j*r11prime(t) )
## Eqn for r12'
# 1/2*( r22(t)* ( bM12 ) )
### wait a minute I don't actually need to write down this. I can just do this part in numpy.
Explanation: Using Mathematica, I can find the 4*2 equations
End of explanation
def trigf(x):
#return 1/(1+np.exp(-x)) # It's not bad to define this function here for people could use other functions other than expit(x).
return expit(x)
## The time derivative part
### Here are the initial conditions
init = np.array( [[1,0],[0,0]] )
initb = np.array([[0,0],[0,0]])
### For neutrinos
def rho(x,ti,initialCondition): # x is the input structure arrays, ti is a time point
v11,w11,u11,v12,w12,u12,v21,w21,u21,v22,w22,u22 = np.split(x,12)[:12]
elem11= np.sum(ti * v11 * trigf( ti*w11 +u11 ) )
elem12= np.sum(ti * v12 * trigf( ti*w12 +u12 ) )
elem21= np.sum(ti * v21 * trigf( ti*w21 +u21 ) )
elem22= np.sum(ti * v22 * trigf( ti*w22 +u22 ) )
return initialCondition + np.array([[ elem11 , elem12 ],[elem21, elem22]])
def rhob(xb,ti,initialConditionb): # x is the input structure arrays, ti is a time point
vb11,wb11,ub11,vb12,wb12,ub12,vb21,wb21,ub21,vb22,wb22,ub22 = np.split(xb,12)[:12]
elem11= np.sum(ti * vb11 * trigf( ti*wb11 +ub11 ) )
elem12= np.sum(ti * vb12 * trigf( ti*wb12 +ub12 ) )
elem21= np.sum(ti * vb21 * trigf( ti*wb21 +ub21 ) )
elem22= np.sum(ti * vb22 * trigf( ti*wb22 +ub22 ) )
return initialConditionb + np.array([[ elem11 , elem12 ],[elem21, elem22]])
## Test
xtemp=np.ones(120)
rho(xtemp,1,init)
## Define Hamiltonians for both
def hamil(x,xb,ti,initialCondition,initialConditionb):
return delm2E*bM + lamb*elM + sqrt2*gF*( rho(x,ti,initialCondition) - rhob(xb,ti,initialConditionb) )
def hamilb(x,xb,ti,initialCondition,initialConditionb):
return -delm2E*bM + lambb*elM + sqrt2*gF*( rho(x,ti,initialCondition) - rhob(xb,ti,initialConditionb) )
## The commutator
def comm(x,xb,ti,initialCondition,initialConditionb):
return np.dot(hamil(x,xb,ti,initialCondition,initialConditionb), rho(x,ti,initialCondition) ) - np.dot(rho(x,ti,initialCondition), hamil(x,xb,ti,initialCondition,initialConditionb) )
def commb(x,xb,ti,initialCondition,initialConditionb):
return np.dot(hamilb(x,xb,ti,initialCondition,initialConditionb), rhob(xb,ti,initialConditionb) ) - np.dot(rhob(xb,ti,initialConditionb), hamilb(x,xb,ti,initialCondition,initialConditionb) )
## Test
print "neutrino\n",comm(xtemp,xtemp,1,init,initb)
print "antineutrino\n",commb(xtemp,xtemp,0.5,init,initb)
## The COST of the eqn set
def costTi(x,xb,ti,initialCondition,initialConditionb):
v11,w11,u11,v12,w12,u12,v21,w21,u21,v22,w22,u22 = np.split(x,12)[:12]
vb11,wb11,ub11,vb12,wb12,ub12,vb21,wb21,ub21,vb22,wb22,ub22 = np.split(xb,12)[:12]
fvec11 = np.array(trigf(ti*w11 + u11) ) # This is a vector!!!
fvec12 = np.array(trigf(ti*w12 + u12) )
fvec21 = np.array(trigf(ti*w21 + u21) )
fvec22 = np.array(trigf(ti*w22 + u22) )
fvecb11 = np.array(trigf(ti*wb11 + ub11) ) # This is a vector!!!
fvecb12 = np.array(trigf(ti*wb12 + ub12) )
fvecb21 = np.array(trigf(ti*wb21 + ub21) )
fvecb22 = np.array(trigf(ti*wb22 + ub22) )
costi11= ( np.sum (v11*fvec11 + ti * v11* fvec11 * ( 1 - fvec11 ) * w11 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[0,0] ) )
costi12= ( np.sum (v12*fvec12 + ti * v12* fvec12 * ( 1 - fvec12 ) * w12 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[0,1] ) )
costi21= ( np.sum (v21*fvec21 + ti * v21* fvec21 * ( 1 - fvec21 ) * w21 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[1,0] ) )
costi22= ( np.sum (v22*fvec22 + ti * v22* fvec22 * ( 1 - fvec22 ) * w22 ) + 1j* ( comm(x,xb,ti,initialCondition,initialConditionb)[1,1] ) )
costbi11= ( np.sum (vb11*fvecb11 + ti * vb11* fvecb11 * ( 1 - fvecb11 ) * wb11 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[0,0] ) )
costbi12= ( np.sum (vb12*fvecb12 + ti * vb12* fvecb12 * ( 1 - fvecb12 ) * wb12 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[0,1] ) )
costbi21= ( np.sum (vb21*fvecb21 + ti * vb21* fvecb21 * ( 1 - fvecb21 ) * wb21 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[1,0] ) )
costbi22= ( np.sum (vb22*fvecb22 + ti * vb22* fvecb22 * ( 1 - fvecb22 ) * wb22 ) + 1j* ( commb(x,xb,ti,initialCondition,initialConditionb)[1,1] ) )
return (np.real(costi11))**2 + (np.real(costi12))**2+ (np.real(costi21))**2 + (np.real(costi22))**2 + (np.real(costbi11))**2 + (np.real(costbi12))**2 +(np.real(costbi21))**2 + (np.real(costbi22))**2 + (np.imag(costi11))**2 + (np.imag(costi12))**2+ (np.imag(costi21))**2 + (np.imag(costi22))**2 + (np.imag(costbi11))**2 + (np.imag(costbi12))**2 +(np.imag(costbi21))**2 + (np.imag(costbi22))**2
costTi(xtemp,xtemp,0,init,initb)
## Calculate the total cost
def cost(xtot,t,initialCondition,initialConditionb):
x,xb = np.split(xtot,2)[:2]
t = np.array(t)
costTotal = np.sum( costTList(x,xb,t,initialCondition,initialConditionb) )
return costTotal
def costTList(x,xb,t,initialCondition,initialConditionb): ## This is the function WITHOUT the square!!!
t = np.array(t)
costList = np.asarray([])
for temp in t:
tempElement = costTi(x,xb,temp,initialCondition,initialConditionb)
costList = np.append(costList, tempElement)
return np.array(costList)
ttemp = np.linspace(0,10)
print ttemp
ttemp = np.linspace(0,10)
print costTList(xtemp,xtemp,ttemp,init,initb)
print cost(xtemp,ttemp,init,initb)
Explanation: I am going to substitute all density matrix elements using their corresponding network expressions.
So first of all, I need the network expression for the unknown functions.
A function is written as
$$ y_i= 1+t_i v_k f(t_i w_k+u_k) ,$$
while its derivative is
$$v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k .$$
Now I can write down the equations using these two forms.
End of explanation
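## --- Added self-check (not part of the original notebook) ---
## The hand-derived time derivative of the trial function
## y(t) = y0 + t * sum_k v_k * f(t*w_k + u_k) can be compared against a
## numerical difference quotient before trusting it inside the cost function.
## Minimal sketch with random parameters (10 hidden units).
import numpy as np
from scipy.special import expit
v_chk, w_chk, u_chk = np.random.rand(3, 10) - 0.5
y_chk = lambda t: 1.0 + t * np.sum(v_chk * expit(t * w_chk + u_chk))
dydt_chk = lambda t: np.sum(v_chk * expit(t * w_chk + u_chk)
                            + t * v_chk * expit(t * w_chk + u_chk)
                            * (1 - expit(t * w_chk + u_chk)) * w_chk)
t0, eps = 0.7, 1e-6
numeric = (y_chk(t0 + eps) - y_chk(t0 - eps)) / (2 * eps)
print(abs(dydt_chk(t0) - numeric))  # close to zero: the analytic derivative is consistent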
tlin = np.linspace(0,0.5,3)
initGuess = np.ones(120)
# initGuess = np.random.rand(1,30)+2
costF = lambda x: cost(x,tlin,init,initb)
cost(initGuess,tlin,init,initb)
## %%snakeviz
# startCG = timeit.default_timer()
#costFResultCG = minimize(costF,initGuess,method="CG")
#stopCG = timeit.default_timer()
#print stopCG - startCG
#print costFResultCG
%%snakeviz
startSLSQP = timeit.default_timer()
costFResultSLSQP = minimize(costF,initGuess,method="SLSQP")
stopSLSQP = timeit.default_timer()
print stopSLSQP - startSLSQP
print costFResultSLSQP
costFResultSLSQP.get('x')
np.savetxt('./assets/homogen/optimize_ResultSLSQPT2120.txt', costFResultSLSQP.get('x'), delimiter = ',')
Explanation: Minimization
Here is the minimization
End of explanation
# costFResultSLSQPx = np.genfromtxt('./assets/homogen/optimize_ResultSLSQP.txt', delimiter = ',')
## The first element of neutrino density matrix
xresult = np.split(costFResultSLSQP.get('x'),2)[0]
xresultb = np.split(costFResultSLSQP.get('x'),2)[1]
#xresult = np.split(costFResultSLSQPx,2)[0]
#xresultb = np.split(costFResultSLSQPx,2)[1]
## print xresult11
plttlin=np.linspace(0,5,100)
pltdata11 = np.array([])
pltdata22 = np.array([])
for i in plttlin:
pltdata11 = np.append(pltdata11 ,rho(xresult,i,init)[0,0] )
print pltdata11
for i in plttlin:
pltdata22 = np.append(pltdata22 ,rho(xresult,i,init)[1,1] )
print pltdata22
print "----------------------------------------"
pltdatab11 = np.array([])
pltdatab22 = np.array([])
for i in plttlin:
pltdatab11 = np.append(pltdatab11 ,rho(xresultb,i,init)[0,0] )
print pltdatab11
for i in plttlin:
pltdatab22 = np.append(pltdatab22 ,rho(xresultb,i,init)[1,1] )
print pltdatab22
#np.savetxt('./assets/homogen/optimize_pltdatar11.txt', pltdata11, delimiter = ',')
#np.savetxt('./assets/homogen/optimize_pltdatar22.txt', pltdata22, delimiter = ',')
plt.figure(figsize=(20,12.36))
plt.ylabel('Time')
plt.xlabel('rho11')
plt.plot(plttlin,pltdata11,"b4-",label="rho11")
py.iplot_mpl(plt.gcf(),filename="HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(20,12.36))
plt.ylabel('Time')
plt.xlabel('rho22')
plt.plot(plttlin,pltdata22,"r4-",label="rho22")
py.iplot_mpl(plt.gcf(),filename="HG-rho22")
plt.figure(figsize=(20,12.36))
plt.ylabel('Time')
plt.xlabel('rhob11')
plt.plot(plttlin,pltdatab11,"b*-",label="rhob11")
py.iplot_mpl(plt.gcf(),filename="HG-rhob11")
plt.figure(figsize=(20,12.36))
plt.ylabel('Time')
plt.xlabel('rhob22')
plt.plot(plttlin,pltdatab11,"b*-",label="rhob22")
py.iplot_mpl(plt.gcf(),filename="HG-rhob22")
MMA_optmize_pltdata = np.genfromtxt('./assets/homogen/MMA_optmize_pltdata.txt', delimiter = ',')
plt.figure(figsize=(20,12.36))
plt.ylabel('MMArho11')
plt.xlabel('Time')
plt.plot(np.linspace(0,5,501),MMA_optmize_pltdata,"r-",label="MMArho11")
plt.plot(plttlin,pltdata11,"b4-",label="rho11")
py.iplot_mpl(plt.gcf(),filename="MMA-rho11")
Explanation: Functions
Find the solutions for each element.
End of explanation
xtemp1 = np.arange(4)
xtemp1.shape = (2,2)
print xtemp1
xtemp1[0,1]
np.dot(xtemp1,xtemp1)
xtemp1[0,1]
Explanation: Practice
End of explanation |
12,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
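For orientation, every property cell below follows the same pattern: a pre-set DOC.set_id call paired with one or more DOC.set_value calls. A hypothetical completed cell (the values are placeholders, not actual NOAA-GFDL metadata) would look like:
# Illustrative only -- the identifiers come from the cells below,
# the values shown are placeholders.
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
DOC.set_value("Name of the ocean BGC code goes here")

DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")  # must be one of the listed valid choices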
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
12,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{myHDL to PYNQ Fabric Only Exsample}
\author{Steven K Armour}
\maketitle
Refrances
Libraries and Helper functions
Step2: Project 1
Step3: myHDL Testing
Step4: Verilog Code
Step5: \begin{figure}
\centerline{\includegraphics[width=10cm]{S0L0_RTL.png}}
\caption{\label{fig
Step7: Board Verification
Project 2
Step8: myHDL Testing
Step9: Verilog Code
Step10: \begin{figure}
\centerline{\includegraphics[width=10cm]{S2L4_RTL.png}}
\caption{\label{fig
Step11: myHDL Testing
Step12: Need to figure out how to write/run these long simulations better in python
Verilog Code
Step13: Verilog Testbench
PYNQ-Z1 Constraints File
Below is what is found in file constrs_S0L0.xdc
Notice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0
Board Verification
Project 4
Step14: myHDL Testing
Step15: Verilog Code
Step16: PYNQ-Z1 Constraints File
Below is what is found in file constrs_S0L0.xdc
Notice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0
Verilog Testbench
Step17: Board Verification
Project 5
Step18: pwm myHDL Testing | Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
Explanation: \title{myHDL to PYNQ Fabric Only Example}
\author{Steven K Armour}
\maketitle
References
Libraries and Helper functions
End of explanation
@block
def S0L0(sw, clk, led):
FPGA Hello world of one switch controlling one LED based on
https://timetoexplore.net/blog/arty-fpga-verilog-01
Target:
ZYNQ 7000 Board (Arty, PYNQ-Z1, PYNQ-Z2) with at least 2
switches and 4 LEDs
Input:
sw(2bitVec):switch input
clk(bool): clock input
Output:
led(4bitVec): led output
@always(clk.posedge)
def logic():
if sw[0]==0:
led.next[0]=True
else:
led.next[0]=False
return instances()
Explanation: Project 1: 1 Switch 1 LED
https://timetoexplore.net/blog/arty-fpga-verilog-01
Constraints File
myHDL Code
End of explanation
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
sw=Signal(intbv(0)[2:]); Peeker(sw, 'sw')
led=Signal(intbv(0)[4:]); Peeker(led, 'led')
np.random.seed(18)
swTVals=[int(i) for i in np.random.randint(0,2, 10)]
DUT=S0L0(sw, clk, led)
def S0L0_TB():
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(10):
sw.next[0]=swTVals[i]
yield clk.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, S0L0_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('S0L0');
Explanation: Verilog Code
End of explanation
swTVal=intbv(int(''.join([str(i) for i in swTVals]), 2))[len(swTVals):]
print(f'swTest: {swTVals}, {swTVal}, {[int(i) for i in swTVal]}')
@block
def S0L0_TBV():
clk=Signal(bool(0))
sw=Signal(intbv(0)[2:])
led=Signal(intbv(0)[4:])
#test stimuli
swTVals=Signal(swTVal)
@always_comb
def print_data():
print(sw, clk, led)
DUT=S0L0(sw, clk, led)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(10):
sw.next[0]=swTVals[i]
yield clk.posedge
raise StopSimulation()
return instances()
TB=S0L0_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('S0L0_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{S0L0_RTL.png}}
\caption{\label{fig:S0L0RTL} S0L0 RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{S0L0_SYN.png}}
\caption{\label{fig:S0L0SYN} S0L0 Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{S0L0_SYN.png}}
\caption{\label{fig:S0L0IMP} S0L0 Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}
PYNQ-Z1 Constraints File
Below is what is found in file constrs_S0L0.xdc
Notice that the original port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0
Verilog Testbench
End of explanation
@block
def S2L4(sw, clk, led):
FPGA Hello world of two switchs controlling four LED based on
https://timetoexplore.net/blog/arty-fpga-verilog-01
Target:
ZYNQ 7000 Board (Arty, PYNQ-Z1, PYNQ-Z2) with at least 2
switches and 4 LEDs
Input:
sw(2bitVec):switch input
clk(bool): clock input
Output:
led(4bitVec): led output
@always(clk.posedge)
def logic():
if sw[0]==0:
led.next[2:]=0
else:
led.next[2:]=3
if sw[1]==0:
led.next[4:2]=0
else:
led.next[4:2]=3
return instances()
Explanation: Board Verification
Project 2: 2 Switches 4 LEDs
https://timetoexplore.net/blog/arty-fpga-verilog-01
myHDL Code
End of explanation
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
sw=Signal(intbv(0)[2:]); Peeker(sw, 'sw')
led=Signal(intbv(0)[4:]); Peeker(led, 'led')
np.random.seed(18)
swTVals=[int(i) for i in np.random.randint(0,4, 10)]
DUT=S2L4(sw, clk, led)
def S2L4_TB():
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(10):
sw.next=swTVals[i]
yield clk.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, S2L4_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('S2L4');
Explanation: Verilog Code
End of explanation
@block
def countLED(clk, led):
counter=Signal(modbv(0)[33:])
@always(clk.posedge)
def logic():
counter.next=counter+1
led.next[0]=counter[26]
led.next[1]=counter[24]
# led is only 4 bits wide, so stay within bit indices 0-3
led.next[2]=counter[22]
led.next[3]=counter[20]
return instances()
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{S2L4_RTL.png}}
\caption{\label{fig:S2L4RTL} S2L4 RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{S2L4_SYN.png}}
\caption{\label{fig:S2L4SYN} S2L4 Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{S2L4_IMP.png}}
\caption{\label{fig:S2L4IMP} S2L4 Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}
Verilog Testbench (ToDo)
will write later when testbench conversion is improved
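In the meantime, a myHDL testbench for S2L4 patterned on S0L0_TBV above could be sketched as follows (an assumption about the eventual testbench, not the author's final version; the convert call is left commented until the conversion issue is sorted out):
@block
def S2L4_TBV():
    clk = Signal(bool(0))
    sw = Signal(intbv(0)[2:])
    led = Signal(intbv(0)[4:])

    DUT = S2L4(sw, clk, led)

    @instance
    def clk_signal():
        while True:
            clk.next = not clk
            yield delay(1)

    @instance
    def stimules():
        # walk through all four switch settings
        for val in (0, 1, 2, 3):
            sw.next = val
            yield clk.posedge
        raise StopSimulation()

    return instances()

# TB = S2L4_TBV()
# TB.convert(hdl="Verilog", initial_values=True)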
PYNQ-Z1 Constraints File
using same one as in 1 Switch 1 LED: constrs_S0L0.xdc
Board Verification
Project 3: Countdown
myHDL Code
End of explanation
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
led=Signal(intbv(0)[4:]); Peeker(led, 'led')
DUT=countLED(clk, led)
'''
def countLED_TB():
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==2**33:
raise StopSimulation()
if 1%100==0:
print(i)
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, countLED_TB(), *Peeker.instances()).run()
'''
;
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('countLED');
Explanation: Need to figure out how to write/run these long simulations better in python
Verilog Code
End of explanation
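One workable approach (a sketch, not part of the original notebook) is to bound the run length explicitly: Simulation.run() accepts a duration, so the full 2**33-count rollover never has to be simulated just to check that the counter advances.
def countLED_boundedTB():
    @always(delay(1))
    def ClkGen():
        clk.next = not clk
    return instances()

# run for a fixed 2000 time steps instead of waiting on a StopSimulation
Simulation(DUT, countLED_boundedTB(), *Peeker.instances()).run(2000)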
@block
def BDCLed(clk, led):
counter=Signal(modbv(0)[8:])
duty_led=Signal(modbv(8)[8:])
@always(clk.posedge)
def logic():
counter.next=counter+1
if counter<duty_led:
led.next=15
else:
led.next=0
return instances()
Explanation: Verilog Testbench
PYNQ-Z1 Constraints File
Below is what is found in file constrs_S0L0.xdc
Notice that the original port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0
Board Verification
Project 4: Basic Duty Cycle
https://timetoexplore.net/blog/arty-fpga-verilog-02
myHDL Code
End of explanation
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
led=Signal(intbv(0)[4:]); Peeker(led, 'led')
DUT=BDCLed(clk, led)
def BDCLed_TB():
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==1000:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, BDCLed_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
BDCLedData=Peeker.to_dataframe()
BDCLedData=BDCLedData[BDCLedData['clk']==1]
BDCLedData.plot(y='led');
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('BDCLed');
Explanation: Verilog Code
End of explanation
@block
def BDCLed_TBV():
clk=Signal(bool(0))
led=Signal(intbv(0)[4:])
@always_comb
def print_data():
print(sw, clk, led)
DUT=BDCLed(clk, led)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==1000:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=BDCLed_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('BDCLed_TBV');
Explanation: PYNQ-Z1 Constraints File
Below is what is found in file constrs_S0L0.xdc
Notice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0
Verilog Testbench
End of explanation
@block
def pwm(clk, dutyCount, o_state):
counter=Signal(modbv(0)[8:])
@always(clk.posedge)
def logic():
counter.next=counter+1
o_state.next=counter<dutyCount
return instances()
Explanation: Board Verification
Project 5: Mid level PWM LED
pwm myHDL Code
End of explanation
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
dutyCount=Signal(intbv(4)[8:]); Peeker(dutyCount, 'dutyCount')
o_state=Signal(bool(0)); Peeker(o_state, 'o_state')
DUT=pwm(clk, dutyCount, o_state)
def pwm_TB():
pass
Explanation: pwm myHDL Testing
End of explanation |
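pwm_TB is left as a stub above; a minimal sweep of a few duty-cycle settings, following the same conventions as the earlier testbenches (a sketch, not the author's final test), might look like:
def pwm_TB():
    @always(delay(1))
    def ClkGen():
        clk.next = not clk

    @instance
    def stimules():
        # hold each duty setting for one full 256-count period
        for duty in (0, 64, 128, 255):
            dutyCount.next = duty
            for _ in range(256):
                yield clk.posedge
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, pwm_TB(), *Peeker.instances()).run()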
12,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Plots
Step1: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="http
Step2: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}i)=\hat{\sigma}^2_i(1-h{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1 \;\;}\sum_{j}^{n}\;\;\;\forall \;\;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
Step3: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variabe, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homooskedasticity and <br />
linearity.
Step4: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
Step5: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
Step6: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
Step7: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
Step8: Regression Plots
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
Step9: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
Step10: Statewide Crime 2009 Dataset
Compare the following to http
Step11: Partial Regression Plots
Step12: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
Step13: Influence Plot
Step14: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this examples.
Step15: There isn't yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888) | Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lzip
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
Explanation: Regression Plots
End of explanation
prestige = sm.datasets.get_rdataset("Duncan", "car", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="http://vincentarelbundock.github.com/Rdatasets/">Rdatasets package</a>.
End of explanation
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.influence_plot(prestige_model, ax=ax, criterion="cooks")
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j=1,\; j \neq i}^{n}\hat{\epsilon}_j^2$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
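The same quantities can be pulled out numerically through the results object's get_influence() method, which returns a statsmodels OLSInfluence instance (shown here as a quick cross-check of the plot):
# Numeric counterparts of the influence plot: leverage (hat matrix diagonal),
# externally studentized residuals and Cook's distance for each observation.
infl = prestige_model.get_influence()
influence_frame = pd.DataFrame({
    "leverage": infl.hat_matrix_diag,
    "student_resid": infl.resid_studentized_external,
    "cooks_d": infl.cooks_distance[0],
}, index=prestige.index)
influence_frame.sort_values("cooks_d", ascending=False).head()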
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige, ax=ax)
fig, ax = plt.subplots(figsize=(12,14))
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige, ax=ax)
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
End of explanation
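The construction can also be done by hand with two auxiliary OLS fits, which makes the definition above concrete (plot_partregress performs the equivalent steps internally):
# Partial regression of prestige on income, controlling for education:
# residualize both the response and the regressor on the other regressors.
y_resid = ols("prestige ~ education", data=prestige).fit().resid
x_resid = ols("income ~ education", data=prestige).fit().resid

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(x_resid, y_resid)
ax.set_xlabel("income residuals (given education)")
ax.set_ylabel("prestige residuals (given education)")

# the slope of this cloud equals the income coefficient of the full model
print(sm.OLS(y_resid, x_resid).fit().params)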
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(prestige_model, fig=fig)
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_ccpr(prestige_model, "education", ax=ax)
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
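For reference, the partial residuals behind this plot can be computed directly from the fitted model (plot_ccpr does the equivalent internally):
# CCPR ingredients for 'education': partial residuals and the component line.
b_edu = prestige_model.params["education"]
partial_resid = prestige_model.resid + b_edu * prestige["education"]

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(prestige["education"], partial_resid, label="residuals + B*education")
ax.plot(prestige["education"], b_edu * prestige["education"], "r-", label="component B*education")
ax.set_xlabel("education")
ax.legend()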
fig = plt.figure(figsize=(12, 8))
fig = sm.graphics.plot_ccpr_grid(prestige_model, fig=fig)
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(prestige_model, "education", fig=fig)
Explanation: Regression Plots
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_fit(prestige_model, "education", ax=ax)
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(crime_model, fig=fig)
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], ax=ax, data=dta)
Explanation: Partial Regression Plots
End of explanation
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.plot_leverage_resid2(crime_model, ax=ax)
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.influence_plot(crime_model, ax=ax)
Explanation: Influence Plot
End of explanation
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
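# leverage: diagonal of the weighted hat matrix for the observations kept by the robust fit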
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
Explanation: There isn't yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation |
12,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 07 - Non linear Elliptic problem
Keywords
Step1: 3. Affine Decomposition
For this problem the affine decomposition is straightforward
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the NonlinearElliptic class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: Tutorial 07 - Non linear Elliptic problem
Keywords: exact parametrized functions, POD-Galerkin
1. Introduction
In this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\partial\Omega$. The source term is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = 100\sin(2\pi x_0)\cos(2\pi x_1) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
This problem is characterized by two parameters. The first parameter $\mu_0$ controls the strength of the sink term and the second parameter $\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:
$$
\mu_0,\mu_1\in[0.01,10.0]
$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.01,10]^2.
$$
In order to be able to compare the interpolation methods (EIM and DEIM) used to solve this problem, we propose to use an exact solution of the problem.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the solution in the domain $\Omega$.
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})$ such that</center>
$$ -\nabla^2u(\boldsymbol{\mu})+\frac{\mu_0}{\mu_1}\big(\exp(\mu_1u(\boldsymbol{\mu}))-1\big)=g(\boldsymbol{x}; \boldsymbol{\mu})$$
<br>
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)+c\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{v\in H^1(\Omega) : v|_{\partial\Omega}=0\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u, v;\boldsymbol{\mu})=\int_{\Omega} \nabla u\cdot \nabla v \ d\boldsymbol{x},$$
the parametrized bilinear form $c(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v;\boldsymbol{\mu})=\mu_0\int_{\Omega} \frac{1}{\mu_1}\big(\exp(\mu_1u) - 1\big)v \ d\boldsymbol{x},$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v; \boldsymbol{\mu})= \int_{\Omega}g(\boldsymbol{x}; \boldsymbol{\mu})v \ d\boldsymbol{x}.$$
The output of interest $s(\boldsymbol{\mu})$ is given by
$$s(\boldsymbol{\mu}) = \int_{\Omega} v \ d\boldsymbol{x}$$
and is computed for each $\boldsymbol{\mu}$.
End of explanation
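Before moving to the implementation, a quick standalone sanity check of the source term $g$ as written above, using plain NumPy (illustration only, independent of the FEniCS code below):
import numpy as np
x0, x1 = np.meshgrid(np.linspace(0., 1., 101), np.linspace(0., 1., 101))
g = 100. * np.sin(2. * np.pi * x0) * np.cos(2. * np.pi * x1)
print(g.min(), g.max())  # approximately -100 and 100 on the unit square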
@ExactParametrizedFunctions()
class NonlinearElliptic(NonlinearEllipticProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearEllipticProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
self.u = self._solution
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Store the forcing term expression
self.f = Expression("sin(2*pi*x[0])*sin(2*pi*x[1])", element=self.V.ufl_element())
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NonlinearEllipticExact"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term == "c":
theta_c0 = mu[0]
return (theta_c0,)
elif term == "f":
theta_f0 = 100.
return (theta_f0,)
elif term == "s":
theta_s0 = 1.0
return (theta_s0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
du = self.du
a0 = inner(grad(du), grad(v)) * dx
return (a0,)
elif term == "c":
u = self.u
mu = self.mu
c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx
return (c0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "s":
s0 = v * dx
return (s0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]
return (bc0,)
elif term == "inner_product":
du = self.du
x0 = inner(grad(du), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearEllipticProblem)
def CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNonlinearElliptic
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward:
$$a(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\nabla u \cdot \nabla v \ d\boldsymbol{x}}_{a_0(u,v)},$$
$$c(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{c}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\frac{1}{\mu_1}\big(\exp(\mu_1u) - 1\big)v \ d\boldsymbol{x}}_{c_0(u,v)},$$
$$f(v; \boldsymbol{\mu}) = \underbrace{100}_{\Theta^{f}_0(\boldsymbol{\mu})} \underbrace{\int_{\Omega}\sin(2\pi x_0)\cos(2\pi x_1)v \ d\boldsymbol{x}}_{f_0(v)}.$$
We will implement the numerical discretization of the problem in the class
class NonlinearElliptic(NonlinearEllipticProblem):
by specifying the coefficients $\Theta^{a}_*(\boldsymbol{\mu})$, $\Theta^{c}_*(\boldsymbol{\mu})$ and $\Theta^{f}_*(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a_*(u, v)$, $c_*(u, v)$ and linear forms $f_*(v)$ in
def assemble_operator(self, term):
End of explanation
mesh = Mesh("data/square.xml")
subdomains = MeshFunction("size_t", mesh, "data/square_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/square_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.01, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the NonlinearElliptic class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20)
reduction_method.set_tolerance(1e-8)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
reduction_method.initialize_training_set(50)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (0.3, 9.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(50)
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
12,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Oriented Programming
What is an Object?
First some semantics
Step1: Note the reference to object, this means that our new class inherits from object. We won't be going into too much detail about inheritance, but for now you should always inherit from object when defining a class.
Once a class is defined you can create an instance of that class, which is an object. In Python we do this by calling the class name as if it were a function
Step2: A class can store some data (after all, an empty class isn't very interesting!)
Step3: We can access variables stored in a class by writing the name of the instance followed by a dot and then the name of the variable
Step4: Classes can also contain functions. Functions attached to classes are called methods
Step5: The first argument to every method automatically refers to the object we're calling the method on, by convention we call that argument self.
Step6: Notice we don't have to pass the self argument, Python's object system does this for you.
Some methods are called special methods. Their names start and end with a double underscore. A particularly useful special method is __init__, which initializes an object.
Step7: The __init__ method is called when we create an instance of a class. Now when we call the class name we can pass the arguments required by __init__
Step8: Methods on an object have acces to the variables defined on the object | Python Code:
class A(object):
pass
Explanation: Object Oriented Programming
What is an Object?
First some semantics:
- An object is essentially a container which holds some data, and crucially some associated methods for working with that data.
- We define objects, and their behaviours, using something called a class.
- We create objects by instantiating classes, so, objects are instances of classes.
Note, these are very similar to structures, with associated functions attached.
Why do we need objects?
This is all very nice, but why bother with the overhead and confusion of objects and classes? People have been working with procedural programs for decades and they seem to work!
A few core ideas:
Modularity
Separation of concerns
Abstraction over complex mechanisms
We've used a lot of objects already!
Most of the code we've been using already has made heavy use of object-oriented programming:
NumPy arrays are objects (with attributes like shape and methods like mean())
Iris cubes are objects
CIS datasets are objects
Matplotlib axes/figures/lines etc. are all objects
Object-Oriented Programming in Python
Don't panic!
Objects can seem quite complex at first and it is possible to go into a lot of depth worrying about things like inheritance and 'design'.
In many languages we're forced into using classes and objects for everything (e.g. Java and C#), but some languages don't support objects at all (e.g. R and Fortran 77).
In python we have (in my opinion) a nice half-way house, we have a full OO implementation when we need it (including multiple inheritance, abstract classes etc), but we can use the procedural or functional styles when it's more desirable to do so.
Defining a class in Python is easy:
End of explanation
a_object = A()
print(type(a_object))
Explanation: Note the reference to object, this means that our new class inherits from object. We won't be going into too much detail about inheritance, but for now you should always inherit from object when defining a class.
Once a class is defined you can create an instance of that class, which is an object. In Python we do this by calling the class name as if it were a function:
End of explanation
class B(object):
value = 1
Explanation: A class can store some data (after all, an empty class isn't very interesting!):
End of explanation
b_object = B()
print(b_object.value)
Explanation: We can access variables stored in a class by writing the name of the instance followed by a dot and then the name of the variable:
End of explanation
class B(object):
value = 1
def show_value(self, another_arg):
print('self.value is {}'.format(self.value))
Explanation: Classes can also contain functions. Functions attached to classes are called methods:
End of explanation
b1 = B()
b1.show_value(12)
b1.value = 999
b1.show_value(132)
b2 = B()
Explanation: The first argument to every method automatically refers to the object we're calling the method on, by convention we call that argument self.
End of explanation
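To make the role of self explicit, the two calls below are equivalent; the bound-method form simply fills in the instance for you (b3 is just an illustrative name):
b3 = B()
B.show_value(b3, 0)   # explicit: pass the instance as the self argument
b3.show_value(0)      # implicit: Python passes b3 as self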
class C(object):
def __init__(self, value):
self.var = value
Explanation: Notice we don't have to pass the self argument, Python's object system does this for you.
Some methods are called special methods. Their names start and end with a double underscore. A particularly useful special method is __init__, which initializes an object.
End of explanation
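As an illustrative aside (the class name D is just an example, not part of the original material), another common special method is __repr__, which controls how an instance is displayed:
class D(object):
    def __init__(self, value):
        self.var = value
    def __repr__(self):
        return 'D(var={!r})'.format(self.var)

print(D('hello'))   # prints D(var='hello')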
c1 = C("Python!")
c2 = C("Hello")
print(c1.var)
print(c2.var)
Explanation: The __init__ method is called when we create an instance of a class. Now when we call the class name we can pass the arguments required by __init__:
End of explanation
class Counter(object):
def __init__(self, start=0):
self.value = start
def increment(self):
self.value += 1
counter1 = Counter()
print(counter1.value)
counter1.increment()
print(counter1.value)
counter2 = Counter(start=10)
counter2.increment()
counter2.increment()
print(counter2.value)
Explanation: Methods on an object have access to the variables defined on the object:
End of explanation |
12,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Reference Guide to NLTK
In progress...
<span style="float
Step1: Ex.3. Similar tokens
Step2: Ex.2. Common Context
Step3: Ex.3. Dispersion Plot
Step4: Ex.4. Calculate Text Diversity
Step5: Ex.5. Top Most Common Tokens
Step6: Ex.6. Plot Cummulative Distribution Tokens
Step7: Ex.7. Filter for Long Tokens
Step8: Ex.8. Collocations
Step9: Ex.9. Conditional Frequency Distribution
Step10: Ex.10. Plot 1 Conditional Frequency Distribution
Step11: Ex.11. Plot 2 Conditional Frequency Distribution
Step12: Ex.12. Plot 3 Conditional Frequency Distribution
Step13: Ex.13. Generate Random Text with Bigram
Step14: Ex.14. Plot Bar Chart by Frequency Word Distribution
Step15: Ex.15. Words as a Graph | Python Code:
import nltk
from __future__ import division
import matplotlib as mpl
from matplotlib import pyplot as plt
from nltk.book import *
from nltk.corpus import brown
from nltk.corpus import udhr
from nltk.corpus import wordnet as wn
from numpy import arange
import networkx as nx
%matplotlib inline
Explanation: Quick Reference Guide to NLTK
In progress...
<span style="float:right">By Diego Marinho de Oliveira<br/>Last Update: 08-18-2015</span>
This notebook is only meant to demonstrate simple, quick and useful examples of what we can do with NLTK. It can also be used as a reference guide. It is not intended to explore the NLTK package exhaustively. Many examples were extracted directly from the NLTK Book, written by Steven Bird, Ewan Klein, and Edward Loper and distributed under the terms of the Creative Commons Attribution Noncommercial No-Derivative-Works 3.0 US License.
Import Libraries
End of explanation
text1.similar("monstrous")
Explanation: Ex.3. Similar tokens
End of explanation
text2.common_contexts(["monstrous", "very"])
Explanation: Ex.2. Common Context
End of explanation
text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America"])
Explanation: Ex.3. Dispersion Plot
End of explanation
len(set(text4)) / len(text4)  # lexical diversity: ratio of unique tokens to total tokens
Explanation: Ex.4. Calculate Text Diversity
End of explanation
nltk.FreqDist(text1).most_common(5)
Explanation: Ex.5. Top Most Common Tokens
End of explanation
nltk.FreqDist(text1).plot(50, cumulative=True)
Explanation: Ex.6. Plot Cummulative Distribution Tokens
End of explanation
[w for w in text1 if len(w) > 15][:5]
Explanation: Ex.7. Filter for Long Tokens
End of explanation
text4.collocations()
Explanation: Ex.8. Collocations
End of explanation
cfd = nltk.ConditionalFreqDist(
(genre, word)
for genre in brown.categories()
for word in brown.words(categories=genre))
genres = ['news', 'religion', 'hobbies', 'science_fiction', 'romance', 'humor']
modals = ['can', 'could', 'may', 'might', 'must', 'will']
cfd.tabulate(conditions=genres, samples=modals)
Explanation: Ex.9. Conditional Frequency Distribution
End of explanation
cfd = nltk.ConditionalFreqDist(
(target, fileid)
for fileid in inaugural.fileids()
for w in inaugural.words(fileid)
for target in ['america', 'citizen']
if w.lower().startswith(target))
cfd.plot()
Explanation: Ex.10. Plot 1 Conditional Frequency Distribution
End of explanation
languages = ['Chickasaw', 'English', 'German_Deutsch',
'Greenlandic_Inuktikut', 'Hungarian_Magyar', 'Ibibio_Efik']
cfd = nltk.ConditionalFreqDist(
(lang, len(word))
for lang in languages
for word in udhr.words(lang + '-Latin1'))
cfd.plot(cumulative=True)
Explanation: Ex.11. Plot 2 Conditional Frequency Distribution
End of explanation
names = nltk.corpus.names
names.fileids()
# -> ['female.txt', 'male.txt']
male_names = names.words('male.txt')
female_names = names.words('female.txt')
[w for w in male_names if w in female_names]
cfd = nltk.ConditionalFreqDist(
(fileid, name[-1])
for fileid in names.fileids()
for name in names.words(fileid))
cfd.plot()
Explanation: Ex.12. Plot 3 Conditional Frequency Distribution
End of explanation
sent = ['In', 'the', 'beginning', 'God', 'created', 'the', 'heaven',
'and', 'the', 'earth', '.']
list(nltk.bigrams(sent))
def generate_model(cfdist, word, num=15):
result = ''
for i in range(num):
result += word + ' '
word = cfdist[word].max()
print(result.strip())
text = nltk.corpus.genesis.words('english-kjv.txt')
bigrams = nltk.bigrams(text)
cfd = nltk.ConditionalFreqDist(bigrams)
cfd['living']
# -> FreqDist({'creature': 7, 'thing': 4, 'substance': 2, ',': 1, '.': 1, 'soul': 1})
generate_model(cfd, 'living')
Explanation: Ex.13. Generate Random Text with Bigram
End of explanation
colors = 'rgbcmyk' # red, green, blue, cyan, magenta, yellow, black
def bar_chart(categories, words, counts):
"Plot a bar chart showing counts for each word by category"
ind = arange(len(words))
width = 1 / (len(categories) + 1)
bar_groups = []
for c in range(len(categories)):
bars = plt.bar(ind+c*width, counts[categories[c]], width,
color=colors[c % len(colors)])
bar_groups.append(bars)
plt.xticks(ind+width, words)
plt.legend([b[0] for b in bar_groups], categories, loc='upper left')
plt.ylabel('Frequency')
plt.title('Frequency of Six Modal Verbs by Genre')
plt.show()
genres = ['news', 'religion', 'hobbies', 'government', 'adventure']
modals = ['can', 'could', 'may', 'might', 'must', 'will']
cfdist = nltk.ConditionalFreqDist(
(genre, word)
for genre in genres
for word in nltk.corpus.brown.words(categories=genre)
if word in modals)
counts = {}
for genre in genres:
counts[genre] = [cfdist[genre][word] for word in modals]
bar_chart(genres, modals, counts)
Explanation: Ex.14. Plot Bar Chart by Frequency Word Distribution
End of explanation
def traverse(graph, start, node):
graph.depth[node.name] = node.shortest_path_distance(start)
for child in node.hyponyms():
graph.add_edge(node.name, child.name)
traverse(graph, start, child)
def hyponym_graph(start):
G = nx.Graph()
G.depth = {}
traverse(G, start, start)
return G
def graph_draw(graph):
nx.draw_graphviz(graph,
node_size = [16 * graph.degree(n) for n in graph],
node_color = [graph.depth[n] for n in graph],
with_labels = False)
plt.show()
dog = wn.synset('dog.n.01')
graph = hyponym_graph(dog)
graph_draw(graph)
Explanation: Ex.15. Words as a Graph
End of explanation |
12,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Under Construction!
Working with External Web Services
This example shows how to use external services to set up Cytoscape session with pathways
Cytoscape 3.2.0 beta
KEGGScape 0.7.x
cy-rest 0.9.x or later
Input and Output
Input - Disease name
Output - Cytoscape session file containing all KEGG pathways known to be related to the disease.
External Services
KEGG API
TogoWS
Step1: Get list of entries about cancer
Step2: Get pathway list
Step3: Import all cancer related pathways
Step4: Result
Annotate Pathways by External Services
Get list of genes in the pathway
ID Conversion
Interactions | Python Code:
import requests
import json
import pandas as pd
import io
from IPython.display import Image
# Basic Setup
PORT_NUMBER = 1234
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'
# KEGG API
KEGG_API_URL = 'http://rest.kegg.jp/'
# Header for posting data to the server as JSON
HEADERS = {'Content-Type': 'application/json'}
requests.get(BASE)
Explanation: Under Construction!
Working with External Web Services
This example shows how to use external services to set up Cytoscape session with pathways
Cytoscape 3.2.0 beta
KEGGScape 0.7.x
cy-rest 0.9.x or later
Input and Output
Input - Disease name
Output - Cytoscape session file containing all KEGG pathways known to be related to the disease.
External Services
KEGG API
TogoWS
End of explanation
# Find information about cancer from KEGG disease database.
query = 'cancer'
res = requests.get(KEGG_API_URL + '/find/disease/' + query)
pathway_list = res.content.decode('utf8')
disease_df = pd.read_csv(io.StringIO(pathway_list), delimiter='\t', header=None, names=['id', 'name'])
disease_df
Explanation: Get list of entries about cancer
End of explanation
disease_ids = disease_df['id']
disease_urls = disease_ids.apply(lambda x: KEGG_API_URL + 'get/' + x)
def disease_parser(entry):
lines = entry.split('\n')
data = {}
last_key = None
for line in lines:
if '///' in line:
return data
parts = line.split(' ')
if parts[0] is not None and len(parts[0]) != 0:
last_key = parts[0]
data[parts[0]] = line.replace(parts[0], '').strip()
else:
last_val = data[last_key]
data[last_key] = last_val + '|' + line.strip()
return data
result = []
for url in disease_urls:
res = requests.get(url)
rows = disease_parser(res.content)
result.append(rows)
disease_df = pd.DataFrame(result)
pathways = disease_df['PATHWAY'].dropna().unique()
p_urls = []
for pathway in pathways:
entries = pathway.split('|')
for en in entries:
url = KEGG_API_URL + 'get/' + en.split(' ')[0].split('(')[0] + '/kgml'
p_urls.append(url)
Explanation: Get pathway list
End of explanation
def create_from_list(network_list):
server_res = requests.post(BASE + 'networks?source=url&collection=' + query, data=json.dumps(network_list), headers=HEADERS)
return json.loads(server_res.content)
requests.delete(BASE + 'networks')
url_list = list(set(p_urls))
pathway_suids = create_from_list(url_list)
# Check the result.
print(json.dumps(pathway_suids[0], indent=4))
Image(url=BASE+'networks/' + str(pathway_suids[0]['networkSUID'][0]) + '/views/first.png', embed=True)
Explanation: Import all cancer related pathways
End of explanation
# Find SUID for Cancer Overview Pathway
cancer_overview_pathway_suid = None
for result in pathway_suids:
if 'hsa05200' in result['source']:
cancer_overview_pathway_suid = result['networkSUID'][0]
break
rows_res = requests.get(BASE + 'networks/' + str(cancer_overview_pathway_suid) + '/tables/defaultnode/rows')
# Convert it to DataFrame
cancer_df = pd.read_json(rows_res.content)
genes = cancer_df[cancer_df['KEGG_NODE_TYPE'] == 'gene']
gene_set = set([])
count = 0
converted = ''
query_str = ''
for gene_list in genes['KEGG_ID']:
for gene in gene_list:
gene_set.add(gene)
query_str = query_str + gene + '+'
count = count + 1
if count == 99:
conversion = requests.get('http://rest.kegg.jp/conv/uniprot/' + query_str)
converted = converted + conversion.content
count = 0
query_str = ''
conversion = requests.get('http://rest.kegg.jp/conv/uniprot /' + query_str)
converted = converted + conversion.content
print(len(gene_set))
conversion_map = pd.read_csv(io.StringIO(converted.decode('utf8')), delimiter='\t', header=None, names=['KEGG_ID', 'uniprot_id'])
id_list = conversion_map.drop_duplicates()
type(id_list)
id_list['uniprot_id'] = id_list['uniprot_id'].apply(lambda x: x.split(':')[1])
print(json.dumps(json.loads(id_list.to_json(orient='records')), indent=4))
new_column = {
'name': 'KEGG_ID_TOP',
'type': 'String'
}
first_key_res = requests.post(BASE + 'networks/' + str(cancer_overview_pathway_suid) + '/tables/defaultnode/columns', data=json.dumps(new_column), headers=HEADERS)
genes['KEGG_ID_TOP'] = genes.apply(lambda row: row['KEGG_ID'][0], axis=1)
sub_table= genes[['SUID', 'KEGG_ID_TOP']]
sub_table.columns = ['SUID', 'KEGG_ID']
merged = pd.merge(sub_table, id_list, on='KEGG_ID')
merged.columns = ['SUID', 'KEGG_ID_TOP', 'UNIPROT_ID']
table_data = json.loads(merged.to_json(orient='records'))
new_table_data = {
'data': table_data
}
# print(json.dumps(new_table_data, indent=4))
res = requests.put(BASE + 'networks/' + str(cancer_overview_pathway_suid) + '/tables/defaultnode', data=json.dumps(new_table_data), headers=HEADERS)
merged[['UNIPROT_ID']].to_clipboard(index=False, header=False)
Explanation: Result
Annotate Pathways by External Services
Get list of genes in the pathway
ID Conversion
Interactions
End of explanation |
12,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traffic Accidents by San Diego Community
This analysis links traffic accident records from SWITRS to San Diego planning communities to see which communities have the most traffic accidents. Unfortunately, the data, from the years 2014 and 2015, has very poor lat/lon values -- most records have none -- so the number of records that can be linked into communities is too small for a worthwhile answer.
Step1: There are only 106 records with valid lat/lon, out of the original 8919, which is a pretty poor sample. Let's plot those points over a map of the county to see where they are.
Step2: Not only are there only 106 valid points, but a lot of the points are in crazy places, outside of the city, outside of the county, and in the ocean.
Step3: Spatially join the accident points with the communities.
Step4: Out of the small number with real points, only 84 of them link to a community in San Diego, so we can get an answer, but it will not be a good one.
Step5: What's interesting here is that there are a lot of city codes that are in the jurisdiction documentation that aren't in this set | Python Code:
import seaborn as sns
import metapack as mp
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
from shapely.geometry import Point
%matplotlib inline
sns.set_context('notebook')
pkg = mp.jupyter.open_package()
#pkg = mp.jupyter.open_source_package()
pkg
pkg.resource('collisions')
col = pkg.resource('collisions').read_csv()
col_sd = col[col.juris.isin( ['3711','3714','3797', '3725'] )]
len(col_sd)
## Create a new GeoPandas frame, converting the longitude and latitude
## columns to a Shapely Point and assigning it to the frame's geometry
t = col_sd.dropna(subset=['longitude', 'latitude'])
gdf = gpd.GeoDataFrame(t, geometry=
[Point(x,y) for x,y in zip(-t.longitude, t.latitude) ])
len(gdf),len(col_sd)
Explanation: Traffic Accidents by San Diego Community
This analysis links traffic accident records from SWITRS to San Diego planning communities to see which communities have the most traffic accidents. Unfortunately, the data, from the years 2014 and 2015, has very poor lat/lon values -- most records have none -- so the number of records that can be linked into communities is too small for a worthwhile answer.
End of explanation
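A quick check (sketch) of how many of the San Diego records actually carry usable coordinates, using the same latitude/longitude columns as above:
col_sd[['latitude', 'longitude']].notnull().all(axis=1).sum()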
len(col)
comm = pkg.reference('communities').geoframe()
ax = comm.plot()
gdf.plot(ax=ax, color='red')
Explanation: There are only 106 records with valid lat/lon, out of the original 8919, which is a pretty poor sample. Let's plot those points over a map of the county to see where they are.
End of explanation
comm = pkg.reference('communities').geoframe()
comm.head()
sd_comm = comm[ comm['type'] == 'sd_community']
ax = sd_comm.plot()
gdf.plot(ax=ax, color='red')
Explanation: Not only are there only 106 valid points, but a lot of the points are in crazy places, outside of the city, outside of the county, and in the ocean.
End of explanation
j = gpd.sjoin(gdf.dropna(subset=['geometry']), sd_comm)
sum(j.code.notnull())
Explanation: Spatially join the accident points with the communities.
End of explanation
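If stray points are a concern, the join can be made stricter by requiring points to fall inside a community polygon; a sketch (the keyword is op= in older GeoPandas releases and predicate= in newer ones):
j_within = gpd.sjoin(gdf.dropna(subset=['geometry']), sd_comm, how='inner', op='within')
len(j_within)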
j.groupby('name').count().case_id.sort_values(ascending=False)
col.head().T
# SD County
col_sdc = col[ ( (col.juris.str.startswith('37')) | (col.juris == '9645'))] # 9645 is CHP in San Diego City
len(col_sdc)
col_sdc.juris.value_counts().sort_index()
Explanation: Out of the small number with real points, only 84 of them link to a community in San Diego, so we can get an answer, but it will not be a good one.
End of explanation
col[col.juris == '3720']
pkg.resource('sd_juris')
juris = pkg.resource('sd_juris').read_csv()
#juris['code'] = juris.code.astype('str')
t = col_sdc.merge(juris, left_on='cnty_city_loc', right_on='code')
df = t.groupby('name').count().case_id.sort_values(ascending=False).to_frame()
df.columns = ['accidents']
df = df.join(juris.set_index('name'))
df['area'] = df.area / 1_000_000 # sq meters to sq km
# Rate by area and population
df['area_rate'] = df.accidents / df.area
df['pop_rate'] = df.accidents / df.population * 100_000
df['dvmt_rate'] = df.accidents / df.daily_vmt
df['road_rate'] = df.accidents / df.road_miles
df.sort_values('pop_rate', ascending=False)
from tabulate import tabulate
print(tabulate(df[['accidents','area_rate','pop_rate']], headers='keys', tablefmt='pipe'))
from tabulate import tabulate
print(tabulate(df[['dvmt_rate', 'road_rate']].sort_values('road_rate', ascending=False), headers='keys', tablefmt='pipe'))
Explanation: What's interesting here is that there are a lot of city codes that are in the jurisdiction documentation that aren't in this set: 3780 Poway, 3720 Lemon Grove. From the look of the documentation, these values might be mapped to 3700.
End of explanation |
12,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rover Project Test Notebook
This notebook contains the functions that provide the scaffolding needed to test out mapping methods. The following steps are taken to test functions and calibrate data for the project
Step1: Quick Look at the Data
There's some example data provided in the test_dataset folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator.
Next, read in and display a random image from the test_dataset folder
Step2: Calibration Data
Read in and display example grid and rock sample calibration images. The grid is used for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
Step4: Perspective Transform
Define the perspective transform function and test it on an image.
Four source points are selected which represent a 1 square meter grid in the image viewed from the rover's front camera. These source points are subsequently mapped to four corresponding grid cell points in our "warped" image such that a grid cell in it is 10x10 pixels viewed from top-down. Thus, the front_cam image is said to be warped into a top-down view image by the perspective transformation. The example grid image above is used to choose source points for the grid cell which is in front of the rover (each grid cell is 1 square meter in the sim). The source and destination points are defined to warp the image to a grid where each 10x10 pixel square represents 1 square meter.
The following steps are used to warp an image using a perspective transform
Step6: Color Thresholding
Define the color thresholding function for navigable terrain and apply it to the warped image.
Ultimately, the map not only includes navigable terrain but also obstacles and the positions of the rock samples we're searching for. New functions are needed to return the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that these areas can be mapped into world coordinates as well.
Color thresholding for navigable terrain
Step8: Color thresholding for obstacle terrain
Step10: Color thresholding for gold rocks
Step16: Coordinate Transformations
Define the functions used to do coordinate transforms and apply them to an image.
Step17: Testing left and right nav angles
Step18: Testing Image Pixels for Improving Fidelity
Step21: Read in saved data and ground truth map of the world
The next cell is all setup to read data saved from rover sensors into a pandas dataframe. Here we'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When the class (data = SensorData()) is instantiated, we'll have a global variable called data that can be referenced for telemetry and to map data within the process_image() function in the following cell.
Step23: Write a function to process stored images
The process_image() function below is modified by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the rover camera image data taken in the simulator.
In short, we will be passing individual images into process_image() and building up an image called output_image that will be stored as one frame of the output video. A mosaic of the various steps of above analysis process and additional text can also be added.
The output video ultimately demonstrates our mapping process.
Step24: Make a video from processed image data
Use the moviepy library to process images and create a video.
Step26: This next cell should function as an inline video player
If this fails to render the video, the alternative video rendering method in the following cell can be run. The output video mp4 is saved in the /output folder.
Step27: Below is an alternative way to create a video in case the above cell did not work. | Python Code:
#%%HTML
#<style> code {background-color : orange !important;} </style>
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
Explanation: Rover Project Test Notebook
This notebook contains the functions that provide the scaffolding needed to test out mapping methods. The following steps are taken to test functions and calibrate data for the project:
The simulator is run in "Training Mode" and some data is recorded. Note: the simulator may crash if a large (longer than a few minutes) dataset is recorded; only a small data sample is required i.e. just some example images to work with.
The functions are tested with the data.
Functions are written and modified to report and map out detections of obstacles and rock samples (yellow rocks).
process_image() function is populated with the appropriate steps/functions to go from a raw image to a worldmap.
moviepy functions are used to construct a video output from processed image data.
Once it is confirmed that mapping is working, perception.py and decision.py are modified to allow the rover to navigate and map in autonomous mode!
Note: If, at any point, display windows freeze up or other confounding issues are encountered, Kernel should be restarted and output cleared from the "Kernel" menu above.
Uncomment and run the next cell to get code highlighting in the markdown cells.
End of explanation
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
plt.imshow(image)
Explanation: Quick Look at the Data
There's some example data provided in the test_dataset folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator.
Next, read in and display a random image from the test_dataset folder
End of explanation
# In the simulator the grid on the ground can be toggled on for calibration.
# The rock samples can be toggled on with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
example_rock2 = '../calibration_images/example_rock2.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
rock_img2 = mpimg.imread(example_rock2)
fig = plt.figure(figsize=(12,3))
plt.subplot(131)
plt.imshow(grid_img)
plt.subplot(132)
plt.imshow(rock_img)
plt.subplot(133)
plt.imshow(rock_img2)
Explanation: Calibration Data
Read in and display example grid and rock sample calibration images. The grid is used for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.
End of explanation
def perspect_transform(input_img, sourc_pts, destn_pts):
Apply a perspective transformation to input 3D image.
Keyword arguments:
input_img -- 3D numpy image on which perspective transform is applied
sourc_pts -- numpy array of four source coordinates on input 3D image
destn_pts -- corresponding destination coordinates on output 2D image
Return value:
output_img -- 2D numpy image with overhead view
transform_matrix = cv2.getPerspectiveTransform(
sourc_pts,
destn_pts
)
output_img = cv2.warpPerspective(
input_img,
transform_matrix,
(input_img.shape[1], input_img.shape[0]) # keep same size as input_img
)
return output_img
# Define calibration box in source (actual) and destination (desired)
# coordinates to warp the image to a grid where each 10x10 pixel square
# represents 1 square meter and the destination box will be 2*dst_size
# on each side
dst_size = 5
# Set a bottom offset (rough estimate) to account for the fact that the
# bottom of the image is not the position of the rover but a bit in front
# of it
bottom_offset = 6
source = np.float32(
[[14, 140],
[301, 140],
[200, 96],
[118, 96]]
)
destination = np.float32(
[
[image.shape[1]/2 - dst_size,
image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size,
image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size,
image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size,
image.shape[0] - 2*dst_size - bottom_offset]
]
)
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
# scipy.misc.imsave('../output/warped_example.jpg', warped)
warped_rock = perspect_transform(rock_img, source, destination)
warped_rock2 = perspect_transform(rock_img2, source, destination)
fig = plt.figure(figsize=(16,7))
plt.subplot(221)
plt.imshow(rock_img)
plt.subplot(222)
plt.imshow(rock_img2)
plt.subplot(223)
plt.imshow(warped_rock)
plt.subplot(224)
plt.imshow(warped_rock2)
rock1_pixels = np.copy(rock_img)
plt.imshow(rock1_pixels[90:112,150:172])
Explanation: Perspective Transform
Define the perspective transform function and test it on an image.
Four source points are selected which represent a 1 square meter grid in the image viewed from the rover's front camera. These source points are subsequently mapped to four corresponding grid cell points in our "warped" image such that a grid cell in it is 10x10 pixels viewed from top-down. Thus, the front_cam image is said to be warped into a top-down view image by the perspective transformation. The example grid image above is used to choose source points for the grid cell which is in front of the rover (each grid cell is 1 square meter in the sim). The source and destination points are defined to warp the image to a grid where each 10x10 pixel square represents 1 square meter.
The following steps are used to warp an image using a perspective transform:
Define 4 source points, in this case, the 4 corners of a grid cell in the front camera image above.
Define 4 destination points (must be listed in the same order as source points!).
Use cv2.getPerspectiveTransform() to get M, the transform matrix.
Use cv2.warpPerspective() to apply M and warp front camera image to a top-down view.
Refer to the following documentation for geometric transformations in OpenCV:
http://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html
End of explanation
def color_thresh_nav(input_img, rgb_thresh=(160, 160, 160)):
Apply a color threshold to extract only ground terrain pixels.
Keyword arguments:
input_img -- numpy image on which RGB threshold is applied
rgb_thresh -- RGB thresh tuple above which only ground pixels are detected
Return value:
nav_img -- binary image identifying ground/navigable terrain pixels
# Create an array of zeros same xy size as input_img, but single channel
nav_img = np.zeros_like(input_img[:, :, 0])
# Require that each of the R(0), G(1), B(2) pixels be above all three
# rgb_thresh values such that pix_above_thresh will now contain a
# boolean array with "True" where threshold was met
pix_above_thresh = (
(input_img[:, :, 0] > rgb_thresh[0]) &
(input_img[:, :, 1] > rgb_thresh[1]) &
(input_img[:, :, 2] > rgb_thresh[2])
)
# Index the array of zeros with the boolean array and set to 1 (white)
# those pixels that are above rgb_thresh for ground/navigable terrain
nav_img[pix_above_thresh] = 1
# nav_img will now contain white pixels identifying navigable terrain
return nav_img
threshed = color_thresh_nav(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
Explanation: Color Thresholding
Define the color thresholding function for navigable terrain and apply it to the warped image.
Ultimately, the map not only includes navigable terrain but also obstacles and the positions of the rock samples we're searching for. New functions are needed to return the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that these areas can be mapped into world coordinates as well.
Color thresholding for navigable terrain
End of explanation
def color_thresh_obst(input_img, rgb_thresh=(160, 160, 160)):
Apply a color threshold to extract only mountain rock pixels.
Keyword arguments:
input_img -- numpy image on which RGB threshold is applied
rgb_thresh -- RGB thresh tuple below which only obstacle pixels are detected
Return value:
nav_img -- binary image identifying rocks/obstacles terrain pixels
# Create an array of zeros same xy size as input_img, but single channel
obs_img = np.zeros_like(input_img[:, :, 0])
# Require that each of the R(0), G(1), B(2) pixels be below all three
# rgb_thresh values such that pix_below_thresh will now contain a
# boolean array with "True" where threshold was met
#pix_below_thresh = (
# (input_img[:, :, 0] < rgb_thresh[0]) &
# (input_img[:, :, 1] < rgb_thresh[1]) &
# (input_img[:, :, 2] < rgb_thresh[2])
#)
pix_below_thresh = (
(np.logical_and(input_img[:, :, 0] > 0,input_img[:, :, 0] <= rgb_thresh[0])) &
(np.logical_and(input_img[:, :, 1] > 0,input_img[:, :, 1] <= rgb_thresh[1])) &
(np.logical_and(input_img[:, :, 2] > 0,input_img[:, :, 2] <= rgb_thresh[2]))
)
# Index the array of zeros with the boolean array and set to 1 (white)
# those pixels that are below rgb_thresh for rocks/obstacles terrain
obs_img[pix_below_thresh] = 1
# obs_img will now contain white pixels identifying obstacle terrain
return obs_img
threshed_obstacles_image = color_thresh_obst(warped)
plt.imshow(threshed_obstacles_image, cmap='gray')
Explanation: Color thresholding for obstacle terrain
End of explanation
def color_thresh_rock(input_img, low_bound, upp_bound):
Apply a color threshold using OpenCV to extract pixels for gold rocks.
Keyword arguments:
input_img -- numpy image on which OpenCV HSV threshold is applied
low_bound -- tuple defining lower HSV color value for gold rocks
upp_bound -- tuple defining upper HSV color value for gold rocks
Return value:
threshed_img -- binary image identifying gold rock pixels
# Convert BGR to HSV
hsv_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2HSV)
# Threshold the HSV image to get only colors for gold rocks
threshed_img = cv2.inRange(hsv_img, low_bound, upp_bound)
return threshed_img
# define range of gold rock color in HSV
lower_bound = (75, 130, 130)
upper_bound = (255, 255, 255)
# apply rock color threshold to original rocks 1 and 2 images
threshed_rock_image = color_thresh_rock(
rock_img,
lower_bound,
upper_bound
)
threshed_rock2_image = color_thresh_rock(
rock_img2,
lower_bound,
upper_bound
)
# apply rock color threshold to warped rocks 1 and 2 images
threshed_warped_rock_image = color_thresh_rock(
warped_rock,
lower_bound,
upper_bound
)
threshed_warped_rock2_image = color_thresh_rock(
warped_rock2,
lower_bound,
upper_bound
)
# verify correctness of gold rock threshold
fig = plt.figure(figsize=(20,11))
plt.subplot(421)
plt.imshow(rock_img)
plt.subplot(422)
plt.imshow(threshed_rock_image, cmap='gray')
plt.subplot(423)
plt.imshow(warped_rock)
plt.subplot(424)
plt.imshow(threshed_warped_rock_image, cmap='gray')
plt.subplot(425)
plt.imshow(rock_img2)
plt.subplot(426)
plt.imshow(threshed_rock2_image, cmap='gray')
plt.subplot(427)
plt.imshow(warped_rock2)
plt.subplot(428)
plt.imshow(threshed_warped_rock2_image, cmap='gray')
Explanation: Color thresholding for gold rocks
End of explanation
def to_rover_coords(binary_img):
Convert all points on img coord-frame to those on rover's frame.
# Identify nonzero pixels in binary image representing
# region of interest e.g. rocks
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to rover's coordinate
# frame given that rover front cam itself is at center bottom of
# the photographed image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2).astype(np.float)
return x_pixel, y_pixel
def to_polar_coords(x_pix, y_pix):
Convert cartesian coordinates to polar coordinates.
# compute distance and angle of 'each' pixel from origin and
# vertical respectively
distances = np.sqrt(x_pix**2 + y_pix**2)
angles = np.arctan2(y_pix, x_pix)
return distances, angles
def rotate_pix(x_pix, y_pix, angle):
Apply a geometric rotation.
angle_rad = angle * np.pi / 180 # yaw to radians
x_pix_rotated = (x_pix * np.cos(angle_rad)) - (y_pix * np.sin(angle_rad))
y_pix_rotated = (x_pix * np.sin(angle_rad)) + (y_pix * np.cos(angle_rad))
return x_pix_rotated, y_pix_rotated
def translate_pix(x_pix_rot, y_pix_rot, x_pos, y_pos, scale):
Apply a geometric translation and scaling.
x_pix_translated = (x_pix_rot / scale) + x_pos
y_pix_translated = (y_pix_rot / scale) + y_pos
return x_pix_translated, y_pix_translated
def pix_to_world(x_pix, y_pix, x_pos, y_pos, yaw, world_size, scale):
Apply a geometric transformation i.e. rotation and translation to ROI.
Keyword arguments:
x_pix, y_pix -- numpy array coords of ROI being converted to world frame
x_pos, y_pos, yaw -- rover position and yaw angle in world frame
world_size -- integer length of the square world map (200 x 200 pixels)
scale -- scale factor between world frame pixels and rover frame pixels
Note:
Requires functions rotate_pix and translate_pix to work
# Apply rotation and translation
x_pix_rot, y_pix_rot = rotate_pix(
x_pix,
y_pix,
yaw
)
x_pix_tran, y_pix_tran = translate_pix(
x_pix_rot,
y_pix_rot,
x_pos,
y_pos,
scale
)
# Clip pixels to be within world_size
x_pix_world = np.clip(np.int_(x_pix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(y_pix_tran), 0, world_size - 1)
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list)-1)
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh_nav(warped)
# Calculate pixel values in rover-centric coords and
# distance/angle to all pixels
xpix, ypix = to_rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
######## TESTING ############
xpix = xpix[dist < 130]
ypix = ypix[dist < 130]
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
Explanation: Coordinate Transformations
Define the functions used to do coordinate transforms and apply them to an image.
End of explanation
x_nav_test_pix, y_nav_test_pix = to_rover_coords(threshed)
nav_test_dists, nav_test_angles = to_polar_coords(x_nav_test_pix, y_nav_test_pix)
mean_test_angle = np.mean(nav_test_angles)
# separate nav_test_angles into left and right angles
nav_test_left_angles = nav_test_angles[nav_test_angles > 0]
mean_test_left_angle = np.mean(nav_test_left_angles)
nav_test_right_angles = nav_test_angles[nav_test_angles < 0]
mean_test_right_angle = np.mean(nav_test_right_angles)
print('nav_test_angles:')
print(nav_test_angles)
print('amount: ', len(nav_test_angles))
print('mean:', mean_test_angle * 180 / np.pi)
print('')
print('nav_test_left_angles:')
print(nav_test_left_angles)
print('amount: ', len(nav_test_left_angles))
print('mean:', mean_test_left_angle * 180 / np.pi)
print('')
print('nav_test_right_angles:')
print(nav_test_right_angles)
print('amount: ', len(nav_test_right_angles))
print('mean:', mean_test_right_angle * 180 / np.pi)
print('')
#### do some plotting ######
fig = plt.figure(figsize=(12,9))
plt.plot(x_nav_test_pix, y_nav_test_pix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 180)
arrow_length = 150
# main test angle
x_mean_test_angle = arrow_length * np.cos(mean_test_angle)
y_mean_test_angle = arrow_length * np.sin(mean_test_angle)
plt.arrow(0, 0, x_mean_test_angle, y_mean_test_angle, color='red', zorder=2, head_width=10, width=2)
# main left test angle
x_mean_test_left_angle = arrow_length * np.cos(mean_test_left_angle)
y_mean_test_left_angle = arrow_length * np.sin(mean_test_left_angle)
plt.arrow(0, 0, x_mean_test_left_angle, y_mean_test_left_angle, color='yellow', zorder=2, head_width=10, width=2)
# main right test angle
x_mean_test_right_angle = arrow_length * np.cos(mean_test_right_angle)
y_mean_test_right_angle = arrow_length * np.sin(mean_test_right_angle)
plt.arrow(0, 0, x_mean_test_right_angle, y_mean_test_right_angle, color='blue', zorder=2, head_width=10, width=2)
Explanation: Testing left and right nav angles
End of explanation
nav_x_pixs, nav_y_pixs = to_rover_coords(threshed)
nav_dists, nav_angles = to_polar_coords(nav_x_pixs, nav_y_pixs)
print('nav_x_pixs:')
print(nav_x_pixs)
print(nav_x_pixs.shape)
print('')
print('nav_y_pixs:')
print(nav_y_pixs)
print(nav_y_pixs.shape)
print('')
print('nav_dists:')
print('len(nav_dists):', len(nav_dists))
print(nav_dists[:4])
print('mean:', np.mean(nav_dists))
print('shape:', nav_dists.shape)
print('')
# remove some pixels that are farthest away
#indexes_to_remove = []
#trim_nav_x_pixs = np.delete(nav_x_pixs, x )
trim_nav_x_pixs = nav_x_pixs[nav_dists < 120]
print('trim_nav_x_pixs')
print(trim_nav_x_pixs)
trim_nav_y_pixs = nav_y_pixs[nav_dists < 120]
print('trim_nav_y_pixs')
print(trim_nav_y_pixs)
Explanation: Testing Image Pixels for Improving Fidelity
End of explanation
import pandas as pd
# Change the path below to your data directory
# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator
# change the '.' to ','
# Read in csv log file as dataframe
df = pd.read_csv('../test_dataset_2/robot_log.csv', delimiter=';', decimal='.')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack(
(ground_truth*0,
ground_truth*255,
ground_truth*0)
).astype(np.float)
class SensorData():
Create a class to be a container of rover sensor data from sim.
Reads in saved data from csv sensor log file generated by sim which
includes saved locations of front camera snapshots and corresponding
rover position and yaw values in world coordinate frame
def __init__(self):
Initialize a SensorData instance unique to a single simulation run.
worldmap instance variable is instantiated with a size of 200 square
grids corresponding to a 200 square meters space which is same size as
the 200 square pixels ground_truth variable allowing full range
of output position values in x and y from the sim
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
# running index set to -1 as hack because moviepy
# (below) seems to run one extra iteration
self.count = -1
self.worldmap = np.zeros((200, 200, 3)).astype(np.float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a SensorData().. this will be a global variable/object
# that can be referenced in the process_image() function below
data = SensorData()
Explanation: Read in saved data and ground truth map of the world
The next cell is all set up to read data saved from rover sensors into a pandas dataframe. Here we'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When the class (data = SensorData()) is instantiated, we'll have a global variable called data that can be referenced for telemetry and to map data within the process_image() function in the following cell.
End of explanation
def process_image(input_img):
Establish ROIs in rover cam image and overlay with ground truth map.
Keyword argument:
input_img -- 3 channel color image
Return value:
output_img -- 3 channel color image with ROIs identified
Notes:
Requires data (a global SensorData object)
Required by the ImageSequenceClip object from the moviepy module
# Example of how to use the SensorData() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
# 1) Define source and destination points for perspective transform
# 2) Apply perspective transform
warped_img = perspect_transform(input_img, source, destination)
# 3) Apply color threshold to identify following ROIs:
# a. navigable terrain
# b. obstacles
# c. rock samples
threshed_img_navigable = color_thresh_nav(warped_img)
threshed_img_obstacle = color_thresh_obst(warped_img)
threshed_img_rock = color_thresh_rock(
warped_img,
lower_bound,
upper_bound
)
# 4) Convert thresholded image pixel values to rover-centric coords
navigable_x_rover, navigable_y_rover = to_rover_coords(threshed_img_navigable)
obstacle_x_rover, obstacle_y_rover = to_rover_coords(threshed_img_obstacle)
rock_x_rover, rock_y_rover = to_rover_coords(threshed_img_rock)
########################### TESTING ############################
nav_dists = to_polar_coords(navigable_x_rover, navigable_y_rover)[0]
navigable_x_rover = navigable_x_rover[nav_dists < 130]
navigable_y_rover = navigable_y_rover[nav_dists < 130]
# 5) Convert rover-centric pixel values to world coords
my_worldmap = np.zeros((200, 200))
my_scale = 10 # scale factor assumed between world and rover space pixels
#curr_rover_xpos = data.xpos[data.count-1]
#curr_rover_ypos = data.ypos[data.count-1]
#curr_rover_yaw = data.yaw[data.count-1]
navigable_x_world, navigable_y_world = pix_to_world(
navigable_x_rover,
navigable_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
#curr_rover_xpos,
#curr_rover_ypos,
#curr_rover_yaw,
my_worldmap.shape[0],
my_scale
)
obstacle_x_world, obstacle_y_world = pix_to_world(
obstacle_x_rover,
obstacle_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
#curr_rover_xpos,
#curr_rover_ypos,
#curr_rover_yaw,
my_worldmap.shape[0],
my_scale
)
rock_x_world, rock_y_world = pix_to_world(
rock_x_rover,
rock_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
#curr_rover_xpos,
#curr_rover_ypos,
#curr_rover_yaw,
my_worldmap.shape[0],
my_scale
)
# 6) Update worldmap (to be displayed on right side of screen)
#data.worldmap[obstacle_y_world, obstacle_x_world] = (255,0,0)
#data.worldmap[rock_y_world, rock_x_world] = (255,255,255)
#data.worldmap[navigable_y_world, navigable_x_world] = (0,0,255)
data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
data.worldmap[rock_y_world, rock_x_world, 1] += 1
data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
# 7) Make a mosaic image
# First create a blank image (can be whatever shape)
output_image = np.zeros(
(input_img.shape[0] + data.worldmap.shape[0],
input_img.shape[1]*2,
3)
)
# Next we populate regions of the image with various output
# Here we're putting the original image in the upper left hand corner
output_image[0:input_img.shape[0], 0:input_img.shape[1]] = input_img
# add a warped image to the mosaic
warped = perspect_transform(input_img, source, destination)
# Add the warped image in the upper right hand corner
output_image[0:input_img.shape[0], input_img.shape[1]:] = warped
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[
input_img.shape[0]:,
0:data.worldmap.shape[1]
] = np.flipud(map_add)
# Then putting some text over the image
cv2.putText(
output_image,
"Populate this image with your analyses to make a video!",
(20, 20),
cv2.FONT_HERSHEY_COMPLEX,
0.4,
(255, 255, 255),
1
)
data.count += 1 # Keep track of the index in the SensorData()
return output_image
Explanation: Write a function to process stored images
The process_image() function below is modified by adding in the perception step processes (functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the rover camera image data taken in the simulator.
In short, we will be passing individual images into process_image() and building up an image called output_image that will be stored as one frame of the output video. A mosaic of the various steps of the above analysis process and additional text can also be added.
The output video ultimately demonstrates our mapping process.
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
# Re-initialize data in case this cell is run multiple times
data = SensorData()
# Note: output video will be sped up because recording rate in
# simulator is fps=25
clip = ImageSequenceClip(data.images, fps=60)
new_clip = clip.fl_image(process_image) # process_image expects color images!
%time new_clip.write_videofile(output, audio=False)
Explanation: Make a video from processed image data
Use the moviepy library to process images and create a video.
End of explanation
from IPython.display import HTML
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(output))
Explanation: This next cell should function as an inline video player
If this fails to render the video, the alternative video rendering method in the following cell can be run. The output video mp4 is saved in the /output folder.
End of explanation
import io
import base64
video = io.open(output, 'r+b').read()
encoded_video = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded_video.decode('ascii')))
Explanation: Below is an alternative way to create a video in case the above cell did not work.
End of explanation |
12,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MAT245 Lab 5 - Linear Regression
Overview
Regression analysis is a set of statistical techniques for modelling the relationships between a dependent variable and a set of independent (or predictor) variables. Linear regression in particular assumes these relationships are linear. We are going to explore a few different ways of finding "optimal" parameters for a linear regression model.
Loading the data
We will be working with a subset of scikit-learn's Boston housing dataset. The goal is to construct a linear regression model that predicts the price of a house given a few metrics about the neighbourhood it's in. The Boston data can be loaded by
Step1: There are thirteen columns of data, and 506 samples. A description of what each column means can be found by investigating bost['DESCR']. For visualization purposes, we'll work with a 2 dimensional subset
Step2: The data in column index 2 represents the proportion of non-retail business acres in the town, while column index 12 is the percentage of lower-status population in the neighbourhood (LSTAT).
The target variable (i.e. what we want to predict) is the median house price, in $1000's; these values are stored in bost.target. Let's plot house prices as a function of our inputs to see what we're dealing with. | Python Code:
from sklearn import datasets
bost = datasets.load_boston()
bost.keys()
bost.data.shape
Explanation: MAT245 Lab 5 - Linear Regression
Overview
Regression analysis is a set of statistical techniques for modelling the relationships between a dependent variable and a set of independent (or predictor) variables. Linear regression in particular assumes these relationships are linear. We are going to explore a few different ways of finding "optimal" parameters for a linear regression model.
Loading the data
We will be working with a subset of scikit-learn's Boston housing dataset. The goal is to construct a linear regression model that predicts the price of a house given a few metrics about the neighbourhood it's in. The Boston data can be loaded by:
End of explanation
xs = bost.data[:, [2, 12]]
Explanation: There are thirteen columns of data, and 506 samples. A description of what each column means can be found by investigating bost['DESCR']. For visualization purposes, we'll work with a 2 dimensional subset:
End of explanation
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
for i in range(1, 10):
ax = fig.add_subplot(3, 3, i, projection='3d')
ax.view_init(elev=10., azim = (360.0 / 9.) * (i - 1))
ax.scatter(xs[:,0], xs[:,1], bost.target)
plt.show()
Explanation: The data in column index 2 represents the proportion of non-retail business acres in the town, while column index 12 is the percentage of lower-status population in the neighbourhood (LSTAT).
The target variable (i.e. what we want to predict) is the median house price, in $1000's; these values are stored in bost.target. Let's plot house prices as a function of our inputs to see what we're dealing with.
End of explanation |
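As a first baseline (a sketch, not part of the original lab), the "optimal" parameters in the least-squares sense can be obtained directly with NumPy, reusing the xs and bost objects defined above:
import numpy as np
# Stack an intercept column next to the two features and solve min ||A w - y||^2.
A = np.column_stack([np.ones(len(xs)), xs])
w, *_ = np.linalg.lstsq(A, bost.target, rcond=None)
print('intercept and coefficients:', w)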
12,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q3
This question will focus on looping and using dictionaries.
Part A
In this part, you'll write a safe version of testing a dictionary for a specific key and extracting the corresponding element. Normally, if you try to access a key that does not exist, your program crashes with a KeyError exception.
Here, you'll write a function which
Step1: Part B
Write a function that finds all occurrences of a specific value in a list, and records--in a separate list--these occurrences in terms of the indices where they were found.
Your function should
Step2: Part C
Write a function which provides counts of very specific elements in a list. This function will return a dictionary where the keys are the elements you wanted to find, and their values are the number of times (counts) they were found in the target list.
HINT | Python Code:
v1 = safe_access({"one": [1, 2, 3], "two": [4, 5, 6], "three": "something"}, "three")
assert v1 == "something"
v2 = safe_access({"one": [1, 2, 3], "two": [4, 5, 6], "three": "something"}, "two", [10, 11, 12])
assert set(v2) == set((4, 5, 6))
default_val = 3
try:
value = safe_access({"one": 1, "two": 2}, "three", default_val)
except:
assert False
else:
assert value == default_val
Explanation: Q3
This question will focus on looping and using dictionaries.
Part A
In this part, you'll write a safe version of testing a dictionary for a specific key and extracting the corresponding element. Normally, if you try to access a key that does not exist, your program crashes with a KeyError exception.
Here, you'll write a function which:
is named safe_access
takes 3 arguments: a dictionary, a key, and a default return value
returns 1 value: the value in the dictionary for the given key, or the default return value
The third argument to this function is a default argument, whose initial value should be the Python keyword None.
Your task in this function is three-fold:
Try to access the value in the dictionary at the given key (wrap this in a try-except statement)
If the access is successful, just return the value
If the access is NOT successful (i.e. you caught a KeyError), then return the default value instead
You must include a try-except statement in your function!
End of explanation
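One possible implementation that satisfies the spec above (a sketch; any equivalent try-except version is fine):
def safe_access(dictionary, key, default_value=None):
    # Try the lookup; fall back to the default if the key is missing.
    try:
        return dictionary[key]
    except KeyError:
        return default_value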
l1 = [1, 2, 3, 4, 5, 2]
s1 = 2
a1 = [1, 5]
assert set(a1) == set(find_all(l1, s1))
l2 = ["a", "random", "set", "of", "strings", "for", "an", "interesting", "strings", "problem"]
s2 = "strings"
a2 = [4, 8]
assert set(a2) == set(find_all(l2, s2))
l3 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
s3 = 1001
a3 = []
assert set(a3) == set(find_all(l3, s3))
Explanation: Part B
Write a function that finds all occurrences of a specific value in a list, and records--in a separate list--these occurrences in terms of the indices where they were found.
Your function should:
be named find_all
take 2 arguments: a list of items, and a single element to search for in the list
returns 1 list: a list of indices into the input list that correspond to elements in the input list that match what we were looking for
For example, find_all([1, 2, 3, 4, 5, 2], 2) would return [1, 5]. find_all([1, 2, 3], 0) should return [].
You cannot use any built-in functions besides len() and range().
End of explanation
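One possible implementation matching the spec above (a sketch that uses only len() and range(), as required):
def find_all(items, element):
    # Record every index whose item equals the element we are searching for.
    indices = []
    for i in range(len(items)):
        if items[i] == element:
            indices.append(i)
    return indices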
l1 = [1, 2, 3, 4, 5, 2]
s1 = [2, 5]
a1 = {2: 2, 5: 1}
assert a1 == element_counts(l1, s1)
l2 = ["a", "random", "set", "of", "strings", "for", "an", "interesting", "strings", "problem"]
s2 = ["strings", "of", "notinthelist"]
a2 = {"strings": 2, "of": 1, "notinthelist": 0}
assert a2 == element_counts(l2, s2)
Explanation: Part C
Write a function which provides counts of very specific elements in a list. This function will return a dictionary where the keys are the elements you wanted to find, and their values are the number of times (counts) they were found in the target list.
HINT: You can use your answer to Part B here to expedite the solution! You don't have to, but you can, as it partially solves the problem here. To do that, simply call your function find_all--you do NOT need to copy/paste it into the cell below! Python will see it in an earlier cell and will know that it exists (as long as you've hit the "Play" button on that cell, of course, and if you make any changes to it, you should hit the "Play" button again; this updates it in the Python program, kind of like hitting "Save" on a Word doc).
Your function should:
be named element_counts
take 2 arguments, both lists: a list of your data, and a list of elements you want counted in your data
return a dictionary: keys are the elements you wanted counted, and values are their counts in the data
For example, element_counts([1, 2, 3, 4, 5, 2], [2, 5]) would return {2: 2, 5: 1}, as there were two 2s in the data list, and one 5. Another example would be element_counts([1, 2, 3], [0, 1]), which would return {0: 0, 1: 1}, as there were no 0s in the original list, and one 1.
You cannot use any built-in functions besides len() and range().
End of explanation |
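One possible implementation following the hint above (a sketch that reuses find_all from Part B):
def element_counts(data, targets):
    # The count for each target is simply how many indices find_all returns.
    counts = {}
    for target in targets:
        counts[target] = len(find_all(data, target))
    return counts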
12,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
Step1: sources looks like this
```
{'_1964'
Step2: Build the plot
Step3: Build the axes
Step4: Add the background year text
We add this first so it is below all the other glyphs
Step5: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
We want the circles to be colored by the region they're in, so we use a CategoricalColorMapper to build the map and apply it to fill color in the transform field.
Step7: Add the slider and callback
Next we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
Step8: Render together with a slider
Last but not least, we put the chart and the slider together in a layout and display it inline in the notebook. | Python Code:
fertility_df, life_expectancy_df, population_df_size, regions_df, years, regions_list = process_data()
sources = {}
region_name = regions_df.Group
region_name.name = 'region'
for year in years:
fertility = fertility_df[year]
fertility.name = 'fertility'
life = life_expectancy_df[year]
life.name = 'life'
population = population_df_size[year]
population.name = 'population'
new_df = pd.concat([fertility, life, population, region_name], axis=1)
sources['_' + str(year)] = ColumnDataSource(new_df)
Explanation: Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
End of explanation
dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
Explanation: sources looks like this
```
{'_1964': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165cc0>,
'_1965': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165b00>,
'_1966': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d1656a0>,
'_1967': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165ef0>,
'_1968': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9dac18>,
'_1969': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da9b0>,
'_1970': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da668>,
'_1971': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da0f0>...
```
We will pass this dictionary to the Callback. In doing so, we will find that in our javascript we have an object called, for example _1964 that refers to our ColumnDataSource. Note that we needed the prefixing _ as JS objects cannot begin with a number.
Finally we construct a string that we can insert into our javascript code to define an object.
The string looks like this: {1962: _1962, 1963: _1963, ....}
Note the keys of this object are integers and the values are the references to our ColumnDataSources from above. So that now, in our JS code, we have an object that's storing all of our ColumnDataSources and we can look them up.
End of explanation
# Set up the plot
xdr = Range1d(1, 9)
ydr = Range1d(20, 100)
plot = Plot(
x_range=xdr,
y_range=ydr,
plot_width=800,
plot_height=400,
outline_line_color=None,
toolbar_location=None,
min_border=20,
)
Explanation: Build the plot
End of explanation
AXIS_FORMATS = dict(
minor_tick_in=None,
minor_tick_out=None,
major_tick_in=None,
major_label_text_font_size="10pt",
major_label_text_font_style="normal",
axis_label_text_font_size="10pt",
axis_line_color='#AAAAAA',
major_tick_line_color='#AAAAAA',
major_label_text_color='#666666',
major_tick_line_cap="round",
axis_line_cap="round",
axis_line_width=1,
major_tick_line_width=1,
)
xaxis = LinearAxis(ticker=SingleIntervalTicker(interval=1), axis_label="Children per woman (total fertility)", **AXIS_FORMATS)
yaxis = LinearAxis(ticker=SingleIntervalTicker(interval=20), axis_label="Life expectancy at birth (years)", **AXIS_FORMATS)
plot.add_layout(xaxis, 'below')
plot.add_layout(yaxis, 'left')
Explanation: Build the axes
End of explanation
# Add the year in background (add before circle)
text_source = ColumnDataSource({'year': ['%s' % years[0]]})
text = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')
plot.add_glyph(text_source, text)
Explanation: Add the background year text
We add this first so it is below all the other glyphs
End of explanation
# Make a ColorMapper
color_mapper = CategoricalColorMapper(palette=Spectral6, factors=regions_list)
# Add the circle
renderer_source = sources['_%s' % years[0]]
circle_glyph = Circle(
x='fertility', y='life', size='population',
fill_color={'field': 'region', 'transform': color_mapper},
fill_alpha=0.8,
line_color='#7c7e71', line_width=0.5, line_alpha=0.5)
circle_renderer = plot.add_glyph(renderer_source, circle_glyph)
# Add the hover (only against the circle and not other plot elements)
tooltips = "@index"
plot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))
# We want a legend for the circles. The legend will be populated based on the label='region'
# which is a column of the data source - it will take only the unique values.
plot.add_layout(Legend(items=[LegendItem(label='region', renderers=[circle_renderer])]))
Explanation: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
We want the circles to be colored by the region they're in, so we use a CategoricalColorMapper to build the map and apply it to fill color in the transform field.
End of explanation
# Add the slider
code =
var year = slider.value,
sources = %s,
new_source_data = sources[year].data;
renderer_source.data = new_source_data;
text_source.data = {'year': [String(year)]};
% js_source_array
callback = CustomJS(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback)
callback.args["renderer_source"] = renderer_source
callback.args["slider"] = slider
callback.args["text_source"] = text_source
Explanation: Add the slider and callback
Next we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
End of explanation
# Stick the plot and the slider together
show(layout([[plot], [slider]], sizing_mode='scale_width'))
Explanation: Render together with a slider
Last but not least, we put the chart and the slider together in a layout and display it inline in the notebook.
End of explanation |
12,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download
Step1: Statsmodels
Linear Regression Models
Step2: Time-Series Analysis | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 8</font>
Download: http://github.com/dsacademybr
End of explanation
# For plotting charts
from pylab import *
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels as st
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore")
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
np.__version__
pd.__version__
st.__version__
# Creating artificial data
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
X = sm.add_constant(X)
y = np.dot(X, beta) + e
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
res = sm.OLS(y, X).fit()
print(res.summary())
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best')
Explanation: Statsmodels
Linear Regression Models
End of explanation
from statsmodels.tsa.arima_process import arma_generate_sample
# Generating data
np.random.seed(12345)
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
# Parameters
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = pd.Series(y, index=dates)
arma_mod = sm.tsa.ARMA(y, order=(2,2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
Explanation: Time-Series Analysis
End of explanation |
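As a small follow-up sketch (not part of the original notebook, and assuming a statsmodels version that still ships the sm.tsa.ARMA API used above), the fitted results object can also produce out-of-sample forecasts:
# Forecast the next 10 observations from the fitted ARMA(2,2) model.
forecast, stderr, conf_int = arma_res.forecast(steps=10)
print(forecast)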
12,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised learning
In unsupervised learning problems, we only have
input data x(i) with no labels, and we want the algorithm to find some structure in the data. A clustering
algorithm such as the k-means algorithm attempts to group the data into k “clusters” which have some
similarity.
Examples
Step1: There is a problem.
How do we find the best number of clusters?
How should we choose the number of clusters?
Elbow method
Step2: Hierarchical (Agglomerative) Clustering
Initially each point is a cluster itself
Repeatedly combine the two nearest clusters into one.
Stop when there is just one cluster left.
Step3: Spectral Clustering
TODO
Step4: DBSCAN | Python Code:
from sklearn.datasets import make_blobs
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
plt.figure(figsize=(12, 12))
n_samples = 1500
random_state = 1760
X, y = make_blobs(n_samples=n_samples, random_state=random_state)
kmeans = KMeans(n_clusters=3)
kmeans.fit_predict(X)
y_pred = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.show()
kmeans.score(X)
plt.figure(figsize=(12, 12))
kmeans = KMeans(n_clusters=2)
kmeans.fit_predict(X)
y_pred = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.show()
kmeans.score(X)
plt.figure(figsize=(12, 12))
kmeans = KMeans(n_clusters=5)
kmeans.fit_predict(X)
y_pred = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.show()
Explanation: Unsupervised learning
In unsupervised learning problems, we only have
input data x(i) with no labels, and we want the algorithm to find some structure in the data. A clustering
algorithm such as the k-means algorithm attempts to group the data into k “clusters” which have some
similarity.
Examples : Social Network Analysis, Organizing Computer
Clusters, and Astronomical Data Analysis.
Clustering
Given a set of data points, group them into some clusters so that:
* points within each cluster are similar to each other
* points from different clusters are dissimilar
Similarity is defined using a distance measure:
* Euclidean
* Cosine
* Edit distance
* ...
Different Types of Clustering
Connectivity based (Hierarchical) clustering:
Agglomerative
Centroid-based clustering
Kmeans
Distribution-based clustering
Gaussian Mixture
Density-based clustering
DBSCAN
K-Means
The Κ-means clustering algorithm uses iterative refinement to produce a final result. The algorithm inputs are the number of clusters Κ and the data set. The data set is a collection of features for each data point. The algorithm starts with initial estimates for the Κ centroids, which can either be randomly generated or randomly selected from the data set. The algorithm then iterates between two steps:
1. Data assignment step:
Each centroid defines one of the clusters. In this step, each data point is assigned to its nearest centroid, based on the squared Euclidean distance. More formally, if ci is the collection of centroids in set C, then each data point x is assigned to a cluster based on
$$\underset{c_i \in C}{\arg\min} \; dist(c_i,x)^2$$
2. Centroid update step:
In this step, the centroids are recomputed. This is done by taking the mean of all data points assigned to that centroid's cluster.
$$c_i=\frac{1}{|S_i|}\sum_{x_i \in S_i} x_i$$
End of explanation
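For illustration only (the cells above rely on sklearn's KMeans), the two steps described here can be sketched directly in NumPy; this toy version is an assumption-laden sketch and assumes no cluster ever becomes empty:
def kmeans_sketch(X, k, n_iters=10, seed=0):
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # initial estimates
    for _ in range(n_iters):
        # data assignment step: nearest centroid by squared Euclidean distance
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # centroid update step: mean of the points assigned to each cluster
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids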
scores = []
for i in range(1,10):
kmeans = KMeans(n_clusters=i)
kmeans.fit_predict(X)
scores.append([i,kmeans.score(X)])
scores
import pandas as pd
plt.figure(figsize=(12,6))
plt.plot(pd.DataFrame(np.array(scores))[0],pd.DataFrame(np.array(scores))[1])
plt.show()
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
Explanation: There is a problem.
How do we find the best number of clusters?
How should we choose the number of clusters?
Elbow method
End of explanation
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
plt.title('The iris dataset')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.scatter(X[:, 0], X[:, 1], s=10, c=y, cmap='rainbow')
Z = linkage(X, 'ward')
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(Z, leaf_rotation=90., leaf_font_size=8.);
plt.show();
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(Z, leaf_rotation=90., leaf_font_size=8., truncate_mode='lastp');
from sklearn.datasets import make_moons, make_circles
from sklearn.cluster import AgglomerativeClustering
f, ax = plt.subplots(1, 2, figsize=(13, 5))
X, y = make_moons(1000, noise=.05)
cl = AgglomerativeClustering(n_clusters=2).fit(X)
ax[0].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
X, y = make_circles(1000, factor=.5, noise=.05)
cl = AgglomerativeClustering(n_clusters=2).fit(X)
ax[1].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
Explanation: Hierarchical (Agglomerative) Clustering
Initially each point is a cluster itself
Repeatedly combine the two nearest clusters into one.
Stop when there is just one cluster left.
End of explanation
from sklearn.datasets import make_moons, make_circles
from sklearn.cluster import SpectralClustering
f, ax = plt.subplots(1, 2, figsize=(13, 5))
X, y = make_moons(1000, noise=.05)
cl = SpectralClustering(n_clusters=2, affinity='nearest_neighbors').fit(X)
ax[0].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
X, y = make_circles(1000, factor=.5, noise=.05)
cl = SpectralClustering(n_clusters=2, affinity='nearest_neighbors').fit(X)
ax[1].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
Explanation: Spectral Clustering
Spectral clustering builds a similarity graph over the points (here via nearest neighbours) and clusters the leading eigenvectors of the graph Laplacian, which is why it can separate non-convex shapes such as the moons and circles above.
End of explanation
from sklearn.datasets import make_moons, make_circles
from sklearn.cluster import DBSCAN
f, ax = plt.subplots(1, 2, figsize=(13, 5))
X, y = make_moons(1000, noise=.05)
cl = DBSCAN(eps=0.1).fit(X)
ax[0].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
X, y = make_circles(1000, factor=.5, noise=.05)
cl = DBSCAN(eps=0.1).fit(X)
ax[1].scatter(X[:, 0], X[:, 1], c=cl.labels_, s=10, cmap='rainbow');
Explanation: DBSCAN
End of explanation |
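A brief note on the parameters used above (not part of the original notebook): DBSCAN grows a cluster from points that have at least min_samples neighbours within radius eps, and points that fit nowhere are labelled as noise with label -1. For example:
print('points labelled as noise:', np.sum(cl.labels_ == -1))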
12,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The harmonic oscillator.
We draw the phase space for the equation $$\ddot{x} = -\omega^2x$$
To do so, we rewrite it as a system
Step1: The pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta)$$
To do so, we rewrite it as a system
Step2: The damped pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta$$
To do so, we rewrite it as a system
Step3: The spring: longitudinal oscillations.
We draw the phase space for the equation $$\ddot{y} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y$$
To do so, we rewrite it as a system | Python Code:
@interact(xin=(-5,5,0.1),yin=(-5,5,0.1))
def plotInt(xin,yin):
xmax = 2
vmax = 5
x = linspace(-xmax, xmax, 15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-vmax, vmax, 15)
X, V = meshgrid(x,v) # Creamos una grilla con eso
# Definimos las constantes
w = 3
# Definimos las ecuaciones
Vp = -w**2*X
Xp = V
def resorte(y, t):
yp = y[1]
vp = -w**2*y[0]
return [yp, vp]
x0 = [xin, yin]
t = linspace(0,10,2000)
sh = integrate.odeint(resorte, x0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(X, V, Xp, Vp, angles='xy')
plot(x, [0]*len(x) ,[0]*len(v), v)
lfase = plot(sh[:,0],sh[:,1],'.')
ylim((-vmax,vmax))
xlim((-xmax,xmax))
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$x$', fontsize=16)
ylabel('$\\dot{x}$',fontsize=16)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
ylim((-xmax, xmax))
Explanation: The harmonic oscillator.
We draw the phase space for the equation $$\ddot{x} = -\omega^2x$$
To do so, we rewrite it as a system:
$$
\begin{cases}
\dot{V_{x}} = -\omega^2 x\\
\dot{x} = V_{x}
\end{cases}
$$
End of explanation
@interact(thI=(0,np.pi,0.1),vI=(0,5,0.1))
def plotInt(thI, vI):
h = linspace(-pi,pi,15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-10,10,15)
H, V = meshgrid(h,v) # Creamos una grilla con eso
# Definimos las constantes
g = 10
l = 1
# Definimos las ecuaciones
Vp = -g/l*sin(H)
Hp = V
def pendulo(y, t):
hp = y[1]
vp = -g/l*sin(y[0])
return [hp, vp]
y0 = [thI, vI]
t = linspace(0,10,2000)
sh = integrate.odeint(pendulo, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(H, V, Hp, Vp, angles='xy')
plot(h, [0]*len(h) ,[0]*len(v), v)
sh[:,0] = np.mod(sh[:,0] + np.pi, 2*np.pi) - np.pi
lfase = plot(sh[:,0], sh[:,1],'.')
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$\\theta$', fontsize=16)
ylabel('$\\dot{\\theta}$', fontsize=16)
xlim((-pi,pi))
ylim((-10,10))
xtick = arange(-1,1.5,0.5)
x_label = [ r"$-\pi$",
r"$-\frac{\pi}{2}$", r"$0$",
r"$+\frac{\pi}{2}$", r"$+\pi$",
]
ax1.set_xticks(xtick*pi)
ax1.set_xticklabels(x_label, fontsize=20)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
ylim((-pi, pi))
ytick = [-pi, 0, pi]
y_label = [ r"$-\pi$", r"$0$", r"$+\pi$"]
ax2.set_yticks(ytick)
ax2.set_yticklabels(y_label, fontsize=20)
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
Explanation: The pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta)$$
To do so, we rewrite it as a system:
$$
\begin{cases}
\dot{V_{\theta}} = -\frac{g}{l}sin(\theta)\\
\dot{\theta} = V_{\theta}
\end{cases}
$$
End of explanation
@interact(th0=(-2*np.pi,2*np.pi,0.1),v0=(-2,2,0.1))
def f(th0 = np.pi/3, v0 = 0):
h = linspace(-pi,pi,15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-10,10,15)
H, V = meshgrid(h,v) # Creamos una grilla con eso
# Definimos las constantes
g = 10
l = 1
ga = 0.5
# Definimos las ecuaciones
Vp = -g/l*sin(H) - ga*V #SOLO CAMBIA ACA
Hp = V
def pendulo(y, t):
hp = y[1]
vp = -g/l*sin(y[0]) - ga* y[1] # Y ACAA
return [hp, vp]
y0 = [th0, v0]
t = linspace(0,10,2000)
sh = integrate.odeint(pendulo, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(H, V, Hp, Vp, angles='xy')
plot(h, [0]*len(h) , h , -g/l/ga*sin(h)) # Dibujar nulclinas
lfase = plot(sh[:,0],sh[:,1],'.')
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$\\theta$', fontsize=16)
ylabel('$\\dot{\\theta}$',fontsize=16)
xlim((-pi,pi))
ylim((-10,10))
xtick = arange(-1,1.5,0.5)
x_label = [ r"$-\pi$",
r"$-\frac{\pi}{2}$", r"$0$",
r"$+\frac{\pi}{2}$", r"$+\pi$",
]
ax1.set_xticks(xtick*pi)
ax1.set_xticklabels(x_label, fontsize=20)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
ylim((-pi, pi))
ytick = [-pi, 0, pi]
y_label = [ r"$-\pi$", r"$0$", r"$+\pi$"]
ax2.set_yticks(ytick)
ax2.set_yticklabels(y_label, fontsize=20)
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
Explanation: The damped pendulum
We draw the phase space for the equation $$\ddot{\theta} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta$$
To do so, we rewrite it as a system:
$$
\begin{cases}
\dot{V_{\theta}} = -\frac{g}{l}sin(\theta) - \gamma \dot \theta\\
\dot{\theta} = V_{\theta}
\end{cases}
$$
End of explanation
@interact(x0=(-1,1,0.1),v0=(0,1,0.1))
def f(x0=0,v0=1):
ymax = 2
vmax = 5
y = linspace(-ymax, ymax, 15) # Definimos el rango en el que se mueven las variables y el paso
v = linspace(-vmax, vmax, 15)
Y, V = meshgrid(y,v) # Creamos una grilla con eso
# Definimos las constantes
k = 10
l = 1
l0 = 1.2
m = 1
# Definimos las ecuaciones
Vp = -2*k/m*(1-l0/(sqrt(Y**2+l**2)))*Y
Yp = V
def resorte(y, t):
yp = y[1]
vp = -2*k/m*(1-l0/(sqrt(y[0]**2+l**2)))*y[0]
return [yp, vp]
y0 = [x0, v0]
t = linspace(0,10,2000)
sh = integrate.odeint(resorte, y0, t)
fig = figure(figsize(10,5))
ax1 = subplot(121) # Hacer el grafico
quiver(Y, V, Yp, Vp, angles='xy')
plot(y, [0]*len(y) ,[0]*len(v), v)
lfase = plot(sh[:,0],sh[:,1],'.')
ylim((-vmax,vmax))
xlim((-ymax,ymax))
# Retocarlo: tamanios, colores, leyendas, etc...
xlabel('$y$', fontsize=16)
ylabel('$\\dot{y}$', fontsize=16)
ax1.set_title('Espacio de fases')
ax2 = subplot(122) # Hacer otro grafico
lines = plot(t,sh )
xlabel('Tiempo [s]')
ax2.set_title('Espacio de tiempo')
legend(['Posicion','Velocidad'])
tight_layout()
ylim((-ymax, ymax))
Explanation: The spring: longitudinal oscillations.
We draw the phase space for the equation $$\ddot{y} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y$$
To do so, we rewrite it as a system:
$$
\begin{cases}
\dot{V_{y}} = -2\frac{k}{m}\left(1-\frac{l_0}{\sqrt{y^2+l^2}}\right)y\\
\dot{y} = V_{y}
\end{cases}
$$
End of explanation |
12,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Example from Think Stats
http
Step4: Central Limit Theorem
If you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution.
The following function generates samples with different sizes from an exponential distribution.
Step6: This function generates normal probability plots for samples with various sizes.
Step7: The following plot shows how the sum of exponential variates converges to normal as sample size increases.
Step9: The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.
Step11: The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.
Step15: If the random variates are correlated, that also violates the CLT, so the sums don't generally converge.
To generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.
Step16: Exercises
Exercise | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def decorate(**options):
Decorate the current axes.
Call decorate with keyword arguments like
decorate(title='Title',
xlabel='x',
ylabel='y')
The keyword arguments can be any of the axis properties
https://matplotlib.org/api/axes_api.html
ax = plt.gca()
ax.set(**options)
handles, labels = ax.get_legend_handles_labels()
if handles:
ax.legend(handles, labels)
plt.tight_layout()
def normal_probability_plot(sample, fit_color='0.8', **options):
Makes a normal probability plot with a fitted line.
sample: sequence of numbers
fit_color: color string for the fitted line
options: passed along to Plot
n = len(sample)
xs = np.random.normal(0, 1, n)
xs.sort()
mean, std = np.mean(sample), np.std(sample)
fit_ys = mean + std * xs
plt.plot(xs, fit_ys, color=fit_color, label='model')
ys = np.array(sample, copy=True)
ys.sort()
plt.plot(xs, ys, **options)
Explanation: Example from Think Stats
http://thinkstats2.com
Copyright 2019 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
def make_expo_samples(beta=2.0, iters=1000):
Generates samples from an exponential distribution.
beta: parameter
iters: number of samples to generate for each size
returns: map from sample size to sample
samples = {}
for n in [1, 10, 100]:
sample = [np.sum(np.random.exponential(beta, n))
for _ in range(iters)]
samples[n] = sample
return samples
Explanation: Central Limit Theorem
If you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution.
The following function generates samples with different sizes from an exponential distribution.
End of explanation
def normal_plot_samples(samples, ylabel=''):
Makes normal probability plots for samples.
samples: map from sample size to sample
label: string
plt.figure(figsize=(8, 3))
plot = 1
for n, sample in samples.items():
plt.subplot(1, 3, plot)
plot += 1
normal_probability_plot(sample)
decorate(title='n=%d' % n,
xticks=[],
yticks=[],
xlabel='Random normal variate',
ylabel=ylabel)
Explanation: This function generates normal probability plots for samples with various sizes.
End of explanation
samples = make_expo_samples()
normal_plot_samples(samples, ylabel='Sum of expo values')
Explanation: The following plot shows how the sum of exponential variates converges to normal as sample size increases.
End of explanation
def make_lognormal_samples(mu=1.0, sigma=1.0, iters=1000):
Generates samples from a lognormal distribution.
mu: parmeter
sigma: parameter
iters: number of samples to generate for each size
returns: list of samples
samples = {}
for n in [1, 10, 100]:
sample = [np.sum(np.random.lognormal(mu, sigma, n))
for _ in range(iters)]
samples[n] = sample
return samples
samples = make_lognormal_samples()
normal_plot_samples(samples, ylabel='sum of lognormal values')
Explanation: The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.
End of explanation
def make_pareto_samples(alpha=1.0, iters=1000):
Generates samples from a Pareto distribution.
alpha: parameter
iters: number of samples to generate for each size
returns: list of samples
samples = {}
for n in [1, 10, 100]:
sample = [np.sum(np.random.pareto(alpha, n))
for _ in range(iters)]
samples[n] = sample
return samples
samples = make_pareto_samples()
normal_plot_samples(samples, ylabel='sum of Pareto values')
Explanation: The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.
End of explanation
def generate_correlated(rho, n):
Generates a sequence of correlated values from a standard normal dist.
rho: coefficient of correlation
n: length of sequence
returns: iterator
x = np.random.normal(0, 1)
yield x
sigma = np.sqrt(1 - rho**2)
for _ in range(n-1):
x = np.random.normal(x * rho, sigma)
yield x
from scipy.stats import norm
from scipy.stats import expon
def generate_expo_correlated(rho, n):
Generates a sequence of correlated values from an exponential dist.
rho: coefficient of correlation
n: length of sequence
returns: NumPy array
normal = list(generate_correlated(rho, n))
uniform = norm.cdf(normal)
expo = expon.ppf(uniform)
return expo
def make_correlated_samples(rho=0.9, iters=1000):
Generates samples from a correlated exponential distribution.
rho: correlation
iters: number of samples to generate for each size
returns: list of samples
samples = {}
for n in [1, 10, 100]:
sample = [np.sum(generate_expo_correlated(rho, n))
for _ in range(iters)]
samples[n] = sample
return samples
samples = make_correlated_samples()
normal_plot_samples(samples,
ylabel='Sum of correlated exponential values')
Explanation: If the random variates are correlated, that also violates the CLT, so the sums don't generally converge.
To generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercises
Exercise: In Section 5.4, we saw that the distribution of adult weights is approximately lognormal. One possible explanation is that the weight a person gains each year is proportional to their current weight. In that case, adult weight is the product of a large number of multiplicative factors:
w = w0 f1 f2 ... fn
where w is adult weight, w0 is birth weight, and fi is the weight gain factor for year i.
The log of a product is the sum of the logs of the factors:
logw = log w0 + log f1 + log f2 + ... + log fn
So by the Central Limit Theorem, the distribution of logw is approximately normal for large n, which implies that the distribution of w is lognormal.
To model this phenomenon, choose a distribution for f that seems reasonable, then generate a sample of adult weights by choosing a random value from the distribution of birth weights, choosing a sequence of factors from the distribution of f, and computing the product. What value of n is needed to converge to a lognormal distribution?
End of explanation |
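One possible sketch for this exercise (the distributions below are assumptions chosen for illustration, not values given in the text):
def simulate_adult_weights(n_years=40, iters=1000):
    # assumed birth weights in kg and yearly multiplicative gain factors near 1
    birth_weights = np.random.normal(3.3, 0.5, iters)
    factors = np.random.normal(1.08, 0.03, (iters, n_years))
    return birth_weights * factors.prod(axis=1)

weights = simulate_adult_weights()
# if the model is right, log-weights should look roughly normal for large n
normal_probability_plot(np.log(weights))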
12,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2 - Stepping up with SciPy
Numpy is a powerful, yet very basic library, which can be a little abstract to introduce -- and a little tedious to practice. To perform more interesting things to Numpy matrices, we now turn to a number of interesting libraries, which have been built around numpy, or which were designed to interact closely with it.
Clustering with Scipy
SciPy stands for 'Scientific Python'
Step1: Loading the data
It is time to get practical! In the data directory in the repository for this course, I have included a corpus representing novels by three famous British authors from the Victorian era
Step2: As you can see we loaded a list of titles, authors, words and a frequency table - which is named capital X. These lists are perfectly matched
Step3: The X matrix which we loaded has frequency information for these texts, concerning the 100 most frequent words in the texts. Each column in X corresponds to the relative frequencies for a particular word
Step4: The list of words matches the names of the columns in our frequency table. To select the frequencies for the pronoun 'my' in each text, we could therefore do
Step5: If you are interested in getting a version of this matrix which is easier to deal with, pandas is an interesting library. Basically, it wraps a lot of functionality around numpy matrixes, and makes it easier to access, for instance, columns using actual names, instead of less intuitive indices. Thus, it brings a lot of functionality to Python which you might know from e.g. R. Pandas is imported as pd conventionally
Step6: To turn X into a pandas DataFrame (which is the most important object in pandas), we could do this
Step7: This command will construct a nice table out of our data matrix, which can be easily indexed. The example with 'my' above
Step8: One very nice property of pandas is that it can be easily used to move around data in a variety of formats (which is what I mainly use it for). Creating a LaTeX representation of this matrix, for instance, is super-easy
Step9: Saving and writing data is also possible for a whole bunch of other formats, including Excel, csv, etc.
Clustering
One common methodology in stylometry is clustering
Step10: We can now run this function on our corpus. To obtain a nice and clean matrix, we apply squareform() to pdist()'s
Step11: As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero
Step12: Additionally, the distance from text A to text B, is equal to the distance from B to A
Step13: To be able to visualize a dendrogram, we must first take care of the linkages in the tree
Step14: Here, we specify that we wish to use Ward's linkage method, which is one of the most common linkage functions in stylometry. We are now ready to draw the actual dendrogram. To make sure that our plots are properly displayed in the notebook, we must first execute this line
Step15: We can now draw our dendrogram. Note that we annotate the outer leaf nodes in our tree (i.e. the texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read
Step16: Using the authors as labels is of course also a good idea
Step17: As we can see, Jane Austen's novels form a tight and distinctive cloud; apparently Dickens and Thackeray are more difficult to tell apart. The actual distance between nodes is hinted at on the horizontal length of the branches (i.e. the values on the x-axis in this plot). The previous code blocks used the Manhattan city block distance, a very simple distance metric which is also used in the calculation of Burrows's Delta. Note that we can easily switch to, for instance, the Euclidean distance
Step18: Matplotlib is still the standard plotting library for Python -- and it is in fact the one which is used to produce the dendrograms above. Nevertheless, it is not particularly aesthetically pleasing. One interesting alternative which has recently surfaced is seaborn
Step19: Interestingly, seaborn comes with a series of interesting visualization options. One option which I have used in the recent past, is its clustermap(). When passing it a data set (such as our distance matrix dm above), it will draw a heatmap, and then annotate the axis with cluster tree. In the following code block, we show how we can use seaborn to create such a clustermap. Note that we first convert the distance matrix into a pandas DataFrame.
Step20: This clustermap offers an excellent visualization of the stylistic structure in our data. Apart from the cluster trees which we already obtained, the lighter areas in the heatmap intuitively point to specific text combinations that display a lot of stylistic affinity.
A simple Burrows's Delta in SciPy
SciPy is flexible enough to be useful for a variety of analyses. One interesting application which is easy to implement with SciPy is Burrows's Delta, a well-known attribution method in stylometry. We select a single text as an 'unknown' test text (test_vector), which we will try to attribute to one of the authors in our corpus. We therefore split our data
Step21: Thus, we obtain a single test vector, consisting of 30 features, and an 8 by 30 matrix of training texts
Step22: This metric can be used to calculate the cityblock distance between any two vectors, as follows
Step23: We could have easily coded this ourselves
Step24: We now proceed to calculate the distance between our anonymous text and all texts in our training set
Step25: Or with a list comprehension
Step26: As you can see, this yields a list of 8 values
Step27: Apparently, the smallest distance we obtain is close to 1.04. (Note how numpy automatically casts our list of distances to an array.) As to the largest distance
Step28: At this point, however, we still don't know to which exact training item our test item is closest. For this purpose, we can then use the argmin() function. This will not return the actual minimal distance, but rather the index at which the smallest value can be found
Step29: Apparently, the test vector's nearest neighbour can be found in first position; this is in fact a text by the author we were looking for
Step30: With pdist('cityblock'), we can calculate the pairwise distances between all rows in a single matrix; using sp.spatial.distance.cityblock(), we can calculate the same Manhattan distance between two vectors. What can we do when we would like to calculate the distances between all the rows in one matrix, to all the rows in another matrix? This is a valid question, since often we would like to attribute more than one anonymous text to a series of candidate authors. This is where scipy's cdist() function comes in handy
Step31: Let us pretend that our test_vector is in fact a list of anonymous texts
Step32: Using cdist() we can now calculate the distance between all our test items and all our train items
Step33: As you can see, we now obtain a 2 x 8 matrix, which holds, for every test text (first dimension), the distance to all training items (second dimension). To find the training indices which minimize the distance for each test item we can again use argmin(). Note that here we have to specify axis=1, because we are interested in the minima in the second dimension | Python Code:
import scipy as sp
Explanation: Chapter 2 - Stepping up with SciPy
Numpy is a powerful, yet very basic library, which can be a little abstract to introduce -- and a little tedious to practice. To perform more interesting things to Numpy matrices, we now turn to a number of interesting libraries, which have been built around numpy, or which were designed to interact closely with it.
Clustering with Scipy
SciPy stands for 'Scientific Python': as its name suggests, this library extends Numpy's raw number crunching capabilities, with interesting scientific functionality, including the calculation of distances between vectors, or common statistical tests. Scipy is commonly imported under the name sp:
End of explanation
import pickle
titles, authors, words, X = pickle.load(open("dummy.p", "rb"))
Explanation: Loading the data
It is time to get practical! In the data directory in the repository for this course, I have included a corpus representing novels by three famous British authors from the Victorian era: Jane Austen, Charles Dickens, and William Thackeray. In the next code block, I load these texts and turn them into a vectorized matrix. You can simply execute the code block and ignore it for the time being. In the next chapter, we will dig deeper into the topic of vectorization.
End of explanation
print('This dummy corpus holds:')
for title, author in zip(titles, authors):
print('\t-', title, 'by', author)
Explanation: As you can see we loaded a list of titles, authors, words and a frequency table - which is named capital X. These lists are perfectly matched: the authors and titles can for instance be easily zipped together:
End of explanation
print(X.shape)
Explanation: The X matrix which we loaded has frequency information for these texts, concerning the 100 most frequent words in the texts. Each column in X corresponds to the relative frequencies for a particular word:
End of explanation
idx_my = words.index('my')
freqs_my = X[:, idx_my]
print(freqs_my)
Explanation: The list of words matches the names of the columns in our frequency table. To select the frequencies for the pronoun 'my' in each text, we could therefore do:
End of explanation
import pandas as pd
Explanation: If you are interested in getting a version of this matrix which is easier to deal with, pandas is an interesting library. Basically, it wraps a lot of functionality around numpy matrixes, and makes it easier to access, for instance, columns using actual names, instead of less intuitive indices. Thus, it brings a lot of functionality to Python which you might know from e.g. R. Pandas is imported as pd conventionally:
End of explanation
df = pd.DataFrame(X, columns=words, index=titles)
Explanation: To turn X into a pandas DataFrame (which is the most important object in pandas), we could do this:
End of explanation
df['my']
Explanation: This command will construct a nice table out of our data matrix, which can be easily indexed. The example with 'my' above:
End of explanation
df.to_latex()[:1000]
Explanation: One very nice property of pandas is that it can be easily used to move around data in a variety of formats (which is what I mainly use it for). Creating a LaTeX representation of this matrix, for instance, is super-easy:
End of explanation
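In the same spirit (a quick sketch added for illustration; the filename here is made up), the frequency table can be written out to and read back from CSV:
df.to_csv('frequencies.csv')
df_check = pd.read_csv('frequencies.csv', index_col=0)
print(df_check.shape)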
from scipy.spatial.distance import pdist, squareform
Explanation: Saving and writing data is also possible for a whole bunch of other formats, including Excel, csv, etc.
Clustering
One common methodology in stylometry is clustering: by drawing a tree diagram or 'dendrogram', representing the relationships between the texts in a corpus, we attempt to visualize the main stylistic structure in our data. Texts that cluster together under a similar branch in the resulting diagram can be argued to be stylistically closer to each other than texts which occupy completely different places in the tree. Texts by the same authors, for instance, will often form tight clades in the tree, because they are written in a similar style.
Clustering algorithms are based on the distances between texts: clustering algorithms typically start by calculating the distance between each pair of texts in a corpus, so that we know for each text how (dis)similar it is to any other text. Only after these distances have been fully calculated do we have the clustering algorithm start building a tree representation, in which the similar texts are joined together and merged into new nodes. To create a distance matrix, scipy offers the convenient functions pdist and squareform, which can be used to calculate the pairwise distances between all the rows in a matrix (i.e. all the texts in a corpus, in our case):
End of explanation
dm = squareform(pdist(X, 'cityblock'))
print(dm.shape)
Explanation: We can now run this function on our corpus. To obtain a nice and clean matrix, we apply squareform() to pdist()'s result; like that, we obtain a matrix which has a row as well as a column for each of our original texts. This representation is a bit superfluous, because matrix[i][j] will be identical to matrix[j][i]. This is because most distance metrics in stylometry are symmetric (such as the cityblock or Manhattan distance used below): the distance of document B to document A is equal to the distance from document A to document B.
End of explanation
print(dm[3][3])
print(dm[8][8])
Explanation: As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero:
End of explanation
print(dm[2][3])
print(dm[3][2])
Explanation: Additionally, the distance from text A to text B, is equal to the distance from B to A:
End of explanation
from scipy.cluster.hierarchy import linkage
linkage_object = linkage(dm, method='ward')
Explanation: To be able to visualize a dendrogram, we must first take care of the linkages in the tree: this procedure will start by merging ('linking') the most similar texts in the corpus into a new node; only at a later stage in the tree, these nodes of very similar texts will be joined together with nodes representing other texts. We perform this - fairly abstract - step on our distance matrix as follows:
End of explanation
%matplotlib inline
Explanation: Here, we specify that we wish to use Ward's linkage method, which is one of the most common linkage functions in stylometry. We are now ready to draw the actual dendrogram. To make sure that our plots are properly displayed in the notebook, we must first execute this line:
End of explanation
from scipy.cluster.hierarchy import dendrogram
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=titles, orientation='right')
Explanation: We can now draw our dendrogram. Note that we annotate the outer leaf nodes in our tree (i.e. the texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read:
End of explanation
from scipy.cluster.hierarchy import dendrogram
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
Explanation: Using the authors as labels is of course also a good idea:
End of explanation
dm = squareform(pdist(X, 'euclidean'))
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
Explanation: As we can see, Jane Austen's novels form a tight and distinctive cloud; apparently Dickens and Thackeray are more difficult to tell apart. The actual distance between nodes is hinted at on the horizontal length of the branches (i.e. the values on the x-axis in this plot). The previous code blocks used the Manhattan city block distance, a very simple distance metric which is also used in the calculation of Burrows's Delta. Note that we can easily switch to, for instance, the Euclidean distance:
End of explanation
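As a small optional variation (not part of the original notebook), other linkage criteria can be swapped in just as easily; for example, average linkage on the same Euclidean distances:
linkage_avg = linkage(dm, method='average')
d = dendrogram(Z=linkage_avg, labels=authors, orientation='right')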
import seaborn as sns
Explanation: Matplotlib is still the standard plotting library for Python -- and it is in fact the one which is used to produce the dendrograms above. Nevertheless, it is not particularly aesthetically pleasing. One interesting alternative which has recently surfaced is seaborn: it is in fact not a replacement for matplotlib but rather a better styled version of Matplotlib. It is imported as sns by convention (make sure that you install it first):
End of explanation
import pandas as pd
df = pd.DataFrame(dm, columns=titles, index=titles)
cm = sns.clustermap(df)
Explanation: Interestingly, seaborn comes with a series of interesting visualization options. One option which I have used in the recent past is its clustermap(). When passing it a data set (such as our distance matrix dm above), it will draw a heatmap, and then annotate the axes with cluster trees. In the following code block, we show how we can use seaborn to create such a clustermap. Note that we first convert the distance matrix into a pandas DataFrame.
End of explanation
test_vector = X[0]
X_train = X[1:]
print(test_vector.shape)
print(X_train.shape)
Explanation: This clustermap offers an excellent visualization of the stylistic structure in our data. Apart from the cluster trees which we already obtained, the lighter areas in the heatmap intuitively point to specific text combinations that display a lot of stylistic affinity.
A simple Burrows's Delta in SciPy
SciPy is flexible enough to be useful for a variety of analyses. One interesting application which is easy to implement with SciPy is Burrows's Delta, a well-known attribution method in stylometry. We select a single text as an 'unknown' test text (test_vector), which we will try to attribute to one of the authors in our corpus. We therefore split our data:
End of explanation
from scipy.spatial.distance import cityblock as manhattan
Explanation: Thus, we obtain a single test vector, consisting of 30 features, and an 8 by 30 matrix of training texts: i.e. the texts of our 'known' authors. We can now calculate to which training text our unknown test text is closest, using the Manhattan distance implemented in scipy:
End of explanation
print(manhattan(test_vector, X_train[3]))
Explanation: This metric can be used to calculate the cityblock distance between any two vectors, as follows:
End of explanation
d = 0.0
for a, b in zip(test_vector, X_train[3]):
d += abs(a-b)
print('Distance: ', d)
Explanation: We could have easily coded this ourselves:
End of explanation
dists = []
for v in X_train:
dists.append(manhattan(test_vector, v))
print(dists)
Explanation: We now proceed to calculate the distance between our anonymous text and all texts in our training set:
End of explanation
dists = [manhattan(test_vector, v) for v in X_train]
print(dists)
Explanation: Or with a list comprehension:
End of explanation
import numpy as np
nearest_dist = np.min(dists)
print('Smallest distance: ', nearest_dist)
Explanation: As you can see, this yields a list of 8 values: the respective distances between our test_vector and all training items. Now, we can use some convenient numpy functions to find out which training text shows the minimal distance to our anonymous text:
End of explanation
largest_dist = np.max(dists)
print('Largest distance: ', largest_dist)
Explanation: Apparently, the smallest distance we obtain is close to 1.04. (Note how numpy automatically casts our list of distances to an array.) As to the largest distance:
End of explanation
nn_idx = np.argmin(dists) # index of the nearest neighbour
print('Index of nearest neighbour: ', nn_idx)
Explanation: At this point, however, we still don't know to which exact training item our test item is closest. For this purpose, we can then use the argmin() function. This will not return the actual minimal distance, but rather the index at which the smallest value can be found:
End of explanation
# note: X_train was built from X[1:], so shift the index by one into the full title/author lists
print('Closest text:', titles[nn_idx + 1])
print('Closest author:', authors[nn_idx + 1])
Explanation: Apparently, the test vector's nearest neighbour can be found in the first position; this is in fact a text by the author we were looking for:
End of explanation
from scipy.spatial.distance import cdist
Explanation: With pdist('cityblock'), we can calculate the pairwise distances between all rows in a single matrix; using sp.spatial.distance.cityblock(), we can calculate the same Manhattan distance between two vectors. What can we do when we would like to calculate the distances from all the rows in one matrix to all the rows in another matrix? This is a valid question, since often we would like to attribute more than one anonymous text to a series of candidate authors. This is where scipy's cdist() function comes in handy:
End of explanation
X_test = [test_vector, test_vector]
Explanation: Let us pretend that our test_vector is in fact a list of anonymous texts:
End of explanation
dists = cdist(X_test, X_train, metric='cityblock')
print(dists)
print(dists.shape)
Explanation: Using cdist() we can now calculate the distance between all our test items and all our train items:
End of explanation
np.argmin(dists, axis=1)
Explanation: As you can see, we now obtain a 2 x 8 matrix, which holds, for every test text (first dimension), the distance to all training items (second dimension). To find the training indices which minimize the distance for each test item we can again use argmin(). Note that here we have to specify axis=1, because we are interested in the minima in the second dimension:
End of explanation |
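To close the loop (a small follow-up sketch, not in the original notebook), those indices can be mapped back to the candidate authors; remember that X_train was built from X[1:], so we align the author labels accordingly:
nn_indices = np.argmin(dists, axis=1)
train_authors = authors[1:]  # X_train = X[1:], so shift the author list by one
for i, idx in enumerate(nn_indices):
    print('Test item', i, 'is attributed to', train_authors[idx])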
12,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train Model on Distributed Cluster
IMPORTANT
Step1: Start Server "Task 0" (localhost
Step2: Start Server "Task 1" (localhost
Step3: Define Compute-Heavy TensorFlow Graph
Step4: Define Shape
Step5: Assign Devices Manually
All CPU Devices
Note the execution time. | Python Code:
import tensorflow as tf
cluster = tf.train.ClusterSpec({"local": ["localhost:2222", "localhost:2223"]})
Explanation: Train Model on Distributed Cluster
IMPORTANT: You Must STOP All Kernels and Terminal Session
The GPU is wedged at this point. We need to set it free!!
Define ClusterSpec
End of explanation
server0 = tf.train.Server(cluster, job_name="local", task_index=0)
print(server0)
Explanation: Start Server "Task 0" (localhost:2222)
Note: If you see UnknownError: Could not start gRPC server, then you have already started the server. Please ignore this.
End of explanation
server1 = tf.train.Server(cluster, job_name="local", task_index=1)
print(server1)
Explanation: Start Server "Task 1" (localhost:2223)
Note: If you see UnknownError: Could not start gRPC server, then you have already started the server. Please ignore this.
End of explanation
import tensorflow as tf
n = 2  # recursion depth used by matpow below
c1 = tf.Variable([])  # placeholder; reassigned to the real op below
c2 = tf.Variable([])  # placeholder; reassigned to the real op below
def matpow(M, n):
    # Recursively chain n matrix multiplications: M x M x ... x M
    if n < 1:
        return M
    else:
        return tf.matmul(M, matpow(M, n-1))
Explanation: Define Compute-Heavy TensorFlow Graph
End of explanation
shape=[2500, 2500]
Explanation: Define Shape
End of explanation
import datetime
with tf.device("/job:local/task:0/cpu:0"):
A = tf.random_normal(shape=shape)
c1 = matpow(A,n)
with tf.device("/job:local/task:1/cpu:0"):
B = tf.random_normal(shape=shape)
c2 = matpow(B,n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
sum = c1 + c2
start_time = datetime.datetime.now()
print(sess.run(sum))
print("Execution time: "
+ str(datetime.datetime.now() - start_time))
Explanation: Assign Devices Manually
All CPU Devices
Note the execution time.
End of explanation |
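For comparison (a sketch added here, not in the original notebook; it assumes the same two-task cluster is still running), the same workload can be pinned to a single task to contrast the execution time:
with tf.device("/job:local/task:0/cpu:0"):
    C = tf.random_normal(shape=shape)
    D = tf.random_normal(shape=shape)
    single = matpow(C, n) + matpow(D, n)
with tf.Session("grpc://127.0.0.1:2222") as sess:
    start_time = datetime.datetime.now()
    sess.run(single)
    print("Single-task execution time: "
          + str(datetime.datetime.now() - start_time))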
12,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TUTORIAL coupling NPZD2 and Mussels
First create a new file by "Saving-as" NPDZ.py witha new name... lets call the new file NPZD2_Mussels.py
From now on, we are going to copy-paste from Mussels.py to the new NPZD2_Mussels.py
Copy-paste parameters from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" parameters just below the "NPZD2" parameters. The new code should look like this
Step1: Copy-paste InitCond from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" InitCond just below the "NPZD2" InitCond.
Note that now there are duplicate InitCond... do not copy the Mussel InitCond that are already in NPZD2 (i.e. Temp, Phy, Zoo, SDet)
The new code should look like this
Step2: Copy-paste zero vectors from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" zero-vectors just below the "NPZD2" zero-vectors.
Same as above, note that now there are duplicate zero-vectors... do not copy the Mussel zero-vectors that are already in NPZD2 (i.e. Temp, Phy, Zoo, SDet)
The new code should look like this
Step3: Copy-paste the section "Initializing with initial conditions" from Mussels.py to NPZD2_Mussels.py
Copy-paste the "Mussel" section just below the one for "NPZD2".
The new code should look like this
Step4: Copy-paste "Update and step" from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" "Update and step" just below the one for "NPZD2".
The new code should look like this
Step5: Copy-paste MAIN MODEL LOOP from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" MAIN MODEL LOOP just below the one for "NPZD2"
The new code should look like this
Step6: Copy-paste section "Pack to output" from Mussels.py to NPZD2_Mussels.py Copy-paste the "Mussel" section just below the one for "NPZD2". Note that they are duplicates, son;t copy the duplicates (just Soma, Gonad, B, and Spawning).
The new code should look like this
Step7: Update plot by adding to the plotting code for ax2
Step8: Run the model so that you can see a plot.
Anything changed?
Is total nitrogen conserved?
Are you sure?
"B" is not accounted for in the Total Nitrogen calculation.
Add "B[t+1]" to the Total Nitrogen calculation. The new code should look like this
Step9: Run the model to plot again
Is total nitrogen conserved?
We added Mussels to NPZD2, but they are not connected yet. "Food" is eaten by the mussels but are not taken out of the NPZD2 pool.
To connect them, we have to add extra terms to our original equations. Below, as an example, es the original Phytoplankton equation and then the new equation with an added term to account for mussel filtration.
(original from Fennel et al. 2006)
$$ \frac{\partial Phy}{\partial t} = \mu Phy - gZoo - m_P Phy - \tau (SDet + Phy)Phy$$
(new) $$ \frac{\partial Phy}{\partial t} = \mu Phy - gZoo - m_P Phy - \tau (SDet + Phy)Phy - F \epsilon_P Phy$$
To actually code the "feedback", we need to add code to add the Faeces to LDet, remove the eaten Phy/Zoo/SDet from the water column, and excrete into NH4. The new code goes after "Spawning" but before "Update and step". Also, we need to add spawning to Zoo.
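By analogy (spelled out here for completeness; the feedback code below implements exactly these terms), the grazed pools lose $F \epsilon_Z Zoo$ and $F \epsilon_D SDet$ to filtration, the faeces and pseudofaeces are added to LDet, and mussel respiration R is returned to NH4.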
The new code should look like this | Python Code:
# Parameters
par = {}
par['mu0'] = 0.69
par['kNO3'] = 0.5
par['kNH4'] = 0.5
par['alpha'] = 0.125
par['gmax'] = 0.6 #Original 0.6
par['kP'] = 0.44
par['mP'] = 0.15
par['tau'] = 0.005
par['thetaMax'] = 0.053
par['beta'] = 0.75
par['lBM'] = 0.1
par['lE'] = 0.1
par['mZ'] = 0.25
par['rSD'] = 0.3 # Original 0.03
par['rLD'] = 0.1 # # Original 0.01
par['nmax'] = 0.05
par['kI'] = 0.1
par['I0'] = 0.0095
# Parameters Mussels
par['AE_P'] = 0.9
par['AE_D'] = 0.2
par['AE_Z'] = 0.3
par['Bpub'] = 0.43
par['Fmax_ref'] = 0.025
par['GT'] = 0.44
par['KTempH'] = 0.1
par['KTempL'] = 0.5
par['KSaltL'] = 0.25
par['KOxyL'] = 0.02
par['KFood'] = 1.
par['KRE'] = 0.86
par['OxyL'] = 17.5
par['Rm'] = 0.002
par['SaltL'] = 10.
par['TempH'] = 25.
par['TempL'] = -4.
par['beta'] = 0.12  # note: this reassigns 'beta' and overwrites the NPZD2 value (0.75) set above
par['epsilonP'] = 1.
par['epsilonD'] = 0.5
par['epsilonZ'] = 0.3
Explanation: TUTORIAL coupling NPZD2 and Mussels
First create a new file by "Saving-as" the NPZD2 model file with a new name... let's call the new file NPZD2_Mussels.py
From now on, we are going to copy-paste from Mussels.py to the new NPZD2_Mussels.py
Copy-paste parameters from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" parameters just below the "NPZD2" parameters. The new code should look like this:
End of explanation
# Initial conditions
InitCond = {}
InitCond['Phy'] = 0.2
InitCond['Zoo'] = 0.1
InitCond['SDet'] = 1.
InitCond['LDet'] = 1.
InitCond['NH4'] = 0.1
InitCond['NO3'] = 7.
InitCond['Temp'] = 6.
#InitCond['O2'] = 0.5
InitCond['Soma'] = 0.01
InitCond['Gonad'] = 0.
InitCond['Salt'] = 30. #Salinity
InitCond['Oxy'] = 30. #Oxygen
Explanation: Copy-paste InitCond from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" InitCond just below the "NPZD2" InitCond.
Note that now there are duplicate InitCond... do not copy the Mussel InitCond that are already in NPZD2 (i.e. Temp, Phy, Zoo, SDet)
The new code should look like this:
End of explanation
# Create zero-vectors
Phy = np.zeros((NoSTEPS,),float) # makes a vector array of zeros (size: NoSTEPS rows by ONE column)
Zoo = np.zeros((NoSTEPS,),float) # same as above
SDet = np.zeros((NoSTEPS,),float) # Biomass - same as above
LDet = np.zeros((NoSTEPS,),float) # same as above
NH4 = np.zeros((NoSTEPS,),float) # same as above
NO3 = np.zeros((NoSTEPS,),float) # same as above
I = np.zeros((NoSTEPS,),float) # same as above
mu = np.zeros((NoSTEPS,),float) # same as above
f_I = np.zeros((NoSTEPS,),float) # same as above
L_NO3 = np.zeros((NoSTEPS,),float) # same as above
L_NH4 = np.zeros((NoSTEPS,),float) # same as above
TotN = np.zeros((NoSTEPS,),float) # same as above
# Mussels
Soma = np.zeros((NoSTEPS,),float) # makes a vector array of zeros (size: NoSTEPS rows by ONE column)
Gonad = np.zeros((NoSTEPS,),float) # same as above
B = np.zeros((NoSTEPS,),float) # Biomass - same as above
L_Temp = np.zeros((NoSTEPS,),float) # same as above
L_Salt = np.zeros((NoSTEPS,),float) # same as above
L_Oxy = np.zeros((NoSTEPS,),float) # same as above
L_Food = np.zeros((NoSTEPS,),float) # same as above
F = np.zeros((NoSTEPS,),float) # same as above
A = np.zeros((NoSTEPS,),float) # same as above
R = np.zeros((NoSTEPS,),float) # same as above
RE = np.zeros((NoSTEPS,),float) # same as above
Spawning = np.zeros((NoSTEPS,),float) # same as above
Explanation: Copy-paste zero vectors from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" zero-vectors just below the "NPZD2" zero-vectors.
Same as above, note that now there are duplicate zero-vectors... do not copy the Mussel zero-vectors that are already in NPZD2 (i.e. Temp, Phy, Zoo, SDet)
The new code should look like this:
End of explanation
# Initializing with initial conditions
Phy[0] = InitCond['Phy']
Zoo[0] = InitCond['Zoo']
SDet[0] = InitCond['SDet']
LDet[0] = InitCond['LDet']
NH4[0] = InitCond['NH4']
NO3[0] = InitCond['NO3']
# Mussels
Soma[0] = InitCond['Soma']
Gonad[0] = InitCond['Soma']
B[0] = InitCond['Soma'] + InitCond['Gonad']
Spawning[0] = 0.
Salt = np.ones((NoSTEPS,),float) * InitCond['Salt']#Salinity
Oxy = np.ones((NoSTEPS,),float) * InitCond['Oxy'] #Oxygen
Explanation: Copy-paste the section "Initializing with initial conditions" from Mussels.py to NPZD2_Mussels.py
Copy-paste the "Mussel" section just below the one for "NPZD2".
The new code should look like this:
End of explanation
# Update and step ----------------------------------------------------
Phy[t+1] = Phy[t] + (dPhydt * dt)
Zoo[t+1] = Zoo[t] + (dZoodt * dt)
SDet[t+1] = SDet[t] + (dSDetdt * dt)
LDet[t+1] = LDet[t] + (dLDetdt * dt)
NH4[t+1] = NH4[t] + (dNH4dt * dt)
NO3[t+1] = NO3[t] + (dNO3dt * dt)
if NO3[t+1] <= 0.0001:
offset = NO3[t+1]
NH4[t+1] = NH4[t+1] + offset
NO3[t+1] = NO3[t+1] - offset
# Mussels
Soma[t+1] = Soma[t] + (dSomadt * dt)
Gonad[t+1] = Gonad[t] + (dGonaddt * dt) - Spawning[t]
B[t+1] = Soma[t+1] + Gonad[t+1]
Explanation: Copy-paste "Update and step" from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" "Update and step" just below the one for "NPZD2".
The new code should look like this:
End of explanation
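As an aside (a toy sketch added here, not part of the tutorial files), every update above is a simple forward-Euler step, X[t+1] = X[t] + dXdt*dt; the same idea on a one-variable decay equation looks like this:
import numpy as np
dt, k = 0.01, 0.5          # time step and decay rate for the toy example
X = np.zeros(1000)
X[0] = 1.0
for t in range(len(X) - 1):
    dXdt = -k * X[t]       # toy tendency, analogous to dPhydt, dZoodt, ...
    X[t+1] = X[t] + dXdt * dt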
# *****************************************************************************
# MAIN MODEL LOOP *************************************************************
for t in range(0,NoSTEPS-1):
muMax = par['mu0'] * (1.066 ** Temp[t]) # text
f_I[t] = (par['alpha']*I[t])/(np.sqrt(muMax**2+((par['alpha']**2)*(I[t]**2)))) #Eq5
L_NO3[t] = max(0,((NO3[t])/(par['kNO3']*NO3[t])) * (1/(1+(NH4[t]/par['kNH4'])))) #Eq3
L_NH4[t] = max(0,NH4[t]/(par['kNH4']+NH4[t])) # Eq 4
mu[t] =muMax * f_I[t] * (L_NO3[t] + L_NH4[t]) # Eq2
g = par['gmax'] * (Phy[t]**2/(par['kP']+(Phy[t]**2)))
n = par['nmax'] * (1 - max(0,(I[t]-par['I0'])/(par['kI']+I[t]-par['I0'])))
dPhydt = (mu[t] * Phy[t]) - \
(g * Zoo[t]) - \
(par['mP'] * Phy[t]) - \
(par['tau']*(SDet[t]+Phy[t])*Phy[t]) # Eq1
dZoodt = (g * par['beta'] * Zoo[t]) - \
(par['lBM']*Zoo[t]) - \
(par['lE']*((Phy[t]**2)/(par['kP']+(Phy[t]**2)))*par['beta']*Zoo[t]) - \
(par['mZ']*(Zoo[t]**2))#Eq10
dSDetdt = (g * (1-par['beta']) * Zoo[t]) + \
(par['mZ']*(Zoo[t]**2)) + \
(par['mP'] * Phy[t]) - \
(par['tau']*(SDet[t]+Phy[t])*SDet[t]) - \
(par['rSD']*SDet[t])
dLDetdt = (par['tau']*((SDet[t]+Phy[t])**2)) - \
(par['rLD']*LDet[t])
dNO3dt = -(muMax * f_I[t] * L_NO3[t] * Phy[t]) + \
(n * NH4[t])
dNH4dt = -(muMax * f_I[t] * L_NH4[t] * Phy[t]) - \
(n * NH4[t]) + \
(par['lBM'] * Zoo[t]) + \
(par['lE']*((Phy[t]**2)/(par['kP']+(Phy[t]**2)))*par['beta']*Zoo[t]) + \
(par['rSD']*SDet[t]) + \
(par['rLD']*LDet[t])
# MUSSELS -------------------------------------------------------------
# Calculate Temperature Limitation
L_Temp[t] = min(max(0.,1.-np.exp(-par['KTempL']*(Temp[t]-par['TempL']))), \
max(0.,1.+((1.-np.exp(par['KTempH']*Temp[t]))/(np.exp(par['KTempH']*par['TempH'])-1.))))
# Calculate Salinity Limitation
L_Salt[t] = max(0.,1.-np.exp(-par['KSaltL']*(Salt[t]-par['SaltL'])))
# Calculate Oxygen Limitation
L_Oxy[t] = max(0.,1.-np.exp(-par['KOxyL']*(Oxy[t]-par['OxyL'])))
# Calculate Oxygen Limitation
L_Food[t] = (Phy[t]+Zoo[t]+SDet[t])/(par['KFood']+Phy[t]+Zoo[t]+SDet[t])
# Calculate Filtration rate
Fmax = par['Fmax_ref']*(B[t]**(2./3.))
F[t] = Fmax * L_Temp[t] * L_Salt[t] * L_Oxy[t] * L_Food[t]
A[t] = F[t] * ((par['epsilonP']*par['AE_P']*Phy[t])+ \
(par['epsilonZ']*par['AE_Z']*Zoo[t])+ \
(par['epsilonD']*par['AE_D']*SDet[t]))
R[t] = (par['Rm']*B[t]) + (par['beta']*A[t])
RE[t] = max(0., (B[t]-par['Bpub'])/(par['KRE'] + B[t] - (2.*par['Bpub'])))
# Spawning
if Gonad[t]/B[t] < par['GT']:
Spawning[t] = 0.
dGonaddt = (A[t]-R[t]) * RE[t]
dSomadt = (A[t]-R[t]) * (1.-RE[t])
elif Gonad[t]/B[t] >= par['GT']:
Spawning[t] = Gonad[t]
dGonaddt = 0.
dSomadt = A[t]-R[t]
Explanation: Copy-paste MAIN MODEL LOOP from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" MAIN MODEL LOOP just below the one for "NPZD2".
The new code should look like this:
End of explanation
# Pack output into dictionary
output = {}
output['time'] = time
output['Phy'] = Phy
output['Zoo'] = Zoo
output['SDet'] = SDet
output['LDet'] = LDet
output['NH4'] = NH4
output['NO3'] = NO3
output['mu'] = mu
output['f_I'] = f_I
output['L_NO3'] = L_NO3
output['L_NH4'] = L_NH4
output['TotN'] = TotN
output['Soma'] = Soma
output['Gonad'] = Gonad
output['B'] = B
output['Spawning'] = Spawning
Explanation: Copy-paste section "Pack to output" from Mussels.py to NPZD2_Mussels.py. Copy-paste the "Mussel" section just below the one for "NPZD2". Note that they are duplicates; don't copy the duplicates (just Soma, Gonad, B, and Spawning).
The new code should look like this:
End of explanation
ax2.plot(output['time']/365,output['Phy'],'g-')
ax2.plot(output['time']/365,output['Zoo'],'r-')
ax2.plot(output['time']/365,output['SDet'],'k-')
ax2.plot(output['time']/365,output['LDet'],'k-.')
ax2.plot(output['time']/365,output['NH4'],'m-')
ax2.plot(output['time']/365,output['NO3'],'c-')
ax2.plot(output['time']/365,output['TotN'],'y-')
ax2.plot(output['time']/365,output['B'],'r.')
ax2.set_xlabel('Time (years)')
ax2.set_ylabel('Nitrogen (mmol N m$^{-3}$)')
ax2.set_title('Fennel et al 2006 Model')
plt.legend(['Phy','Zoo','SDet','LDet','NH4','NO3','TotN','B'])
Explanation: Update plot by adding to the plotting code for ax2:
```
ax2.plot(output['time']/365,output['B'],'r.')
```
The new code should look like this:
End of explanation
# Estimate Total Nitrogen
TotN[t+1] = Phy[t+1] + Zoo[t+1] + SDet[t+1] + LDet[t+1] + NH4[t+1] + NO3[t+1] + B[t+1]
Explanation: Run the model so that you can see a plot.
Anything changed?
Is total nitrogen conserved?
Are you sure?
"B" is not accounted for in the Total Nitrogen calculation.
Add "B[t+1]" to the Total Nitrogen calculation. The new code should look like this:
End of explanation
# Spawning
if Gonad[t]/B[t] < par['GT']:
Spawning[t] = 0.
dGonaddt = (A[t]-R[t]) * RE[t]
dSomadt = (A[t]-R[t]) * (1.-RE[t])
elif Gonad[t]/B[t] >= par['GT']:
Spawning[t] = Gonad[t]
dGonaddt = 0.
dSomadt = A[t]-R[t]
#Feedback to NPZD2 model
# Faeces and Pseudofaeces
Fae = F[t] * ((par['epsilonP']*(1-par['AE_P'])*Phy[t])+ \
(par['epsilonZ']*(1-par['AE_Z'])*Zoo[t])+ \
(par['epsilonD']*(1-par['AE_D'])*SDet[t]))
dLDetdt = dLDetdt + Fae
# Remove eaten Phy, Zoo and SDet from water-column
dPhydt = dPhydt-(F[t] *par['epsilonP']*Phy[t])
dZoodt = dZoodt-(F[t] *par['epsilonZ']*Zoo[t])
dSDetdt = dSDetdt -(F[t] *par['epsilonD']*SDet[t])
# Excretion into Ammonia
dNH4dt = dNH4dt + R[t]
# Update and step ----------------------------------------------------
Phy[t+1] = Phy[t] + (dPhydt * dt)
Zoo[t+1] = Zoo[t] + (dZoodt * dt) + Spawning[t]
SDet[t+1] = SDet[t] + (dSDetdt * dt)
LDet[t+1] = LDet[t] + (dLDetdt * dt)
NH4[t+1] = NH4[t] + (dNH4dt * dt)
NO3[t+1] = NO3[t] + (dNO3dt * dt)
if NO3[t+1] <= 0.0001:
offset = NO3[t+1]
NH4[t+1] = NH4[t+1] + offset
NO3[t+1] = NO3[t+1] - offset
# Mussels
Soma[t+1] = Soma[t] + (dSomadt * dt)
Gonad[t+1] = Gonad[t] + (dGonaddt * dt) - Spawning[t]
B[t+1] = Soma[t+1] + Gonad[t+1]
Explanation: Run the model to plot again
Is total nitrogen conserved?
We added Mussels to NPZD2, but they are not connected yet. "Food" is eaten by the mussels but is not taken out of the NPZD2 pool.
To connect them, we have to add extra terms to our original equations. Below, as an example, is the original Phytoplankton equation and then the new equation with an added term to account for mussel filtration.
(original from Fennel et al. 2006)
$$ \frac{\partial Phy}{\partial t} = \mu Phy - gZoo - m_P Phy - \tau (SDet + Phy)Phy$$
(new) $$ \frac{\partial Phy}{\partial t} = \mu Phy - gZoo - m_P Phy - \tau (SDet + Phy)Phy - F \epsilon_P Phy$$
To actually code the "feedback", we need to add code to add the Faeces to LDet, remove the eaten Phy/Zoo/SDet from the water column, and excrete into NH4. The new code goes after "Spawning" but before "Update and step". Also, we need to add spawning to Zoo.
The new code should look like this:
End of explanation |
12,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing clusters in Python
I am wanting to see the results of clustering methods such as K-Means; this is my playground.
Initial examples are taken from K Means Clustering in Python
Step1: Using Iris Dataset
Step2: Original author converted the data to Pandas Dataframes. Note that we have separated out the inputs (x) and the outputs/labels (y).
Step3: Visualise the data
It is always important to have a look at the data. We will do this by plotting two scatter plots. One looking at the Sepal values and another looking at Petal. We will also set it to use some colours so it is clearer.
Step4: Build the K Means Model - non-Spark example
This is the easy part, providing you have the data in the correct format (which we do). Here we only need two lines. First we create the model and specify the number of clusters the model should find (n_clusters=3) next we fit the model to the data.
Step5: Visualise the classifier results
Let's plot the actual classes against the predicted classes from the K Means model.
Here we are plotting the Petal Length and Width, however each plot changes the colors of the points using either c=colormap[y.Targets] for the original class and c=colormap[model.labels_] for the predicted classess.
Step6: Fixing the coloring
Here we are going to change the class labels, we are not changing the any of the classification groups we are simply giving each group the correct number. We need to do this for measuring the performance.
Using this code below we using the np.choose() to assign new values, basically we are changing the 1’s in the predicted values to 0’s and the 0’s to 1’s. Class 2 matched so we can leave. By running the two print functions you can see that all we have done is swap the values.
NOTE
Step7: Re-plot
Now we can re plot the data as before but using predY instead of model.labels_. | Python Code:
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans
import sklearn.metrics as sm
import pandas as pd
import numpy as np
# Only needed if you want to display your plots inline if using Notebook
# change inline to auto if you have Spyder installed
%matplotlib inline
Explanation: Visualizing clusters in Python
I want to see the results of clustering methods such as K-Means; this is my playground.
Initial examples are taken from K Means Clustering in Python
End of explanation
# import some data to play with
iris = datasets.load_iris()
# look at individual aspects by uncommenting the below
#iris.data
#iris.feature_names
#iris.target
#iris.target_names
Explanation: Using Iris Dataset
End of explanation
# Store the inputs as a Pandas Dataframe and set the column names
x = pd.DataFrame(iris.data)
x.columns = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']
y = pd.DataFrame(iris.target)
y.columns = ['Targets']
Explanation: Original author converted the data to Pandas Dataframes. Note that we have separated out the inputs (x) and the outputs/labels (y).
End of explanation
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot Sepal
plt.subplot(1, 2, 1)
plt.scatter(x.Sepal_Length, x.Sepal_Width, c=colormap[y.Targets], s=40)
plt.title('Sepal')
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Petal');
Explanation: Visualise the data
It is always important to have a look at the data. We will do this by plotting two scatter plots. One looking at the Sepal values and another looking at Petal. We will also set it to use some colours so it is clearer.
End of explanation
# K Means Cluster
model = KMeans(n_clusters=3)
model.fit(x)
# This is what KMeans thought
model.labels_
Explanation: Build the K Means Model - non-Spark example
This is the easy part, providing you have the data in the correct format (which we do). Here we only need two lines. First we create the model and specify the number of clusters the model should find (n_clusters=3); next we fit the model to the data.
End of explanation
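Besides the labels, the fitted model also exposes the cluster centres (a quick peek added here as an aside):
# One row per cluster, one column per feature
print(model.cluster_centers_)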
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot the Original Classifications
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')
# Plot the Models Classifications
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[model.labels_], s=40)
plt.title('K Mean Classification');
Explanation: Visualise the classifier results
Let's plot the actual classes against the predicted classes from the K Means model.
Here we are plotting the Petal Length and Width; however, each plot changes the colors of the points using either c=colormap[y.Targets] for the original class or c=colormap[model.labels_] for the predicted classes.
End of explanation
# The fix, we convert all the 1s to 0s and 0s to 1s.
predY = np.choose(model.labels_, [1, 0, 2]).astype(np.int64)
print (model.labels_)
print (predY)
Explanation: Fixing the coloring
Here we are going to change the class labels; we are not changing any of the classification groups, we are simply giving each group the correct number. We need to do this for measuring the performance.
Using the code below we use np.choose() to assign new values; basically, we are changing the 1’s in the predicted values to 0’s and the 0’s to 1’s. Class 2 matched, so we can leave it. By running the two print functions you can see that all we have done is swap the values.
NOTE: your results might be different to mine, if so you will have to figure out which class matches which and adjust the order of the values in the np.choose() function.
End of explanation
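Since sklearn.metrics was imported as sm above, a natural follow-up (a small sketch added here, not in the original post) is to quantify the agreement between the true labels and the relabelled predictions:
# Performance of the relabelled K-Means output against the true classes
print(sm.accuracy_score(y.Targets, predY))
print(sm.confusion_matrix(y.Targets, predY))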
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot Orginal
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')
# Plot Predicted with corrected values
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[predY], s=40)
plt.title('K Mean Classification');
Explanation: Re-plot
Now we can re plot the data as before but using predY instead of model.labels_.
End of explanation |
12,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Case Study 1
Step4: Report some statistics about the tweets you collected
The topic of interest
Step5: 2. Find the most popular tweets in your collection of tweets
Please plot a table of the top 10 tweets that are the most popular among your collection, i.e., the tweets with the largest number of retweet counts.
Step6: Another measure of tweet "popularity" could be the number of times it is favorited. The following calculates the top-10 tweets with the most "favorites"
Step7: 3. Find the most popular Tweet Entities in your collection of tweets
Please plot a table of the top 10 hashtags, top 10 user mentions that are the most popular in your collection of tweets.
Step8: *------------------------
Problem 3
Step9: Use the following code to retreive all friends and followers of @RobGronkowski, one of the Patriots players.
Step10: Compute the mutual friends within the two groups, i.e., the users who are in both friend list and follower list, plot their ID numbers and screen names in a table
Step11: *------------------------
Problem 4
Step12: This code is used to load the above followers, having already been collected. It then makes a venn-diagram comparing the mutual followers between the three teams.
Step13: Next we wanted to estimate popularity of the Broncos and Panthers (the two remaining Super Bowl teams) in the Boston area. Our chosen metric of "popularity" is the speed at which tweets are generated. The following periodically collects tweets (constrained to the Boston geo zone) filtered for "Broncos" and "Panthers." It tracks the number of such tweets collected in each time window, allowing us to estimate a Tweets per Minute ratio. | Python Code:
# HELPER FUNCTIONS
import io
import json
import twitter
def oauth_login(token, token_secret, consumer_key, consumer_secret):
    """Snag an auth from Twitter"""
auth = twitter.oauth.OAuth(token, token_secret,
consumer_key, consumer_secret)
return auth
def save_json(filename, data):
    """Save json data to a filename"""
print 'Saving data into {0}.json...'.format(filename)
with io.open('{0}.json'.format(filename),
'w', encoding='utf-8') as f:
f.write(unicode(json.dumps(data, ensure_ascii=False)))
def load_json(filename):
    """Load json data from a filename"""
print 'Loading data from {0}.json...'.format(filename)
with open('{0}.json'.format(filename)) as f:
return json.load(f)
# API CONSTANTS
CONSUMER_KEY = '92TpJf8O0c9AWN3ZJjcN8cYxs'
CONSUMER_SECRET ='dyeCqzI2w7apETbTUvPai1oCDL5oponvZhHSmYm5XZTQbeiygq'
OAUTH_TOKEN = '106590533-SEB5EGGoyJ8EsjOKN05YuOQYu2rg5muZgMDoNrqN'
OAUTH_TOKEN_SECRET = 'BficAky6uGyGfRzDGJqZYVKo0HS6G6Ex3ijYW3zy3kjNJ'
# CREATE AND CHECK API AND STREAM
auth = oauth_login(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
twitter_api = twitter.Twitter(auth=auth)
twitter_stream = twitter.TwitterStream(auth=auth)
if twitter_api and twitter_stream:
print 'Bingo! API and stream set up!'
else:
print 'Hmmmm, something is wrong here.'
# COLLECT TWEETS FROM STREAM WITH TRACK, 'PATRIOTS'
# track = "Patriots" # Tweets for Patriots
# TOTAL_TWEETS = 2500
# patriots = []
# patriots_counter = 0
# while patriots_counter < TOTAL_TWEETS: # collect tweets while current time is less than endTime
# # Create a stream instance
# auth = oauth_login(consumer_key=CONSUMER_KEY, consumer_secret=CONSUMER_SECRET,
# token=OAUTH_TOKEN, token_secret=OAUTH_TOKEN_SECRET)
# twitter_stream = TwitterStream(auth=auth)
# stream = twitter_stream.statuses.filter(track=track)
# counter = 0
# for tweet in stream:
# if patriots_counter == TOTAL_TWEETS:
# print 'break'
# break
# elif counter % 500 == 0 and counter != 0:
# print 'get new stream'
# break
# else:
# patriots.append(tweet)
# patriots_counter += 1
# counter += 1
# print patriots_counter, counter
# save_json('json/patriots', patriots)
# Use this code to load tweets that have already been collected
filename = "stream/json/patriots"
results = load_json(filename)
print 'Number of tweets loaded:', len(results)
# Compute additional statistics about the tweets collected
# Determine the average number of words in the text of each tweet
def average_words(tweet_texts):
total_words = sum([len(s.split()) for s in tweet_texts])
return 1.0*total_words/len(tweet_texts)
tweet_texts = [ tweet['text']
for tweet in results ]
print 'Average number of words:', average_words(tweet_texts)
# Calculate the lexical diversity of all words contained in the tweets
def lexical_diversity(tokens):
return 1.0*len(set(tokens))/len(tokens)
words = [ word
for tweet in tweet_texts
for word in tweet.split() ]
print 'Lexical Diversity:', lexical_diversity(words)
Explanation: Case Study 1 : Collecting Data from Twitter
Due Date: February 10, before the class
*------------
TEAM Members:
Haley Huang
Helen Hong
Tom Meagher
Tyler Reese
Required Readings:
* Chapter 1 and Chapter 9 of the book Mining the Social Web
* The codes for Chapter 1 and Chapter 9
NOTE
* Please don't forget to save the notebook frequently when working in IPython Notebook, otherwise the changes you made can be lost.
*----------------------
Problem 1: Sampling Twitter Data with Streaming API about a certain topic
Select a topic that you are interested in, for example, "WPI" or "Lady Gaga"
Use Twitter Streaming API to sample a collection of tweets about this topic in real time. (It would be recommended that the number of tweets should be larger than 200, but smaller than 1 million.
Store the tweets you downloaded into a local file (txt file or json file)
Our topic of interest is the New England Patriots. All Patriots fans ourselves, we were disappointed upon their elimination from the post-season, and unsure how to approach the upcoming Super Bowl. In order to sample how others may be feeling about the Patriots, we use search term "Patriots" for the Twitter streaming API.
End of explanation
from collections import Counter
from prettytable import PrettyTable
import nltk
tweet_texts = [ tweet['text']
for tweet in results ]
words = [ word
for tweet in tweet_texts
for word in tweet.split()
if word not in ['RT', '&'] # filter out RT and ampersand
]
# Use the natural language toolkit to eliminate stop words
# nltk.download('stopwords') # download stop words if you do not have it
stop_words = nltk.corpus.stopwords.words('english')
non_stop_words = [w for w in words if w.lower() not in stop_words]
# frequency of words
count = Counter(non_stop_words).most_common()
# table of the top 30 words with their counts
pretty_table = PrettyTable(field_names=['Word', 'Count'])
[ pretty_table.add_row(w) for w in count[:30] ]
pretty_table.align['Word'] = 'l'
pretty_table.align['Count'] = 'r'
print pretty_table
Explanation: Report some statistics about the tweets you collected
The topic of interest: Patriots
The total number of tweets collected: 2,500
Average number of words per tweet: 14.5536
Lexical Diversity of all words contained in the collection of tweets: 0.1906
*-----------------------
Problem 2: Analyzing Tweets and Tweet Entities with Frequency Analysis
1. Word Count:
* Use the tweets you collected in Problem 1, and compute the frequencies of the words being used in these tweets.
* Plot a table of the top 30 words with their counts
End of explanation
from collections import Counter
from prettytable import PrettyTable
# Create a list of all tweets with a retweeted_status key, and index the originator of that tweet and the text.
retweets = [
(tweet['retweet_count'],
tweet['retweeted_status']['user']['screen_name'],
tweet['text'])
#Ensure that a retweet exists
for tweet in results
if tweet.has_key('retweeted_status')
]
pretty_table = PrettyTable(field_names = ['Count','Screen Name','Text'])
# Sort tweets by descending number of retweets and display the top 10 results in a table.
[pretty_table.add_row(row) for row in sorted(retweets, reverse = True)[:10]]
pretty_table.max_width['Text'] = 50
pretty_table.align = 'l'
print pretty_table
Explanation: 2. Find the most popular tweets in your collection of tweets
Please plot a table of the top 10 tweets that are the most popular among your collection, i.e., the tweets with the largest number of retweet counts.
End of explanation
from prettytable import PrettyTable
# Determine the number of "favorites" for each tweet collected.
favorites = [
(tweet['favorite_count'],
tweet['text'])
for tweet in results
]
pretty_table = PrettyTable(field_names = ['Count','Text'])
# Sort tweets by descending number of favorites and display the top 10 results in a table.
[pretty_table.add_row(row) for row in sorted(favorites, reverse = True)[:10]]
pretty_table.max_width['Text'] = 75
pretty_table.align = 'l'
print pretty_table
Explanation: Another measure of tweet "popularity" could be the number of times it is favorited. The following calculates the top-10 tweets with the most "favorites"
End of explanation
from collections import Counter
from prettytable import PrettyTable
# Extract the screen names which appear among the collection of tweets
screen_names = [user_mention['screen_name']
for tweet in results
for user_mention in tweet['entities']['user_mentions']]
# Extract the hashtags which appear among the collection of tweets
hashtags = [ hashtag['text']
for tweet in results
for hashtag in tweet['entities']['hashtags']]
# Simultaneously determine the frequency of screen names/hashtags, and display the top 10 most common in a table.
for label, data in (('Screen Name',screen_names),
('Hashtag',hashtags)):
pretty_table = PrettyTable(field_names =[label,'Count'])
counter = Counter(data)
[ pretty_table.add_row(entity) for entity in counter.most_common()[:10]]
pretty_table.align[label] ='l'
pretty_table.align['Count'] = 'r'
print pretty_table
Explanation: 3. Find the most popular Tweet Entities in your collection of tweets
Please plot a table of the top 10 hashtags, top 10 user mentions that are the most popular in your collection of tweets.
End of explanation
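The same pattern extends to other entity types; for example (a small optional extension, not required by the assignment), the most-shared URLs:
urls = [ url['expanded_url']
            for tweet in results
                for url in tweet['entities']['urls'] ]
pretty_table = PrettyTable(field_names=['URL', 'Count'])
[ pretty_table.add_row(entity) for entity in Counter(urls).most_common()[:10] ]
pretty_table.align['URL'] = 'l'
print pretty_table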
#----------------------------------------------
import sys
import time
from urllib2 import URLError
from httplib import BadStatusLine
import json
from functools import partial
from sys import maxint
# The following is the "general-purpose API wrapper" presented in "Mining the Social Web" for making robust twitter requests.
# This function can be used to accompany any twitter API function. It force-breaks after receiving more than max_errors
# error messages from the Twitter API. It also sleeps and later retries when rate limits are enforced.
def make_twitter_request(twitter_api_func, max_errors = 10, *args, **kw):
def handle_twitter_http_error(e, wait_period = 2, sleep_when_rate_limited = True):
if wait_period > 3600:
print >> sys.stderr, 'Too many retries. Quitting.'
raise e
if e.e.code == 401:
print >> sys.stderr, 'Encountered 401 Error (Not Authorized)'
return None
elif e.e.code == 404:
print >> sys.stderr, 'Encountered 404 Error (Not Found)'
return None
elif e.e.code == 429:
print >> sys.stderr, 'Encountered 429 Error (Rate Limit Exceeded)'
if sleep_when_rate_limited:
print >> sys.stderr, "Retrying again in 15 Minutes...ZzZ..."
sys.stderr.flush()
time.sleep(60*15 + 5)
print >> sys.stderr, '...ZzZ...Awake now and trying again.'
return 2
else:
raise e
elif e.e.code in (500,502,503,504):
print >> sys.stderr, 'Encountered %i Error. Retrying in %i seconds' % \
(e.e.code, wait_period)
            time.sleep(wait_period)
            wait_period *= 1.5
return wait_period
else:
raise e
wait_period = 2
error_count = 0
while True:
try:
return twitter_api_func(*args,**kw)
except twitter.api.TwitterHTTPError, e:
error_count = 0
wait_period = handle_twitter_http_error (e, wait_period)
if wait_period is None:
return
except URLError, e:
error_count += 1
print >> sys.stderr, "URLError encountered. Continuing"
if error_count > max_errors:
print >> sys.stderr, "Too many consecutive errors...bailing out."
raise
except BadStatusLine, e:
error_count += 1
print >> sys.stderr, "BadStatusLineEncountered. Continuing"
if error_count > max_errors:
print >> sys.stderr, "Too many consecutive errors...bailing out."
raise
# This function uses the above Robust Request wrapper to retreive all friends and followers of a given user. This code
# can be found in Chapter 9, the `Twitter Cookbook' in "Mining the social web"
from functools import partial
from sys import maxint
def get_friends_followers_ids(twitter_api, screen_name = None, user_id = None, friends_limit = maxint, followers_limit = maxint):
assert(screen_name != None) != (user_id != None), \
"Must have screen_name or user_id, but not both"
# See https://dev.twitter.com/docs/api/1.1/get/friends/ids and
# https://dev.twitter.com/docs/api/1.1/get/followers/ids for details
# on API parameters
get_friends_ids = partial(make_twitter_request, twitter_api.friends.ids,
count=5000)
get_followers_ids = partial(make_twitter_request, twitter_api.followers.ids,
count=5000)
friends_ids, followers_ids = [], []
for twitter_api_func, limit, ids, label in [
[get_friends_ids, friends_limit, friends_ids, "friends"],
[get_followers_ids, followers_limit, followers_ids, "followers"]
]:
if limit == 0: continue
cursor = -1
while cursor != 0:
# Use make_twitter_request via the partially bound callable...
if screen_name:
response = twitter_api_func(screen_name=screen_name, cursor=cursor)
else: # user_id
response = twitter_api_func(user_id=user_id, cursor=cursor)
if response is not None:
ids += response['ids']
cursor = response['next_cursor']
print 'Fetched {0} total {1} ids for {2}'.format(len(ids),
label, (user_id or screen_name))
if len(ids) >= limit or response is None:
break
return friends_ids[:friends_limit], followers_ids[:followers_limit]
Explanation: *------------------------
Problem 3: Getting "All" friends and "All" followers of a popular user in twitter
choose a popular twitter user who has many followers, such as "ladygaga".
Get the list of all friends and all followers of the twitter user.
Plot 20 out of the followers, plot their ID numbers and screen names in a table.
Plot 20 out of the friends (if the user has more than 20 friends), plot their ID numbers and screen names in a table.
Our chosen twitter user is RobGronkowski, one of the Patriots players
End of explanation
# Retrieve the friends and followers of a user, and save to a json file.
# screen_name = 'RobGronkowski'
# gronk_friends_ids, gronk_followers_ids = get_friends_followers_ids(twitter_api, screen_name = screen_name)
# filename = "json/gronk_friends"
# save_json(filename, gronk_friends_ids)
# filename = "json/gronk_followers"
# save_json(filename, gronk_followers_ids)
# Use this code to load the already-retrieved friends and followers from a json file.
gronk_followers_ids = load_json('json/gronk_followers')
gronk_friends_ids = load_json('json/gronk_friends')
# The following function retrieves the screen names of Twitter users, given their user IDs. If a certain number of screen
# names is desired (for example, 20) max_ids limits the number retreived.
def get_screen_names(twitter_api, user_ids = None, max_ids = None):
response = []
items = user_ids
# Due to individual user security settings, not all user profiles can be obtained. Iterate over all user IDs
# to ensure at least (max_ids) screen names are obtained.
while len(response) < max_ids:
items_str = ','.join([str(item) for item in items[:100]])
items = items[100:]
responses = make_twitter_request(twitter_api.users.lookup, user_id = items_str)
response += responses
items_to_info = {}
# The above loop has retrieved all user information.
for user_info in response:
items_to_info[user_info['id']] = user_info
# Extract only the screen names obtained. The keys of items_to_info are the user ID numbers.
names = [items_to_info[number]['screen_name']
for number in items_to_info.keys()
]
numbers =[number for number in items_to_info.keys()]
return names , numbers
from prettytable import PrettyTable
# Given a set of user ids, this function calls get_screen_names and plots a table of the first (max_ids) ID's and screen names.
def table_ids_screen_names(twitter_api, user_ids = None, max_ids = None):
names, numbers = get_screen_names(twitter_api, user_ids = user_ids, max_ids = max_ids)
ids_screen_names = zip(numbers, names)
pretty_table = PrettyTable(field_names = ['User ID','Screen Name'])
[ pretty_table.add_row (row) for row in ids_screen_names[:max_ids]]
pretty_table.align = 'l'
print pretty_table
# Given a list of friends_ids and followers_ids, this function counts and prints the size of each collection.
# It then plots a tables of the first (max_ids) listed friends and followers.
def display_friends_followers(screen_name, friends_ids, followers_ids ,max_ids = None):
friends_ids_set, followers_ids_set = set(friends_ids),set(followers_ids)
print
print '{0} has {1} friends. Here are {2}:'.format(screen_name, len(friends_ids_set),max_ids)
print
table_ids_screen_names(twitter_api, user_ids = friends_ids, max_ids = max_ids)
print
print '{0} has {1} followers. Here are {2}:'.format(screen_name,len(followers_ids_set),max_ids)
print
table_ids_screen_names(twitter_api, user_ids = followers_ids, max_ids = max_ids)
print
display_friends_followers(screen_name = screen_name, friends_ids = gronk_friends_ids, followers_ids = gronk_followers_ids, max_ids = 20)
Explanation: Use the following code to retrieve all friends and followers of @RobGronkowski, one of the Patriots players.
End of explanation
# Given a list of friends_ids and followers_ids, this function use set intersection to find the number of mutual friends.
# It then plots a table of the first (max_ids) listed mutual friends.
def display_mutual_friends(screen_name, friends_ids, followers_ids ,max_ids = None):
friends_ids_set, followers_ids_set = set(friends_ids),set(followers_ids)
print
print '{0} has {1} mutual friends. Here are {2}:'.format(screen_name, len(friends_ids_set.intersection(followers_ids_set)),max_ids)
print
mutual_friends_ids = list(friends_ids_set.intersection(followers_ids_set))
table_ids_screen_names(twitter_api, user_ids = mutual_friends_ids, max_ids = max_ids)
display_mutual_friends(screen_name = screen_name, friends_ids = gronk_friends_ids, followers_ids = gronk_followers_ids, max_ids = 20)
Explanation: Compute the mutual friends within the two groups, i.e., the users who are in both friend list and follower list, plot their ID numbers and screen names in a table
End of explanation
# ## PATRIOTS
# patriots_friends_ids, patriots_followers_ids = get_friends_followers_ids(twitter_api, screen_name = 'Patriots')
# save_json('json/Patriots_Followers',patriots_followers_ids)
# save_json('json/Patriots_Friends', patriots_friends_ids)
# ## BRONCOS
# broncos_friends_ids, broncos_followers_ids = get_friends_followers_ids(twitter_api, screen_name = 'Broncos')
# save_json('json/Broncos_Followers',broncos_followers_ids)
# save_json('json/Broncos_Friends', broncos_friends_ids)
# ## PANTHERS
# panthers_friends_ids, panthers_followers_ids = get_friends_followers_ids(twitter_api, screen_name = 'Panthers')
# save_json('json/Panthers_Followers',panthers_followers_ids)
# save_json('json/Panthers_Friends', panthers_friends_ids)
Explanation: *------------------------
Problem 4: Explore the data
Run some additional experiments with your data to gain familiarity with the twitter data ant twitter API
The following code was used to collect all Twitter followers of the Patriots, Broncos, and Panthers. Once collected, the followers were saved to files.
End of explanation
patriots_followers_ids = load_json('json/Patriots_Followers')
broncos_followers_ids = load_json('json/Broncos_Followers')
panthers_followers_ids = load_json('json/Panthers_Followers')
%matplotlib inline
from matplotlib_venn import venn3
patriots_followers_set = set(patriots_followers_ids)
broncos_followers_set = set(broncos_followers_ids)
panthers_followers_set = set(panthers_followers_ids)
venn3([patriots_followers_set, broncos_followers_set, panthers_followers_set], ('Patriots Followers', 'Broncos Followers',
'Panthers Followers'))
Explanation: This code is used to load the above followers, having already been collected. It then makes a venn-diagram comparing the mutual followers between the three teams.
End of explanation
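For reference (a small numeric companion to the diagram, added here), the pairwise overlaps can also be printed directly:
print 'Patriots & Broncos :', len(patriots_followers_set & broncos_followers_set)
print 'Patriots & Panthers:', len(patriots_followers_set & panthers_followers_set)
print 'Broncos & Panthers :', len(broncos_followers_set & panthers_followers_set)
print 'All three          :', len(patriots_followers_set & broncos_followers_set & panthers_followers_set)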
# COLLECT TWEETS FROM STREAM WITH BRONCOS AND PANTHERS IN TWEET TEXT FROM BOSTON GEO ZONE
# from datetime import timedelta, datetime
# from time import sleep
# from twitter import TwitterStream
# track = "Broncos, Panthers" # Tweets for Broncos OR Panthers
# locations = '-73.313057,41.236511,-68.826305,44.933163' # New England / Boston geo zone
# NUMBER_OF_COLLECTIONS = 5 # number of times to collect tweets from stream
# COLLECTION_TIME = 2.5 # length of each collection in minutes
# WAIT_TIME = 10 # sleep time in between collections in minutes
# date_format = '%m/%d/%Y %H:%M:%S' # i.e. 1/1/2016 13:00:00
# broncos, panthers, counts = [], [], []
# for counter in range(1, NUMBER_OF_COLLECTIONS + 1):
# print '------------------------------------------'
# print 'COLLECTION NUMBER %s out of %s' % (counter, NUMBER_OF_COLLECTIONS)
# broncos_counter, panthers_counter = 0, 0 # set the internal counter for Broncos and Panthers to 0
# count_dict = {'start_time': datetime.now().strftime(format=date_format)} # add collection start time
# # Create a new stream instance every collection to avoid rate limits
# auth = oauth_login(consumer_key=CONSUMER_KEY, consumer_secret=CONSUMER_SECRET, token=OAUTH_TOKEN, token_secret=OAUTH_TOKEN_SECRET)
# twitter_stream = TwitterStream(auth=auth)
# stream = twitter_stream.statuses.filter(track=track, locations=locations)
# endTime = datetime.now() + timedelta(minutes=COLLECTION_TIME)
# while datetime.now() <= endTime: # collect tweets while current time is less than endTime
# for tweet in stream:
# if 'text' in tweet.keys(): # check to see if tweet contains text
# if datetime.now() > endTime:
# break # if the collection time is up, break out of the loop
# elif 'Broncos' in tweet['text'] and 'Panthers' in tweet['text']:
# broncos.append(tweet), panthers.append(tweet) # if a tweet contains both Broncos and Panthers, add the tweet to both arrays
# broncos_counter += 1
# panthers_counter += 1
# print 'Panthers: %s, Broncos: %s' % (panthers_counter, broncos_counter)
# elif 'Broncos' in tweet['text']:
# broncos.append(tweet)
# broncos_counter += 1
# print 'Broncos: %s' % broncos_counter
# elif 'Panthers' in tweet['text']:
# panthers.append(tweet)
# panthers_counter += 1
# print 'Panthers: %s' % panthers_counter
# else:
# print 'continue' # if the tweet text does not match 'Panthers' or 'Broncos', keep going
# continue
# count_dict['broncos'] = broncos_counter
# count_dict['panthers'] = panthers_counter
# count_dict['end_time'] = datetime.now().strftime(format=date_format) # add collection end time
# counts.append(count_dict)
# print counts
# if counter != NUMBER_OF_COLLECTIONS:
# print 'Sleeping until %s' % (datetime.now() + timedelta(minutes=WAIT_TIME))
# sleep(WAIT_TIME * 60) # sleep for WAIT_TIME
# else:
# print '------------------------------------------'
# # Save arrays to files
# save_json('stream/json/counts', counts)
# save_json('stream/json/broncos', broncos)
# save_json('stream/json/panthers', panthers)
# LOAD JSON FOR BRONCOS AND PANTHERS
broncos = load_json('stream/json/broncos')
panthers = load_json('stream/json/panthers')
counts = load_json('stream/json/counts')
pretty_table = PrettyTable(field_names =['Broncos Tweets', 'Collection Start Time', 'Panthers Tweets','Collection End Time'])
[ pretty_table.add_row([row['broncos'], row['start_time'], row['panthers'], row['end_time']]) for row in counts] # add rows by key so values line up with field_names
pretty_table.align['Collection Start Time'] = 'l'
pretty_table.align['Collection End Time'] = 'l'
pretty_table.align['Broncos Tweets'] = 'r'
pretty_table.align['Panthers Tweets'] = 'r'
print pretty_table
print 'TOTALS – Broncos: %s, Panthers: %s' % (len(broncos), len(panthers))
%matplotlib inline
# CUMULATIVE TWEET DISTRIBUTION FOR BRONCOS AND PANTHERS
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import mlab
# create numpy arrays for Broncos and Panthers tweets
broncos_tweets = np.array([row['broncos'] for row in counts])
panthers_tweets = np.array([row['panthers'] for row in counts])
bins = len(counts) * 10
# evaluate histogram
broncos_values, broncos_base = np.histogram(broncos_tweets, bins=bins)
panthers_values, panthers_base = np.histogram(panthers_tweets, bins=bins)
# evaluate cumulative function
broncos_cumulative = np.cumsum(broncos_values)
panthers_cumulative = np.cumsum(panthers_values)
# plot cumulative function
plt.plot(broncos_base[:-1], broncos_cumulative, c='darkorange')
plt.plot(panthers_base[:-1], panthers_cumulative, c='blue')
plt.grid(True)
plt.title('Cumulative Distribution of Broncos & Panthers Tweets')
plt.xlabel('Tweets')
plt.ylabel('Cumulative Collections')
plt.show()
Explanation: Next we wanted to estimate popularity of the Broncos and Panthers (the two remaining Super Bowl teams) in the Boston area. Our chosen metric of "popularity" is the speed at which tweets are generated. The following periodically collects tweets (constrained to the Boston geo zone) filtered for "Broncos" and "Panthers." It tracks the number of such tweets collected in each time window, allowing us to estimate a Tweets per Minute ratio.
End of explanation |
12,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Training and Test Data
In this section, the training/validation data is loaded. The load_data function pre-balances the data set by removing images from over-represented classes.
Training Data
Step1: Load Test Set
Step2: Transformations
In this section, we will apply transformations to the existing images to increase the amount of training data, as well as add a bit of noise, in the hope of improving the overall training.
Step3: Training/Test Set Distribution
The following code segment splits the data into training and test data sets. Currently this is a standard 80/20 split for training and test respectively, after performing a random shuffle using the unison_shuffled_copies helper method.
Step4: Define and Load Trained Model
Step5: Training the Model
The following code segment trains the model using the run_network helper function.
Step6: Experiment Results
Raw DDSM Images
Initial results based on "normal" being masked as "benign"
Step7: Analyze Predictions with Test Set
Step8: Confusion Matrix | Python Code:
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True,
normalVsAbnormal=normalVsAbnormal)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, trainImagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
thesePathos=thesePathos,
normalVsAbnormal=normalVsAbnormal)
print X_data.shape
print Y_data.shape
Explanation: Load Training and Test Data
In this section, the training/validation data is loaded. The load_data function pre-balances the data set by removing images from over-represented classes.
Training Data
End of explanation
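As a loose illustration of the balancing idea (the real logic lives in bc.load_training_metadata / bc.load_data and may differ), over-represented classes can simply be trimmed to the size of the smallest class before any pixels are loaded:
import random
# Hypothetical sketch: rows_by_class maps a class label to its list of metadata rows.
def balance_via_removal_sketch(rows_by_class):
    smallest = min(len(rows) for rows in rows_by_class.values())
    balanced = []
    for label, rows in rows_by_class.items():
        random.shuffle(rows)            # drop a random subset of the over-represented classes
        balanced.extend(rows[:smallest])
    return balanced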
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
thesePathos=thesePathos,
normalVsAbnormal=normalVsAbnormal)
print X_test.shape
print Y_test.shape
Explanation: Load Test Set
End of explanation
imgDataGenCount = 3
transformCount = imgDataGenCount
newImgs = np.zeros([X_data.shape[0] * transformCount, X_data.shape[1], X_data.shape[2]])
newYs = np.zeros([Y_data.shape[0] * transformCount, Y_data.shape[1]], dtype=np.int8)
print newImgs.shape
print newYs.shape
img = X_data[0]
img.shape
ndx = 0
for i in range(X_data.shape[0]):
img = X_data[i]
for n in range(imgDataGenCount):
imgX = models.imageDataGenTransform(img, Y_data[i])
imgX = imgX.reshape(150, 150)
#print imgX.shape
newImgs[ndx] = imgX
newYs[ndx] = Y_data[i]
#misc.imsave("testX.png", imgX)
ndx += 1
#break
print("Done", str(datetime.datetime.now()))
X_data2 = np.concatenate((X_data, newImgs))
Y_data2 = np.concatenate((Y_data, newYs))
print X_data2.shape
print Y_data2.shape
performedTransforms = True
if performedTransforms:
X_train = X_data2
Y_train = Y_data2
else:
X_train = X_data
Y_train = Y_data
Explanation: Transformations
In this section, we will apply transformations to the existing images to increase the amount of training data, as well as add a bit of noise, in the hope of improving the overall training.
End of explanation
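The per-image augmentation itself comes from models.imageDataGenTransform. A plausible sketch of such a helper, built on Keras' ImageDataGenerator random transforms (the actual parameters and Keras version used in bc_models may differ), is:
# Hypothetical sketch of a random-augmentation helper for one grayscale image,
# assuming channels-first image ordering as used elsewhere in this notebook.
from keras.preprocessing.image import ImageDataGenerator
_datagen = ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                              height_shift_range=0.1, horizontal_flip=True)
def image_transform_sketch(img):
    x = img.reshape(1, img.shape[0], img.shape[1])
    return _datagen.random_transform(x).reshape(img.shape)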
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
import collections
def yDist(y):
bcCounts = collections.defaultdict(int)
for a in range(0, y.shape[0]):
bcCounts[y[a][0]] += 1
return bcCounts
print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))
Explanation: Training/Test Set Distribution
The following code segment splits the data into training and test data sets. Currently this is a standard 80/20 split for training and test respectively, after performing a random shuffle using the unison_shuffled_copies helper method.
End of explanation
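For reference, the unison_shuffled_copies idea is just a shared random permutation. A minimal sketch of that helper and of an 80/20 split (the fixed train/test files used above were prepared separately) could be:
# Hypothetical sketch: shuffle X and Y with the same permutation, then split 80/20.
def unison_shuffled_copies_sketch(a, b):
    assert len(a) == len(b)
    p = np.random.permutation(len(a))
    return a[p], b[p]

def train_test_split_sketch(X, Y, train_frac=0.8):
    Xs, Ys = unison_shuffled_copies_sketch(X, Y)
    cut = int(len(Xs) * train_frac)
    return Xs[:cut], Xs[cut:], Ys[:cut], Ys[cut:]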
# Load the bc array for our count in the model definition
print categories
print len(categories)
del sys.modules['bc_models']
# Construct the model using our help function
import bc_models as models
model = models.bc_model_v03(len(categories), verbose=True,
input_shape=(1,X_train.shape[1],X_train.shape[2]))
Explanation: Define and Load Trained Model
End of explanation
loadWeights = False
weightsFileName = "dwdii-bc-v03-150-normVsAbnorm-13970-20170510.hdf5"
if loadWeights:
model.load_weights('weights/' + weightsFileName)
# Reshape to the appropriate shape for the CNN input
testX = X_test.reshape(X_test.shape[0], 1, X_test.shape[1],X_test.shape[2])
trainX = X_train.reshape(X_train.shape[0], 1, X_train.shape[1],X_train.shape[2])
print "Training start: " + str(datetime.datetime.now())
m, losses, acc = models.run_network([trainX, testX, Y_train, Y_test], model, batch=50, epochs=20, verbosity=1)
models.plot_losses(losses, acc)
fileLossesPng = '../../figures/plot_losses-' + weightsFileName + '.png'
plt.savefig(fileLossesPng)
model.save_weights('weights/' + weightsFileName, overwrite=True)
Explanation: Training the Model
The following code segment trains the model using the run_network helper function.
End of explanation
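run_network and plot_losses live in bc_models; the sketch below shows roughly what such a wrapper usually does (fit with validation data and hand back the history). The real helper may record per-epoch losses with a callback and differs in detail:
# Hypothetical sketch of a Keras training wrapper in the spirit of run_network.
def run_network_sketch(data, model, batch=50, epochs=20, verbosity=1):
    X_tr, X_te, y_tr, y_te = data
    history = model.fit(X_tr, y_tr, batch_size=batch, nb_epoch=epochs,  # 'epochs=' in Keras 2
                        validation_data=(X_te, y_te), verbose=verbosity)
    return model, history.history['loss'], history.history['val_acc']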
resultsValAcc = {}
#resultsValAcc["1"] = 0.6800
#resultsValAcc["2"] = 0.7260
#resultsValAcc["3"] = 0.6616
#resultsValAcc["03-27-2017"] = 0.6116
#resultsValAcc["04-02-2017"] = 0.4805
#resultsValAcc["04-03-2017"] = 0.5065
#resultsValAcc["04-05-2017"] = 0.5243
resultsValAcc[924] = 0.5628
resultsValAcc[2737] = 0.6326
resultsValAcc[5474] = 0.6138
import dwdii_test as dwdii
#cmp = matplotlib.colors.Colormap("Blues")
dwdii.barChart(resultsValAcc, filename="../../figures/shallowCnn_thresholded_2class_results_valacc.png", title="Shallow CNN - DDSM Data Thresholded 2 Class Test Accuracy", yAxisLabel="Accuracy %")
Explanation: Experiment Results
Raw DDSM Images
Initial results based on "normal" being masked as "benign":
* bc_model_v0 (150x150, 800/200): 182s - loss: 0.0560 - acc: 0.9813 - val_loss: 1.9918 - val_acc: 0.6800
* bc_model_v0 (150x150, 2000/500): 473s - loss: 0.0288 - acc: 0.9925 - val_loss: 1.4040 - val_acc: 0.7260
* somewhat balanced, Y_train Dist {0: 1223, 1: 777}, Y_test Dist: {0: 321, 1: 179}
Revised with "normal", "benign" and "malignant" labeled seperately:
* bc_model_v0 (150x150, 1311/328): 298s - loss: 0.0411 - acc: 0.9786 - val_loss: 1.3713 - val_acc: 0.6616
After creating fixed "train", "test" and "validate" data sets, using "train" and "test" as well as including the DDSM Benign cases:
* bc_model_v0 (150x150, 1554/363, 03.27.2017): 264s - loss: 0.0512 - acc: 0.9730 - val_loss: 1.3120 - val_acc: 0.6116
* bc_model_v0 (150x150, 2155/539, 04.02.2017): 362s - loss: 0.0600 - acc: 0.9763 - val_loss: 1.5315 - val_acc: 0.4805
bc_model_v01 - categorical_crossentropy
* bc_model_v01 (150x150, 2155/539, 04.03.2017): 361s - loss: 0.0935 - acc: 0.9800 - val_loss: 2.7872 - val_acc: 0.5065
* bc_model_v01 (150x150, 2132/536, 04.05.2017): 369s - loss: 0.0718 - acc: 0.9794 - val_loss: 2.5604 - val_acc: 0.5243
Thresholded Images
Using the "Data_Thresholded" images
* bc_model_v0 (150x150, Thresholded, 661/171, 03.28.2017): 124s - loss: 0.0529 - acc: 0.9743 - val_loss: 1.4331 - val_acc: 0.4971
Simulated Images
Using the "simulated_images" images
* bc_model_v01 (150x150, 7776/536, 04.24.2017): 1250s - loss: 0.5543 - acc: 0.7885 - val_loss: 7.1153 - val_acc: 0.4123
Using the "simulated_images_new" images
* bc_model_v01 (150x150, 7776/536, 04.30.2017): 1242s - loss: 10.1318 - acc: 0.3714 - val_loss: 6.7477 - val_acc: 0.3340
Normal Vs Abnormal
Raw
bc_model_v01 (150x150, 2893/536, 04.25.2017): 496s - loss: 0.0522 - acc: 0.9865 - val_loss: 2.2328 - val_acc: 0.6309
bc_model_v03 (150x150, 11572/726, 05.10.2017, 30 epochs): 2938s - loss: 0.1378 - acc: 0.9468 - val_loss: 1.6165 - val_acc: 0.6061
40 epochs (+10): 2987s - loss: 0.1092 - acc: 0.9615 - val_loss: 2.0735 - val_acc: 0.6157
60 epochs (+20): 3728s - loss: 0.0767 - acc: 0.9766 - val_loss: 2.5044 - val_acc: 0.6074
Data Thresholded
bc_model_v01 (150x150, 924/231,04.26.2017): 154s - loss: 0.0365 - acc: 0.9892 - val_loss: 1.9738 - val_acc: 0.5628
bc_model_v01 (150x150, 2737/694,04.29.2017): 463s - loss: 0.0390 - acc: 0.9898 - val_loss: 2.4042 - val_acc: 0.6326
bc_model_v01 (150x150, 5474/694, 04.30.2017): 1317s - loss: 0.0552 - acc: 0.9845 - val_loss: 2.8678 - val_acc: 0.6138
Benign Vs Malignant
Raw
bc_model_v01 (150x150, 13970/321, 05.05.2017, 50 epochs): 4376s - loss: 0.0163 - acc: 0.9961 - val_loss: 3.1204 - val_acc: 0.6480
bc_model_v02 (1 dropout, 13970/321, 05.06.2017, 30 epochs): 4678s - loss: 0.0222 - acc: 0.9941 - val_loss: 2.9720 - val_acc: 0.6978
bc_model_v03 (2 dropout, 13970/321, 05.07.2017, 30 epochs): 4771s - loss: 0.0419 - acc: 0.9896 - val_loss: 2.4619 - val_acc: 0.6667
35 Epochs (+5): 5833s - loss: 0.0355 - acc: 0.9911 - val_loss: 2.6903 - val_acc: 0.6667
45 Epochs (+10): 4686s - loss: 0.0355 - acc: 0.9908 - val_loss: 2.7548 - val_acc: 0.7165
End of explanation
predictOutput = model.predict(testX, batch_size=32, verbose=1)
predClass = np.array(predictOutput[0]).argmax()
numBC = bc.reverseDict(categories)
numBC[predClass]
numBC[Y_test[0][0]]
predClasses = []
for i in range(len(predictOutput)):
arPred = np.array(predictOutput[i])
predictionProb = arPred.max()
predictionNdx = arPred.argmax()
predClassName = numBC[predictionNdx]
predClasses.append(predictionNdx)
#print "{0}: {1} ({2})".format(i, predClassName, predictionProb)
Explanation: Analyze Predictions with Test Set
End of explanation
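Before the confusion matrix, a quick overall test-accuracy check can be taken straight from predClasses (a small addition using the arrays already in memory):
# Fraction of test predictions that match the true labels.
correct = sum(1 for i in range(len(predClasses)) if predClasses[i] == Y_test[i][0])
print "Test accuracy: %.4f" % (float(correct) / len(predClasses))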
# Use sklearn's helper method to generate the confusion matrix
cnf_matrix = skm.confusion_matrix(Y_test, predClasses)
cnf_matrix
class_names = numBC.values()
print class_names[1:3]
print class_names
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
fileCfMatrix = '../../figures/confusion_matrix-' + weightsFileName + '.png'
plt.figure()
bc.plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization, \n' + weightsFileName)
plt.savefig(fileCfMatrix)
# Load the image we just saved
from IPython.display import Image
Image(filename=fileCfMatrix)
# Plot normalized confusion matrix
fileCfMatrixNorm = '../../figures/confusion_matrix_norm-' + weightsFileName + '.png'
plt.figure()
bc.plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix \n' + weightsFileName)
plt.savefig(fileCfMatrixNorm)
# Load the image we just saved
from IPython.display import Image
Image(filename=fileCfMatrixNorm)
Explanation: Confusion Matrix
End of explanation |
12,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists
<img src="images/bcamp.png" align="center">
Using CUDA, Jupyter, PyCUDA and PyTorch
03 PyCUDA Sigmoid()
Web
Step2: Sigmoid on the GPU
Step3: Plot the Sigmoid function
Step4: Timing Numpy vs. PyCUDA ... | Python Code:
# !pip install pycuda
%reset -f
import pycuda
from pycuda import compiler
import pycuda.driver as cuda
import numpy
import numpy as np
from pycuda.compiler import SourceModule
cuda.init()
print("%d device(s) found." % cuda.Device.count())
for ordinal in range(cuda.Device.count()):
dev = cuda.Device(ordinal)
print "Device #%d: %s" % (ordinal, dev.name())
print cuda
! watch --color -n1.0 gpustat
Explanation: Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists
<img src="images/bcamp.png" align="center">
Using CUDA, Jupyter, PyCUDA and PyTorch
03 PyCUDA Sigmoid()
Web: https://www.meetup.com/Tel-Aviv-Deep-Learning-Bootcamp/events/241762893/
Notebooks: <a href="https://github.com/QuantScientist/Data-Science-PyCUDA-GPU"> On GitHub</a>
Shlomo Kashani
<img src="images/gtx.png" width="35%" align="center">
PyCUDA Imports
End of explanation
import pycuda.autoinit
# a = np.random.uniform(low=1, high=20, size=(10,))
a = numpy.arange(-100000, 100000, 1)
a = a.astype(numpy.float32)
ARR_SIZE = numpy.int32(a.shape[-1])
print ARR_SIZE
a_gpu = cuda.mem_alloc(a.nbytes)
xout_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
xout_gpu=cuda.mem_alloc_like(a)
# size_gpu=cuda.mem_alloc_like(size)
mod = SourceModule("""
__global__ void sigmoid(float* a, float* b, int size)
{
int index = blockDim.x * blockIdx.x + threadIdx.x;
if (index < size)
b[index] = 1.0f / (1.0f + exp(-1.0f * a[index]));
}
""")
func = mod.get_function("sigmoid")
def sigmoidGPU():
func(a_gpu, xout_gpu, ARR_SIZE, block=(1024,1,1), grid=((int(ARR_SIZE)+1023)//1024, 1)) # enough 1024-thread blocks to cover all ARR_SIZE elements
a_sigmoid = numpy.empty_like(a)
cuda.memcpy_dtoh(a_sigmoid, xout_gpu)
return a_sigmoid
# print sigmoidGPU()
from scipy.special import expit
y = expit(a)
# print ("__________________________________")
# print y
Explanation: Sigmoid on the GPU: CUDA kernel definition
End of explanation
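A quick numerical check (an addition, not part of the original notebook) confirms the GPU kernel matches SciPy's expit to within float32 tolerance:
# Elementwise comparison of the GPU sigmoid against scipy.special.expit.
print numpy.allclose(sigmoidGPU(), expit(a), atol=1e-5)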
import matplotlib.pyplot as plt
plt.plot(a,y)
plt.text(4,0.8,r'$\sigma(x)=\frac{1}{1+e^{-x}}$',fontsize=15)
plt.legend(loc='lower right')
plt.show()
Explanation: Plot the Sigmoid function
End of explanation
import timeit
n_iter = ARR_SIZE
rounds = 1000 # for timeit
print 'numpy', timeit.timeit(lambda:
expit(a),
number=rounds)
print 'pycuda', timeit.timeit(lambda:
sigmoidGPU(),
number=rounds)
Explanation: Timing Numpy vs. PyCUDA ...
End of explanation |
12,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
12,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fully developed baroclinic instability of a 3-layer flow
Step1: Set up
Step2: Initial condition
Step3: Run the model
Step4: A snapshot and some diagnostics | Python Code:
import numpy as np
from numpy import pi
from matplotlib import pyplot as plt
%matplotlib inline
import pyqg
Explanation: Fully developed baroclinic instability of a 3-layer flow
End of explanation
L = 1000.e3 # length scale of box [m]
Ld = 15.e3 # deformation scale [m]
kd = 1./Ld # deformation wavenumber [m^-1]
Nx = 64 # number of grid points
H1 = 500. # layer 1 thickness [m]
H2 = 1750. # layer 2
H3 = 1750. # layer 3
U1 = 0.05 # layer 1 zonal velocity [m/s]
U2 = 0.01 # layer 2
U3 = 0.00 # layer 3
rho1 = 1025.
rho2 = 1025.275
rho3 = 1025.640
rek = 1.e-7 # linear bottom drag coeff. [s^-1]
f0 = 0.0001236812857687059 # coriolis param [s^-1]
beta = 1.2130692965249345e-11 # planetary vorticity gradient [m^-1 s^-1]
Ti = Ld/(abs(U1)) # estimate of most unstable e-folding time scale [s]
dt = Ti/500. # time-step [s]
tmax = 300*Ti # simulation time [s]
m = pyqg.LayeredModel(nx=Nx, nz=3, U = [U1,U2,U3],V = [0.,0.,0.],L=L,f=f0,beta=beta,
H = [H1,H2,H3], rho=[rho1,rho2,rho3],rek=rek,
dt=dt,tmax=tmax, twrite=5000, tavestart=Ti*10)
Explanation: Set up
End of explanation
sig = 1.e-7
qi = sig*np.vstack([np.random.randn(m.nx,m.ny)[np.newaxis,],
np.random.randn(m.nx,m.ny)[np.newaxis,],
np.random.randn(m.nx,m.ny)[np.newaxis,]])
m.set_q(qi)
Explanation: Initial condition
End of explanation
m.run()
Explanation: Run the model
End of explanation
plt.figure(figsize=(18,4))
plt.subplot(131)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[0,]+m.Qy[0]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 1 PV')
plt.subplot(132)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[1,]+m.Qy[1]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 2 PV')
plt.subplot(133)
plt.pcolormesh(m.x/m.rd,m.y/m.rd,(m.q[2,]+m.Qy[2]*m.y)/(U1/Ld),cmap='Spectral_r')
plt.xlabel(r'$x/L_d$')
plt.ylabel(r'$y/L_d$')
plt.colorbar()
plt.title('Layer 3 PV')
kespec_1 = m.get_diagnostic('KEspec')[0].sum(axis=0)
kespec_2 = m.get_diagnostic('KEspec')[1].sum(axis=0)
kespec_3 = m.get_diagnostic('KEspec')[2].sum(axis=0)
plt.loglog( m.kk, kespec_1, '.-' )
plt.loglog( m.kk, kespec_2, '.-' )
plt.loglog( m.kk, kespec_3, '.-' )
plt.legend(['layer 1','layer 2', 'layer 3'], loc='lower left')
plt.ylim([1e-9,1e-0]); plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Kinetic Energy Spectrum');
ebud = [ m.get_diagnostic('APEgenspec').sum(axis=0),
m.get_diagnostic('APEflux').sum(axis=0),
m.get_diagnostic('KEflux').sum(axis=0),
-m.rek*(m.Hi[-1]/m.H)*m.get_diagnostic('KEspec')[1].sum(axis=0)*m.M**2 ]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen','APE flux div.','KE flux div.','Diss.','Resid.']
[plt.semilogx(m.kk, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfers');
Explanation: A snapshot and some diagnostics
End of explanation |
12,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Примеры анализа данных аэрокосмической съемки
Дмитрий Колесов ([email protected])
NextGIS
О чем пойдет речь
Что это за такие "Данные аэрокосмической съемки"
Как их обрабатывать
На чем можно споткнуться
Где их брать
Исторический обзор
Появление аэрофотосьемки
Ручное дешифрирование
Многозональная (многоканальная) съемка
Примеры задач
Применяется, в задачах, где нужно
Step1: Не все так плохо, если
создать больше классов;
аккуратнее выбирать обучающие примеры;
выбирать примеры не с одного участка снимка, а разных;
...
<img src="IMG/try.jpg" >
Подводные камни и как с ними бороться
Модель, обученная на одной сцене обычно не годится для анализа другой | Python Code:
import numpy as np
import pandas as pd
points = pd.read_csv('rand.txt')
points.tail()
y = points["class"]
X = points[['r1', 'r2', 'r3', 'r4', 'r5', 'r6', 'r7', 'r8', 'r9', 'r10', 'r11']]
# Split the data into training and test sets:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Standardize the input features:
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
# from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# Fit the model (grid search over the regularization hyperparameters)
pipe_lr = Pipeline([
# ('pca', PCA(n_components=10)),
('clf', LogisticRegression(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range, 'clf__penalty': ['l1', 'l2']}]
gs = GridSearchCV(estimator=pipe_lr, param_grid=param_grid,
scoring='accuracy', cv=10)
gs = gs.fit(X, y)
print(gs.best_score_)
print(gs.best_params_)
clf = gs.best_estimator_
clf.fit(X_train, y_train)
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: Examples of analyzing aerospace survey (remote sensing) data
Dmitry Kolesov ([email protected])
NextGIS
What this talk covers
What "aerospace survey data" actually is
How to process it
Where you can stumble
Where to get it
Historical overview
The advent of aerial photography
Manual (visual) interpretation
Multispectral (multichannel) imaging
Example problems
It is used in problems where:
you need to survey large territories (for example, ecological studies of Siberia);
ground surveys are too expensive or dangerous;
a fast response is needed (for example, wildfires);
...
Application areas:
Oil and gas industry.
Ecology.
Military applications.
Agriculture.
...
A couple of our own tasks
Detecting illegal logging
<img src="IMG/crop0.jpg" >
Example: before logging, after logging. The Russian Far East, a winter logging site. It can be recognized as a lighter patch compared with the surrounding area; besides changing color, the patch also looks "rougher" in texture.
Detecting marijuana plantations in California
<img src="IMG/KD2S.png">
Yes, practically right on an airfield
A bit of theory
Solar radiation and the sensor
<img src="IMG/Dos_img_1.jpg">
Multichannel images
<img src="IMG/Reflectance.png" >
Feature space
<img src="IMG/feature_space.png" >
Example 1: Landsat-8, Kazan
Spectral band | Wavelengths | Resolution (pixel size)
----------------------|---------|---
Band 1 — Coastal / Aerosol (New Deep Blue) | 0.433 — 0.453 µm | 30 m
Band 2 — Blue | 0.450 — 0.515 µm | 30 m
Band 3 — Green | 0.525 — 0.600 µm | 30 m
Band 4 — Red | 0.630 — 0.680 µm | 30 m
Band 5 — Near Infrared (NIR) | 0.845 — 0.885 µm | 30 m
Band 6 — Shortwave Infrared (SWIR 2) | 1.560 — 1.660 µm | 30 m
Band 7 — Shortwave Infrared (SWIR 3) | 2.100 — 2.300 µm | 30 m
Band 8 — Panchromatic (PAN) | 0.500 — 0.680 µm | 15 m
Band 9 — Cirrus (SWIR) | 1.360 — 1.390 µm | 30 m
Band 10 — Thermal Infrared (TIR1) | 10.30 — 11.30 µm | 100 m
Band 11 — Thermal Infrared (TIR2) | 11.50 — 12.50 µm | 100 m
End of explanation
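As a small aside on how individual spectral bands get combined into derived features, the sketch below computes NDVI from the red and near-infrared channels of the sample table. It assumes the r4/r5 columns of points follow the Landsat-8 band order listed above (r4 = red, r5 = NIR); that mapping is an assumption made for illustration, not something stated for rand.txt.
# NDVI = (NIR - Red) / (NIR + Red); assumes r4 = red and r5 = NIR (Landsat-8 order).
ndvi = (points['r5'] - points['r4']) / (points['r5'] + points['r4'] + 1e-9)
print(points.assign(ndvi=ndvi)[['class', 'ndvi']].groupby('class').mean())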
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
x = np.array(range(72))
y = 10 + 0.5*x + 10*np.sin(x)
plt.plot(x,y)
plot_acf(y);
Explanation: Things are not so bad if you:
create more classes;
select training samples more carefully;
pick samples from several different parts of the scene rather than one;
...
<img src="IMG/try.jpg" >
Pitfalls and how to deal with them
A model trained on one scene is usually not suitable for analyzing another:
The atmospheric conditions are different.
The land surface has changed (the vegetation is already different).
A model that works well on one part of a scene does not work on another:
The atmospheric conditions are different.
The relief is different.
Atmospheric conditions. Atmospheric correction
<img src="IMG/Dos_img_1.jpg">
The influence of relief. Topographic correction
<img src="IMG/reflection.png" >
Spatial autocorrelation and model training
End of explanation |
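The autocorrelation plot above is exactly why a purely random train/test split can be over-optimistic for image pixels: nearby samples are not independent. A minimal sketch of one way to account for this is grouped cross-validation, which keeps whole spatial blocks together in either the training or the validation fold. The block labels here are hypothetical (rand.txt carries no coordinates), so treat this only as an illustration of the idea, not part of the original workflow.
from sklearn.model_selection import GroupKFold, cross_val_score
# Hypothetical block labels; with real data they would come from tiling the image coordinates.
groups = points.index // 50
block_cv = GroupKFold(n_splits=5)
block_scores = cross_val_score(clf, X, y, groups=groups, cv=block_cv, scoring='accuracy')
print('Block-wise CV accuracy: %.3f +/- %.3f' % (block_scores.mean(), block_scores.std()))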
12,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nested Statements and Scope
Now that we have gone over on writing our own functions, its important to understand how Python deals with the variable names you assign. When you create a variable name in Python the name is stored in a name-space. Variable names also have a scope, the scope determines the visibility of that variable name to other parts of your code.
Lets start with a quick thought experiment, imagine the following code
Step1: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
Step2: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Lets break down the rules
Step3: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
Step4: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
Step5: Built-in
These are the built-in function names in Python (don't overwrite these!)
Step6: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example
Step7: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example | Python Code:
x = 25
def printer():
x = 50
return x
print x
print printer()
Explanation: Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python, the name is stored in a namespace. Variable names also have a scope; the scope determines the visibility of that variable name to other parts of your code.
Let's start with a quick thought experiment; imagine the following code:
End of explanation
print x
print printer()
Explanation: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
End of explanation
# x is local here:
f = lambda x:x**2
Explanation: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Let's break down the rules:
This idea of scope in your code is very important to understand in order to properly assign and call variable names.
In simple terms, the idea of scope can be described by 3 general rules:
Name assignments will create or change local names by default.
Name references search (at most) four scopes, these are:
local
enclosing functions
global
built-in
Names declared in global and nonlocal statements map assigned names to enclosing module and function scopes.
The statement in #2 above can be defined by the LEGB rule.
LEGB Rule.
L: Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.
E: Enclosing function locals — Name in the local scope of any and all enclosing functions (def or lambda), from inner to outer.
G: Global (module) — Names assigned at the top-level of a module file, or declared global in a def within the file.
B: Built-in (Python) — Names preassigned in the built-in names module : open,range,SyntaxError,...
Quick examples of LEGB
Local
End of explanation
name = 'This is a global name'
def greet():
# Enclosing function
name = 'Sammy'
def hello():
print 'Hello '+name
hello()
greet()
Explanation: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
End of explanation
print name
Explanation: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
End of explanation
len
Explanation: Built-in
These are the built-in function names in Python (don't overwrite these!)
End of explanation
x = 50
def func(x):
print 'x is', x
x = 2
print 'Changed local x to', x
func(x)
print 'x is still', x
Explanation: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example:
End of explanation
x = 50
def func():
global x
print 'This function is now using the global x!'
print 'Because of global x is: ', x
x = 2
print 'Ran func(), changed global x to', x
print 'Before calling func(), x is: ', x
func()
print 'Value of x (outside of func()) is: ', x
Explanation: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example:
End of explanation |
12,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 4
This project is for deep MNIST for experts.
Step1: Build a Multilayer Convolutional Network
This section will help to build more complex model thant the previous linear model(with softmax classifier).
Weight Initialization
To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid "dead neurons". Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us.
Step2: Convolution and Pooling
TensorFlow also gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions uses a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's also abstract those operations into functions.
Step3: First Convolutional Layer
We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel.
Step4: To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to
image width and height, and the final dimension corresponding to the number of color channels.
Step5: We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
Step6: Second Convolutional Layer
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.
Step7: Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
Step8: Dropout
To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling.
Step9: Readout Layer
Finally, we add a layer, just like for the one layer softmax regression above.
Step10: Train and Evaluate the Model
How well does this model do? To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above.
The differences are that | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets('MNIST_data', one_hot = True)
################## build a softmax regression model
# input data
x = tf.placeholder(tf.float32, shape = [None, 784])
# real labels
y_ = tf.placeholder(tf.float32, shape = [None, 10])
# variables(or weights)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# training strategy (loss function, what type of gradient descent, etc.)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# start session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
# evaluate the model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Explanation: Example 4
This project is for deep MNIST for experts.
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev = 0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Build a Multilayer Convolutional Network
This section will help to build a more complex model than the previous linear model (with softmax classifier).
Weight Initialization
To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid "dead neurons". Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us.
End of explanation
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding = 'SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
Explanation: Convolution and Pooling
TensorFlow also gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's also abstract those operations into functions.
End of explanation
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
Explanation: First Convolutional Layer
We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel.
End of explanation
x_image = tf.reshape(x, [-1, 28, 28, 1])
Explanation: To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to
image width and height, and the final dimension corresponding to the number of color channels.
End of explanation
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
End of explanation
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second Convolutional Layer
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.
End of explanation
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
Explanation: Densely Connected Layer
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
End of explanation
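As a quick sanity check of the 7*7*64 figure above (an added aside, not part of the original tutorial): with 'SAME' padding, each 2x2 max pool with stride 2 maps an input of size n to ceil(n/2), so two pooling layers take 28 down to 14 and then to 7.
# ceil(n / 2) applied once per pooling layer
def pooled_size(n, num_pools):
    for _ in range(num_pools):
        n = (n + 1) // 2
    return n
print(pooled_size(28, 1))  # 14 after the first pooling layer
print(pooled_size(28, 2))  # 7 after the second pooling layer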
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
Explanation: Dropout
To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling.
End of explanation
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
Explanation: Readout Layer
Finally, we add a layer, just like for the one layer softmax regression above.
End of explanation
sess1 = tf.InteractiveSession()  # InteractiveSession installs itself as the default session, so the bare .eval()/.run() calls below work
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess1.run(tf.global_variables_initializer())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"% accuracy.eval(session=sess1, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Explanation: Train and Evaluate the Model
How well does this model do? To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above.
The differences are that:
We will replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer.
We will include the additional parameter keep_prob in feed_dict to control the dropout rate.
We will add logging to every 100th iteration in the training process.
Feel free to go ahead and run this code, but it does 20,000 training iterations and may take a while (possibly up to half an hour), depending on your processor.
End of explanation |
12,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
17
Step1: For this example, we're going to re-create model 17 from the
Self Instructing Manual. (pp. 128)
Step2: We will use the usual choice and availability variables.
Step3: The "totcost/hhinc" data is computed once as a new variable when loading the model data.
The same applies for tottime filtered by motorized modes (we harness the convenient fact
that all the motorized modes have identifying numbers 4 or less), and "ovtt/dist".
Step4: Since the model we want to create groups together DA, SR2 and SR3+ jointly as
reference alternatives with respect to income, we can simply omit all of these alternatives
from the block that applies to hhinc.
For vehicles per worker, the preferred model includes a joint parameter on SR2 and SR3+,
but not including DA and not fixed at zero. Here we might use a shadow_parameter (also
called an alias in some places), which allows
us to specify one or more parameters that are simply a fixed proportion of another parameter.
For example, we can say that vehbywrk_SR2 will be equal to vehbywrk_SR.
Step5: We didn't explicitly define our parameters first, which is fine; Larch will
find them in the utility functions (or elsewhere in more complex models).
But they may be found in a weird order that is hard to read in reports.
We can define an ordering scheme by assigning to the parameter_groups attribute,
like this
Step6: Each item in parameter_ordering is a tuple, with a label and one or more regular expressions,
which will be compared against
all the parameter names. Any names that match will be pulled out and put into the
reporting order sequentially. Thus if a parameter name would match more than one
regex, it will appear in the ordering only for the first match.
Having created this model, we can then estimate it | Python Code:
# TEST
import larch.numba as lx
import larch
import pandas as pd
pd.set_option("display.max_columns", 999)
pd.set_option('expand_frame_repr', False)
pd.set_option('display.precision', 3)
larch._doctest_mode_ = True
Explanation: 17: MTC Expanded MNL Mode Choice
End of explanation
import larch.numba as lx
d = lx.examples.MTC(format='dataset')
m = lx.Model(d)
Explanation: For this example, we're going to re-create model 17 from the
Self Instructing Manual. (pp. 128)
End of explanation
m.availability_var = 'avail'
m.choice_ca_var = 'chose'
from larch.roles import P, X
m.utility_ca = (
+ X("totcost/hhinc") * P("costbyincome")
+ X("tottime * (altnum <= 4)") * P("motorized_time")
+ X("tottime * (altnum >= 5)") * P("nonmotorized_time")
+ X("ovtt/dist * (altnum <= 4)") * P("motorized_ovtbydist")
)
Explanation: We will use the usual choice and availability variables.
End of explanation
for a in [4,5,6]:
m.utility_co[a] += X("hhinc") * P("hhinc#{}".format(a))
Explanation: The "totcost/hhinc" data is computed once as a new variable when loading the model data.
The same applies for tottime filtered by motorized modes (we harness the convenient fact
that all the motorized modes have identifying numbers 4 or less), and "ovtt/dist".
End of explanation
for i in d['alt_names'][1:3]:
name = str(i.values)
a = int(i.altid)
m.utility_co[a] += (
+ X("vehbywrk") * P("vehbywrk_SR")
+ X("wkccbd+wknccbd") * P("wkcbd_"+name)
+ X("wkempden") * P("wkempden_"+name)
+ P("ASC_"+name)
)
for i in d['alt_names'][3:]:
name = str(i.values)
a = int(i.altid)
m.utility_co[a] += (
+ X("vehbywrk") * P("vehbywrk_"+name)
+ X("wkccbd+wknccbd") * P("wkcbd_"+name)
+ X("wkempden") * P("wkempden_"+name)
+ P("ASC_"+name)
)
Explanation: Since the model we want to create groups together DA, SR2 and SR3+ jointly as
reference alternatives with respect to income, we can simply omit all of these alternatives
from the block that applies to hhinc.
For vehicles per worker, the preferred model includes a joint parameter on SR2 and SR3+,
but not including DA and not fixed at zero. Here we might use a shadow_parameter (also
called an alias in some places), which allows
us to specify one or more parameters that are simply a fixed proportion of another parameter.
For example, we can say that vehbywrk_SR2 will be equal to vehbywrk_SR.
End of explanation
m.ordering = (
('LOS', ".*cost.*", ".*time.*", ".*dist.*",),
('Zonal', "wkcbd.*", "wkempden.*",),
('Household', "hhinc.*", "vehbywrk.*",),
('ASCs', "ASC.*",),
)
Explanation: We didn't explicitly define our parameters first, which is fine; Larch will
find them in the utility functions (or elsewhere in more complex models).
But they may be found in a weird order that is hard to read in reports.
We can define an ordering scheme by assigning to the parameter_groups attribute,
like this:
End of explanation
m.maximize_loglike()
# TEST
r = _
from pytest import approx
assert r.loglike == approx(-3444.185105027836)
assert r.n_cases == 5029
assert 'success' in r.message.lower()
assert r.x.to_dict() == approx({
'ASC_Bike': -1.6288174781480145,
'ASC_SR2': -1.8077821796310174,
'ASC_SR3+': -3.4336998987834213,
'ASC_Transit': -0.6850205869302504,
'ASC_Walk': 0.06826615821030824,
'costbyincome': -0.05239236004239274,
'hhinc#4': -0.0053231144110710265,
'hhinc#5': -0.008643179890815506,
'hhinc#6': -0.005997795266774085,
'motorized_ovtbydist': -0.1328389672470942,
'motorized_time': -0.02018676908268187,
'nonmotorized_time': -0.04544467417768392,
'vehbywrk_Bike': -0.7021221804213855,
'vehbywrk_SR': -0.31664078667048384,
'vehbywrk_Transit': -0.9462364952409247,
'vehbywrk_Walk': -0.7218049107571212,
'wkcbd_Bike': 0.48936706067828845,
'wkcbd_SR2': 0.25986035009653136,
'wkcbd_SR3+': 1.069304378606234,
'wkcbd_Transit': 1.308896887615559,
'wkcbd_Walk': 0.10177663194876692,
'wkempden_Bike': 0.0019282498545339284,
'wkempden_SR2': 0.0015778182187284415,
'wkempden_SR3+': 0.002257039208670294,
'wkempden_Transit': 0.003132740135033535,
'wkempden_Walk': 0.0028906014986955593,
})
m.calculate_parameter_covariance()
m.parameter_summary()
# TEST
assert m.pf.t_stat.to_dict() == approx({
'ASC_Bike': -3.8110051632761968,
'ASC_SR2': -17.03471916394958,
'ASC_SR3+': -22.610264384635116,
'ASC_Transit': -2.764269785206984,
'ASC_Walk': 0.19617043561070976,
'costbyincome': -5.0360570040949515,
'hhinc#4': -2.6923847354101915,
'hhinc#5': -1.676857732750138,
'hhinc#6': -1.9049215648409885,
'motorized_ovtbydist': -6.763234843764025,
'motorized_time': -5.291965825624687,
'nonmotorized_time': -7.878190061966541,
'vehbywrk_Bike': -2.7183965402594508,
'vehbywrk_SR': -4.751992210976383,
'vehbywrk_Transit': -7.999145737275119,
'vehbywrk_Walk': -4.261234830020787,
'wkcbd_Bike': 1.3552321494507682,
'wkcbd_SR2': 2.1066605695091867,
'wkcbd_SR3+': 5.590372196382326,
'wkcbd_Transit': 7.899400934474615,
'wkcbd_Walk': 0.40370690248331875,
'wkempden_Bike': 1.5864614051558108,
'wkempden_SR2': 4.042074989321517,
'wkempden_SR3+': 4.993778175062689,
'wkempden_Transit': 8.684498489531592,
'wkempden_Walk': 3.8952326996888065,
})
assert m.pf.robust_t_stat.to_dict() == approx({
'ASC_Bike': -3.350788895379893,
'ASC_SR2': -15.450849978191432,
'ASC_SR3+': -22.047875467016553,
'ASC_Transit': -2.546641253284614,
'ASC_Walk': 0.19546387137430002,
'costbyincome': -3.927312777634008,
'hhinc#4': -2.6000468880002883,
'hhinc#5': -1.448502844590286,
'hhinc#6': -1.7478834622063846,
'motorized_ovtbydist': -5.512721233692836,
'motorized_time': -5.1781560789822985,
'nonmotorized_time': -7.890366874224642,
'vehbywrk_Bike': -2.26956809717166,
'vehbywrk_SR': -4.1884543094363345,
'vehbywrk_Transit': -6.907359588761182,
'vehbywrk_Walk': -3.552049105845569,
'wkcbd_Bike': 1.3353508709464412,
'wkcbd_SR2': 2.1061572488997933,
'wkcbd_SR3+': 5.629597757231176,
'wkcbd_Transit': 8.258699769521979,
'wkcbd_Walk': 0.3932045643537346,
'wkempden_Bike': 1.640126229774069,
'wkempden_SR2': 3.8222350454916496,
'wkempden_SR3+': 4.974652568010134,
'wkempden_Transit': 8.178299823852544,
'wkempden_Walk': 4.06724937563278,
})
# TEST
# model also works for IDCE
df = pd.read_csv(lx.example_file("MTCwork.csv.gz"), index_col=['casenum','altnum'])
df.index = df.index.rename('altid', level=1)
df['altnum'] = df.index.get_level_values(1)
m.datatree = lx.Dataset.construct.from_idce(df)
m.availability_var = '1'
assert m.loglike() == approx(-3444.185105027836)
assert m.n_cases == 5029
assert 'ca' not in m.dataset
assert m.dataset['ce_data'].shape == (22033,4)
Explanation: Each item in parameter_ordering is a tuple, with a label and one or more regular expressions,
which will be compared against
all the parameter names. Any names that match will be pulled out and put into the
reporting order sequentially. Thus if a parameter name would match more than one
regex, it will appear in the ordering only for the first match.
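As a plain-Python illustration of that first-match rule (this mirrors the ordering defined above but is not Larch's internal implementation):
import re
ordering = [('LOS', [".*cost.*", ".*time.*", ".*dist.*"]),
            ('Household', ["hhinc.*", "vehbywrk.*"])]
def first_matching_group(param_name):
    # return the label of the first group containing a regex that matches the name
    for label, patterns in ordering:
        if any(re.fullmatch(p, param_name) for p in patterns):
            return label
    return None
print(first_matching_group("motorized_time"))    # -> 'LOS'
print(first_matching_group("vehbywrk_Transit"))  # -> 'Household'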
Having created this model, we can then estimate it:
End of explanation |
12,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_artifacts_reject
Step1: .. _marking_bad_channels
Step2: Why setting a channel bad?
Step3: Let's now interpolate the bad channels (displayed in red above)
Step4: Let's plot the cleaned data
Step5: .. note
Step6: As the data is epoched, all the epochs overlapping with segments whose
description starts with 'bad' are rejected by default. To turn rejection off,
use keyword argument reject_by_annotation=False when constructing
Step7: .. note
Step8: We then drop/reject the bad epochs
Step9: And plot the so-called drop log that details the reason for which some
epochs have been dropped. | Python Code:
import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
Explanation: .. _tut_artifacts_reject:
Rejecting bad data (channels and segments)
End of explanation
raw.info['bads'] = ['MEG 2443']
Explanation: .. _marking_bad_channels:
Marking bad channels
Sometimes some MEG or EEG channels are not functioning properly
for various reasons. These channels should be excluded from
analysis by marking them bad as. This is done by setting the 'bads'
in the measurement info of a data container object (e.g. Raw, Epochs,
Evoked). The info['bads'] value is a Python string. Here is
example:
End of explanation
# Reading data with a bad channel marked as bad:
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# restrict the evoked to EEG and MEG channels
evoked.pick_types(meg=True, eeg=True, exclude=[])
# plot with bads
evoked.plot(exclude=[])
print(evoked.info['bads'])
Explanation: Why setting a channel bad?: If a channel does not show
a signal at all (flat) it is important to exclude it from the
analysis. If a channel as a noise level significantly higher than the
other channels it should be marked as bad. Presence of bad channels
can have terribe consequences on down stream analysis. For a flat channel
some noise estimate will be unrealistically low and
thus the current estimate calculations will give a strong weight
to the zero signal on the flat channels and will essentially vanish.
Noisy channels can also affect others when signal-space projections
or EEG average electrode reference is employed. Noisy bad channels can
also adversely affect averaging and noise-covariance matrix estimation by
causing unnecessary rejections of epochs.
Recommended ways to identify bad channels are:
Observe the quality of data during data
acquisition and make notes of observed malfunctioning channels to
your measurement protocol sheet.
View the on-line averages and check the condition of the channels.
Compute preliminary off-line averages with artifact rejection,
SSP/ICA, and EEG average electrode reference computation
off and check the condition of the channels.
View raw data with :func:mne.io.Raw.plot without SSP/ICA
enabled and identify bad channels.
.. note::
Setting the bad channels should be done as early as possible in the
analysis pipeline. That's why it's recommended to set bad channels
the raw objects/files. If present in the raw data
files, the bad channel selections will be automatically transferred
to averaged files, noise-covariance matrices, forward solution
files, and inverse operator decompositions.
The actual removal happens using :func:pick_types <mne.pick_types> with
exclude='bads' option (see :ref:picking_channels).
Instead of removing the bad channels, you can also try to repair them.
This is done by interpolation of the data from other channels.
To illustrate how to use channel interpolation let us load some data.
End of explanation
evoked.interpolate_bads(reset_bads=False)
Explanation: Let's now interpolate the bad channels (displayed in red above)
End of explanation
evoked.plot(exclude=[])
Explanation: Let's plot the cleaned data
End of explanation
eog_events = mne.preprocessing.find_eog_events(raw)
n_blinks = len(eog_events)
# Center to cover the whole blink with full duration of 0.5s:
onset = eog_events[:, 0] / raw.info['sfreq'] - 0.25
duration = np.repeat(0.5, n_blinks)
raw.annotations = mne.Annotations(onset, duration, ['bad blink'] * n_blinks)
raw.plot(events=eog_events) # To see the annotated segments.
Explanation: .. note::
Interpolation is a linear operation that can be performed also on
Raw and Epochs objects.
For more details on interpolation see the page :ref:channel_interpolation.
.. _marking_bad_segments:
Marking bad raw segments with annotations
MNE provides an :class:mne.Annotations class that can be used to mark
segments of raw data and to reject epochs that overlap with bad segments
of data. The annotations are automatically synchronized with raw data as
long as the timestamps of raw data and annotations are in sync.
See :ref:sphx_glr_auto_tutorials_plot_brainstorm_auditory.py
for a long example exploiting the annotations for artifact removal.
The instances of annotations are created by providing a list of onsets and
offsets with descriptions for each segment. The onsets and offsets are marked
as seconds. onset refers to time from start of the data. offset is
the duration of the annotation. The instance of :class:mne.Annotations
can be added as an attribute of :class:mne.io.Raw.
End of explanation
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
Explanation: As the data is epoched, all the epochs overlapping with segments whose
description starts with 'bad' are rejected by default. To turn rejection off,
use keyword argument reject_by_annotation=False when constructing
:class:mne.Epochs. When working with neuromag data, the first_samp
offset of raw acquisition is also taken into account the same way as with
event lists. For more see :class:mne.Epochs and :class:mne.Annotations.
.. _rejecting_bad_epochs:
Rejecting bad epochs
When working with segmented data (Epochs) MNE offers a quite simple approach
to automatically reject/ignore bad epochs. This is done by defining
thresholds for peak-to-peak amplitude and flat signal detection.
In the following code we build Epochs from Raw object. One of the provided
parameter is named reject. It is a dictionary where every key is a
channel type as a sring and the corresponding values are peak-to-peak
rejection parameters (amplitude ranges as floats). Below we define
the peak-to-peak rejection values for gradiometers,
magnetometers and EOG:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {"auditory/left": 1}
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
baseline = (None, 0) # means from the first instant to t = 0
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks_meg, baseline=baseline, reject=reject,
reject_by_annotation=True)
Explanation: .. note::
The rejection values can be highly data dependent. You should be careful
when adjusting these values. Make sure not too many epochs are rejected
and look into the cause of the rejections. Maybe it's just a matter
of marking a single channel as bad and you'll be able to save a lot
of data.
We then construct the epochs
End of explanation
epochs.drop_bad()
Explanation: We then drop/reject the bad epochs
End of explanation
print(epochs.drop_log[40:45]) # only a subset
epochs.plot_drop_log()
Explanation: And plot the so-called drop log that details the reason for which some
epochs have been dropped.
End of explanation |
12,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HMM with Poisson observations for detecting changepoints in the rate of a signal
This notebook is based on the
Multiple Changepoint Detection and Bayesian Model Selection Notebook of TensorFlow
Step1: Data
The synthetic data corresponds to a single time series of counts, where the rate of the underlying generative process changes at certain points in time.
Step2: Model with fixed $K$
To model the changing Poisson rate, we use an HMM. We initially assume the number of states is known to be $K=4$. Later we will try comparing HMMs with different $K$.
We fix the initial state distribution to be uniform, and fix the transition matrix to be the following, where we set $p=0.05$
Step4: Now we create an HMM where the observation distribution is a Poisson with learnable parameters. We specify the parameters in log space and initialize them to random values around the log of the overall mean count (to set the scal
Step5: Model fitting using Gradient Descent
We compute a MAP estimate of the Poisson rates $\lambda$ using batch gradient descent, using the Adam optimizer applied to the log likelihood (from the HMM) plus the log prior for $p(\lambda)$.
Step6: We see that the method learned a good approximation to the true (generating) parameters, up to a permutation of the states (since the labels are unidentifiable). However, results can vary with different random seeds. We may find that the rates are the same for some states, which means those states are being treated as identical, and are therefore redundant.
Plotting the posterior over states
Step7: Model with unknown $K$
In general we don't know the true number of states. One way to select the 'best' model is to compute the one with the maximum marginal likelihood. Rather than summing over both discrete latent states and integrating over the unknown parameters $\lambda$, we just maximize over the parameters (empirical Bayes approximation).
$$p(x_{1
Step8: Model fitting with gradient descent
Step9: Plot marginal likelihood of each model
Step10: Plot posteriors | Python Code:
from IPython.utils import io
with io.capture_output() as captured:
!pip install -qq distrax
!pip install -qq flax
import logging
logging.getLogger("absl").setLevel(logging.CRITICAL)
import numpy as np
import jax
from jax.random import split, PRNGKey
import jax.numpy as jnp
from jax import jit, lax, vmap
from jax.experimental import optimizers
try:
import tensorflow_probability as tfp
except ModuleNotFoundError:
%pip install -qq tensorflow-probability
import tensorflow_probability as tfp
from matplotlib import pylab as plt
%matplotlib inline
import scipy.stats
try:
import distrax
except ModuleNotFoundError:
%pip install -qq distrax
import distrax
from distrax import HMM
Explanation: HMM with Poisson observations for detecting changepoints in the rate of a signal
This notebook is based on the
Multiple Changepoint Detection and Bayesian Model Selection Notebook of TensorFlow
End of explanation
true_rates = [40, 3, 20, 50]
true_durations = [10, 20, 5, 35]
random_state = 0
observed_counts = jnp.concatenate(
[
scipy.stats.poisson(rate).rvs(num_steps, random_state=random_state)
for (rate, num_steps) in zip(true_rates, true_durations)
]
).astype(jnp.float32)
plt.plot(observed_counts);
Explanation: Data
The synthetic data corresponds to a single time series of counts, where the rate of the underlying generative process changes at certain points in time.
End of explanation
def build_latent_state(num_states, max_num_states, daily_change_prob):
# Give probability 0 to states outside of the current model.
def prob(s):
return jnp.where(s < num_states + 1, 1 / num_states, 0.0)
states = jnp.arange(1, max_num_states + 1)
initial_state_probs = vmap(prob)(states)
# Build a transition matrix that transitions only within the current
# `num_states` states.
def transition_prob(i, s):
return jnp.where(
(s <= num_states) & (i <= num_states) & (1 < num_states),
jnp.where(s == i, 1 - daily_change_prob, daily_change_prob / (num_states - 1)),
jnp.where(s == i, 1, 0),
)
transition_probs = vmap(transition_prob, in_axes=(None, 0))(states, states)
return initial_state_probs, transition_probs
num_states = 4
daily_change_prob = 0.05
initial_state_probs, transition_probs = build_latent_state(num_states, num_states, daily_change_prob)
print("Initial state probs:\n{}".format(initial_state_probs))
print("Transition matrix:\n{}".format(transition_probs))
Explanation: Model with fixed $K$
To model the changing Poisson rate, we use an HMM. We initially assume the number of states is known to be $K=4$. Later we will try comparing HMMs with different $K$.
We fix the initial state distribution to be uniform, and fix the transition matrix to be the following, where we set $p=0.05$:
$$ \begin{align} z_1 &\sim \text{Categorical}\left(\left{\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}\right}\right)\ z_t | z_{t-1} &\sim \text{Categorical}\left(\left{\begin{array}{cc}p & \text{if } z_t = z_{t-1} \ \frac{1-p}{4-1} & \text{otherwise}\end{array}\right}\right) \end{align}$$
End of explanation
def make_hmm(log_rates, transition_probs, initial_state_probs):
Make a Hidden Markov Model with Poisson observation distribution.
return HMM(
obs_dist=tfp.substrates.jax.distributions.Poisson(log_rate=log_rates),
trans_dist=distrax.Categorical(probs=transition_probs),
init_dist=distrax.Categorical(probs=initial_state_probs),
)
rng_key = PRNGKey(0)
rng_key, rng_normal, rng_poisson = split(rng_key, 3)
# Define variable to represent the unknown log rates.
trainable_log_rates = jnp.log(jnp.mean(observed_counts)) + jax.random.normal(rng_normal, (num_states,))
hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
Explanation: Now we create an HMM where the observation distribution is a Poisson with learnable parameters. We specify the parameters in log space and initialize them to random values around the log of the overall mean count (to set the scal
End of explanation
def loss_fn(trainable_log_rates, transition_probs, initial_state_probs):
cur_hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
return -(jnp.sum(rate_prior.log_prob(jnp.exp(trainable_log_rates))) + cur_hmm.forward(observed_counts)[0])
def update(i, opt_state, transition_probs, initial_state_probs):
params = get_params(opt_state)
loss, grads = jax.value_and_grad(loss_fn)(params, transition_probs, initial_state_probs)
return opt_update(i, grads, opt_state), loss
def fit(trainable_log_rates, transition_probs, initial_state_probs, n_steps):
opt_state = opt_init(trainable_log_rates)
def train_step(opt_state, step):
opt_state, loss = update(step, opt_state, transition_probs, initial_state_probs)
return opt_state, loss
steps = jnp.arange(n_steps)
opt_state, losses = lax.scan(train_step, opt_state, steps)
return get_params(opt_state), losses
rate_prior = distrax.LogStddevNormal(5, 5)
opt_init, opt_update, get_params = optimizers.adam(1e-1)
n_steps = 201
params, losses = fit(trainable_log_rates, transition_probs, initial_state_probs, n_steps)
rates = jnp.exp(params)
hmm = make_hmm(params, transition_probs, initial_state_probs)
print("Inferred rates: {}".format(rates))
print("True rates: {}".format(true_rates))
plt.plot(losses)
plt.ylabel("Negative log marginal likelihood");
Explanation: Model fitting using Gradient Descent
We compute a MAP estimate of the Poisson rates $\lambda$ using batch gradient descent, using the Adam optimizer applied to the log likelihood (from the HMM) plus the log prior for $p(\lambda)$.
End of explanation
_, _, posterior_probs, _ = hmm.forward_backward(observed_counts)
def plot_state_posterior(ax, state_posterior_probs, title):
ln1 = ax.plot(state_posterior_probs, c="tab:blue", lw=3, label="p(state | counts)")
ax.set_ylim(0.0, 1.1)
ax.set_ylabel("posterior probability")
ax2 = ax.twinx()
ln2 = ax2.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax2.set_title(title)
ax2.set_xlabel("time")
lns = ln1 + ln2
labs = [l.get_label() for l in lns]
ax.legend(lns, labs, loc=4)
ax.grid(True, color="white")
ax2.grid(False)
fig = plt.figure(figsize=(10, 10))
plot_state_posterior(fig.add_subplot(2, 2, 1), posterior_probs[:, 0], title="state 0 (rate {:.2f})".format(rates[0]))
plot_state_posterior(fig.add_subplot(2, 2, 2), posterior_probs[:, 1], title="state 1 (rate {:.2f})".format(rates[1]))
plot_state_posterior(fig.add_subplot(2, 2, 3), posterior_probs[:, 2], title="state 2 (rate {:.2f})".format(rates[2]))
plot_state_posterior(fig.add_subplot(2, 2, 4), posterior_probs[:, 3], title="state 3 (rate {:.2f})".format(rates[3]))
plt.tight_layout()
print(rates)
# max marginals
most_probable_states = jnp.argmax(posterior_probs, axis=-1)
most_probable_rates = rates[most_probable_states]
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(1, 1, 1)
ax.plot(most_probable_rates, c="tab:green", lw=3, label="inferred rate")
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("Inferred latent rate over time")
ax.legend(loc=4);
# max probaility trajectory (Viterbi)
most_probable_states = hmm.viterbi(observed_counts)
most_probable_rates = rates[most_probable_states]
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(1, 1, 1)
color_list = np.array(["tab:red", "tab:green", "tab:blue", "k"])
colors = color_list[most_probable_states]
for i in range(len(colors)):
ax.plot(i, most_probable_rates[i], "-o", c=colors[i], lw=3, alpha=0.75)
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("Inferred latent rate over time");
Explanation: We see that the method learned a good approximation to the true (generating) parameters, up to a permutation of the states (since the labels are unidentifiable). However, results can vary with different random seeds. We may find that the rates are the same for some states, which means those states are being treated as identical, and are therefore redundant.
Plotting the posterior over states
End of explanation
max_num_states = 6
states = jnp.arange(1, max_num_states + 1)
# For each candidate model, build the initial state prior and transition matrix.
batch_initial_state_probs, batch_transition_probs = vmap(build_latent_state, in_axes=(0, None, None))(
states, max_num_states, daily_change_prob
)
print("Shape of initial_state_probs: {}".format(batch_initial_state_probs.shape))
print("Shape of transition probs: {}".format(batch_transition_probs.shape))
print("Example initial state probs for num_states==3:\n{}".format(batch_initial_state_probs[2, :]))
print("Example transition_probs for num_states==3:\n{}".format(batch_transition_probs[2, :, :]))
rng_key, rng_normal = split(rng_key)
# Define variable to represent the unknown log rates.
trainable_log_rates = jnp.log(jnp.mean(observed_counts)) + jax.random.normal(rng_normal, (max_num_states,))
Explanation: Model with unknown $K$
In general we don't know the true number of states. One way to select the 'best' model is to compute the one with the maximum marginal likelihood. Rather than summing over both discrete latent states and integrating over the unknown parameters $\lambda$, we just maximize over the parameters (empirical Bayes approximation).
$$p(x_{1:T}|K) \approx \max_\lambda \int p(x_{1:T}, z_{1:T} | \lambda, K) dz$$
We can do this by fitting a bank of separate HMMs in parallel, one for each value of $K$. We need to make them all the same size so we can batch them efficiently. To do this, we pad the transition matrices (and other paraemeter vectors) so they all have the same shape, and then use masking.
End of explanation
n_steps = 201
params, losses = vmap(fit, in_axes=(None, 0, 0, None))(
trainable_log_rates, batch_transition_probs, batch_initial_state_probs, n_steps
)
rates = jnp.exp(params)
plt.plot(losses.T)
plt.ylabel("Negative log marginal likelihood");
Explanation: Model fitting with gradient descent
End of explanation
plt.plot(-losses[:, -1])
plt.ylim([-400, -200])
plt.ylabel("marginal likelihood $\\tilde{p}(x)$")
plt.xlabel("number of latent states")
plt.title("Model selection on latent states");
Explanation: Plot marginal likelihood of each model
End of explanation
for i, learned_model_rates in enumerate(rates):
print("rates for {}-state model: {}".format(i + 1, learned_model_rates[: i + 1]))
def posterior_marginals(trainable_log_rates, initial_state_probs, transition_probs):
hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
_, _, marginals, _ = hmm.forward_backward(observed_counts)
return marginals
posterior_probs = vmap(posterior_marginals, in_axes=(0, 0, 0))(
params, batch_initial_state_probs, batch_transition_probs
)
most_probable_states = jnp.argmax(posterior_probs, axis=-1)
fig = plt.figure(figsize=(14, 12))
for i, learned_model_rates in enumerate(rates):
ax = fig.add_subplot(4, 3, i + 1)
ax.plot(learned_model_rates[most_probable_states[i]], c="green", lw=3, label="inferred rate")
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("{}-state model".format(i + 1))
ax.legend(loc=4)
plt.tight_layout()
Explanation: Plot posteriors
End of explanation |
12,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring multidimensional data using xray
Here is a little graphical representation of the way to think about this data. For clarification on how multidimensional data are represented in xray, visit
Step1: Loading an example file into a dataset
Step2: This is an example of what our soil moisture data from the radio tower install will look like. Each site has a lat, lon, and elevation and at the site we will record rainfall as well as soil temp and soil moisture at two depths. So there are up to 3 dimensions along which the data are recorded
Step3: Inspecting and selecting from dataset
To select the data for a specific site we just write
Step4: Now if we are only interested in soil moisture at the upper depth at a specific time, we can pull out just that one data point
Step5: For precip there are no depth values, so a specific data point can be pulled just by selecting for time and site
Step6: Test what this dataset looks like in pandas and netCDF
Step7: Going back and forth between datasets and dataframes
Step8: Loading dataframes and transfering to datasets | Python Code:
from IPython.display import Image
Image(url='http://xray.readthedocs.org/en/latest/_images/dataset-diagram.png', embed=True, width=950, height=300)
Explanation: Exploring multidimensional data using xray
Here is a little graphical representation of the way to think about this data. For clarification on how multidimensional data are represented in xray, visit: http://xray.readthedocs.org/en/latest/
End of explanation
import numpy as np
import pandas as pd
import xray
Explanation: Loading an example file into a dataset
End of explanation
temp = 15 + 8 * np.random.randn(2, 2, 3)
VW = 15 + 10 * abs(np.random.randn(2, 2, 3))
precip = 10 * np.random.rand(2, 3)
depths = [5, 20]
lons = [-99.83, -99.79]
lats = [42.63, 42.59]
elevations = [1600, 1650]
ds = xray.Dataset({'temperature': (['site', 'depth', 'time'], temp, {'units':'C'}),
'soil_moisture': (['site', 'depth', 'time'], VW, {'units':'percent'}),
'precipitation': (['site', 'time'], precip, {'units':'mm'})},
coords={'lon': (['site'], lons, {'units':'degrees east'}),
'lat': (['site'], lats, {'units':'degrees north'}),
'elevation': (['site'], elevations, {'units':'m'}),
'site': ['Acacia', 'Riverine'],
'depth': (['depth'], depths, {'units': 'cm'}),
'time': pd.date_range('2015-05-19', periods=3)})
ds
Explanation: This is an example of what our soil moisture data from the radio tower install will look like. Each site has a lat, lon, and elevation and at the site we will record rainfall as well as soil temp and soil moisture at two depths. So there are up to 3 dimensions along which the data are recorded: site, depth, and time.
End of explanation
ds.sel(site='Acacia')
Explanation: Inspecting and selecting from dataset
To select the data for a specific site we just write:
End of explanation
print ds.soil_moisture.sel(site='Acacia', time='2015-05-19', depth=5).values
Explanation: Now if we are only interested in soil moisture at the upper depth at a specific time, we can pull out just that one data point:
End of explanation
print ds.precipitation.sel(site='Acacia', time='2015-05-19').values
Explanation: For precip there are no depth values, so a specific data point can be pulled just by selecting for time and site:
End of explanation
ds.to_dataframe()
ds.to_netcdf('test.nc')
Explanation: Test what this dataset looks like in pandas and netCDF
End of explanation
sites = ['MainTower'] # can be replaced if there are more specific sites
lons = [36.8701] # degrees east
lats = [0.4856] # degrees north
elevations = [1610] # m above see level
coords={'site': (['site'], sites),
'lon': (['site'], lons, dict(units='degrees east')),
'lat': (['site'], lats, dict(units='degrees north')),
'elevation': (['site'], elevations, dict(units='m')),
'time': pd.date_range('2015-05-19', periods=3)}
precip = 10 * np.random.rand(1, 3)
ds = xray.Dataset({'precipitation': (['site', 'time'], precip, {'units':'mm'})},
coords=coords)
ds
df = ds.to_dataframe()
df
df.index
Explanation: Going back and forth between datasets and dataframes
End of explanation
from __init__ import *
from TOA5_to_netcdf import *
lons = [36.8701] # degrees east
lats = [0.4856] # degrees north
elevations = [1610] # m above see level
coords={'lon': (['site'], lons, dict(units='degrees east')),
'lat': (['site'], lats, dict(units='degrees north')),
'elevation': (['site'], elevations, dict(units='m'))}
path = os.getcwd().replace('\\','/')+'/current_data/'
input_file = path + 'CR3000_SN4709_flux.dat'
input_dict = {'has_header': True,
'header_file': input_file,
'datafile': 'soil',
'path': path,
'filename': 'CR3000_SN4709_flux.dat'}
df = createDF(input_file, input_dict, attrs)[0]
attrs, local_attrs = get_attrs(input_dict['header_file'], attrs)
ds = createDS(df, input_dict, attrs, local_attrs, site, coords_vals)
ds.to_netcdf(path='test2.nc', format='NETCDF3_64BIT')
xray.open_dataset('test2.nc')
Explanation: Loading dataframes and transfering to datasets
End of explanation |
12,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION
Step9: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION
Step10: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
Step11: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step12: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
Step13: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step14: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step15: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
Step16: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step17: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]
Step18: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step19: QUIZ QUESTIONS
Step20: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step21: QUIZ QUESTION | Python Code:
import graphlab as gl
import numpy as np
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = gl.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# assign poly_sframe[name] to be feature^power
#poly_sframe[name]= feature.apply(lambda x: x**power)
poly_sframe[name]= feature**power # can use this as well
return poly_sframe
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
sales = gl.SFrame('data/kc_house_data.gl/')
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
sales = sales.sort(['sqft_living','price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
l2_small_penalty = 1e-5
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
poly15_columns = poly15_data.column_names()
poly15_data['price'] = sales['price']
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
model = gl.linear_regression.create(poly15_data, target='price', features=poly15_columns,
l2_penalty=1e-5,
validation_set=None, verbose=False)
model.coefficients
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.
End of explanation
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
poly15_set_1_names = poly15_set_1.column_names()
poly15_set_1['price'] = set_1['price']
#print(poly15_set_1.head(2))
model15_set_1 = gl.linear_regression.create(poly15_set_1, target = 'price', l2_penalty=l2_small_penalty ,
features = poly15_set_1_names, validation_set = None,
verbose=False)
plt.plot(poly15_set_1['power_15'], poly15_set_1['price'], '.',
poly15_set_1['power_15'], model15_set_1.predict(poly15_set_1), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_1.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
poly15_set_2_names = poly15_set_2.column_names()
poly15_set_2['price'] = set_2['price']
model15_set_2 = gl.linear_regression.create(poly15_set_2, target = 'price', l2_penalty=l2_small_penalty,
features=poly15_set_2_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_2['power_15'], poly15_set_2['price'], '.',
poly15_set_2['power_15'], model15_set_2.predict(poly15_set_2), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_2.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
poly15_set_3_names = poly15_set_3.column_names()
poly15_set_3['price'] = set_3['price']
model15_set_3 = gl.linear_regression.create(poly15_set_3, target = 'price', l2_penalty=l2_small_penalty,
features = poly15_set_3_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_3['power_15'], poly15_set_3['price'], '.',
poly15_set_3['power_15'], model15_set_3.predict(poly15_set_3), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_3.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
poly15_set_4_names = poly15_set_4.column_names()
poly15_set_4['price'] = set_4['price']
model15_set_4 = gl.linear_regression.create(poly15_set_4, target = 'price', l2_penalty=l2_small_penalty,
features = poly15_set_4_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_4['power_15'], poly15_set_4['price'], '.',
poly15_set_4['power_15'], model15_set_4.predict(poly15_set_4), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_4.get('coefficients')
print(coeff[coeff['name']=='power_1'])
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
l2_penalty=1e5
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
poly15_set_1_names = poly15_set_1.column_names()
poly15_set_1['price'] = set_1['price']
#print(poly15_set_1.head(2))
model15_set_1 = gl.linear_regression.create(poly15_set_1, target = 'price', l2_penalty=l2_penalty ,
features = poly15_set_1_names, validation_set = None,
verbose=False)
plt.plot(poly15_set_1['power_15'], poly15_set_1['price'], '.',
poly15_set_1['power_15'], model15_set_1.predict(poly15_set_1), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_1.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
poly15_set_2_names = poly15_set_2.column_names()
poly15_set_2['price'] = set_2['price']
model15_set_2 = gl.linear_regression.create(poly15_set_2, target = 'price', l2_penalty=l2_penalty,
features=poly15_set_2_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_2['power_15'], poly15_set_2['price'], '.',
poly15_set_2['power_15'], model15_set_2.predict(poly15_set_2), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_2.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
poly15_set_3_names = poly15_set_3.column_names()
poly15_set_3['price'] = set_3['price']
model15_set_3 = gl.linear_regression.create(poly15_set_3, target = 'price', l2_penalty=l2_penalty,
features = poly15_set_3_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_3['power_15'], poly15_set_3['price'], '.',
poly15_set_3['power_15'], model15_set_3.predict(poly15_set_3), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_3.get('coefficients')
print(coeff[coeff['name']=='power_1'])
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
poly15_set_4_names = poly15_set_4.column_names()
poly15_set_4['price'] = set_4['price']
model15_set_4 = gl.linear_regression.create(poly15_set_4, target = 'price', l2_penalty=l2_penalty,
features = poly15_set_4_names,
validation_set = None, verbose=False)
plt.plot(poly15_set_4['power_15'], poly15_set_4['price'], '.',
poly15_set_4['power_15'], model15_set_4.predict(poly15_set_4), '-', linewidth=2)
plt.grid(True)
coeff=model15_set_4.get('coefficients')
print(coeff[coeff['name']=='power_1'])
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = gl.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughtly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
Set aside segment 1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
train_valid_shuffled[0:10] # rows 0 to 9
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
validation4=train_valid_shuffled[5818:7758]
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
print int(round(validation4['price'].mean(), 0))
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
# train_valid_shuffled[0:start].append(train_valid_shuffled[end+1:n])
# segment3 indices - (5818, 7757)
n = len(train_valid_shuffled)
train4 = train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:n])
#train4 = part1.copy()
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
End of explanation
print int(round(train4['price'].mean(), 0))
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
validation_error=[]
for i in range(k):
n = len(data)
start = (n*i)/k
end = (n*(i+1))/k-1
validation_set = data[start:end+1]
training_set = data[0:start].append(data[end+1:n])
model = gl.linear_regression.create(training_set, target=output_name, features=features_list,
l2_penalty=l2_penalty,
validation_set=None,
verbose=False)
err = np.sum(np.square(validation_set[output_name] - model.predict(validation_set)))
validation_error.append(err)
if i==k-1:
rss = np.mean(validation_error)
return rss
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
k = 10
data = polynomial_sframe(train_valid_shuffled['sqft_living'],15)
features_name = data.column_names()
data['price'] = train_valid_shuffled['price']
min_err=None
best_l2_penalty=None
l2_penalty_list=np.logspace(1, 7, num=13)
l2_error=[]
for l2_penalty in l2_penalty_list:
error = k_fold_cross_validation(k, l2_penalty, data, 'price', features_list=features_name)
l2_error.append(error)
if min_err is None or min_err > error:
min_err = error
best_l2_penalty = l2_penalty
print min_err, best_l2_penalty
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(l2_penalty_list, l2_error, 'r-')
plt.xscale('log')
#plt.yscale('log')
plt.xlabel('l2 penalty')
plt.ylabel('RSS')
plt.grid(True)
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
best_l2_penalty = 1000
data = polynomial_sframe(train_valid_shuffled['sqft_living'],15)
features_name = data.column_names()
data['price'] = train_valid_shuffled['price']
model = gl.linear_regression.create(data, target='price', l2_penalty=best_l2_penalty,
validation_set=None, features=features_name,
verbose=None)
Explanation: Once you have found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
test_data = polynomial_sframe(test['sqft_living'],15)
test_data['price'] = test['price']
RSS = np.sum(np.square(test['price'] - model.predict(test_data)))
print RSS
Explanation: QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
End of explanation |
12,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series, i.e. its mean, variance and covariance, do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests will be used to check the stationarity of a time series – the Augmented Dickey-Fuller (“ADF”) test and the Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into a stationary series will also be shown.
This first cell imports standard packages and sets plots to appear inline.
Step1: Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.
Step2: Some preprocessing is carried out on the data. The "YEAR" column is used in creating index.
Step3: The data is plotted now.
Step4: ADF test
The ADF test is used to determine the presence of a unit root in the series, and hence helps in understanding whether the series is stationary. The null and alternate hypotheses of this test are
Step5: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypotheses for the KPSS test are the opposite of those of the ADF test.
Null Hypothesis
Step6: The ADF test gives the following results – the test statistic, the p-value and the critical values at the 1%, 5%, and 10% levels.
The ADF test is now applied on the data.
Step7: Based upon the significance level of 0.05 and the p-value of the ADF test, the null hypothesis cannot be rejected. Hence, the series is non-stationary.
The KPSS test gives the following results – the test statistic, the p-value and the critical values at the 1%, 5%, and 10% levels.
The KPSS test is now applied on the data.
Step8: Based upon the significance level of 0.05 and the p-value of the KPSS test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is non-stationary as per the KPSS test.
It is always better to apply both tests, so that it can be ensured that the series is truly stationary. The possible outcomes of applying these stationarity tests are as follows
Step9: ADF test is now applied on these detrended values and stationarity is checked.
Step10: Based upon the p-value of the ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is now strictly stationary.
KPSS test is now applied on these detrended values and stationarity is checked. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
Explanation: Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series, i.e. its mean, variance and covariance, do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests will be used to check the stationarity of a time series – the Augmented Dickey-Fuller (“ADF”) test and the Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into a stationary series will also be shown.
This first cell imports standard packages and sets plots to appear inline.
End of explanation
sunspots = sm.datasets.sunspots.load_pandas().data
Explanation: Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.
End of explanation
sunspots.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del sunspots["YEAR"]
Explanation: Some preprocessing is carried out on the data. The "YEAR" column is used in creating index.
End of explanation
sunspots.plot(figsize=(12,8))
Explanation: The data is plotted now.
End of explanation
from statsmodels.tsa.stattools import adfuller
def adf_test(timeseries):
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
Explanation: ADF test
The ADF test is used to determine the presence of a unit root in the series, and hence helps in understanding whether the series is stationary. The null and alternate hypotheses of this test are:
Null Hypothesis: The series has a unit root.
Alternate Hypothesis: The series has no unit root.
If the null hypothesis fails to be rejected, this test may provide evidence that the series is non-stationary.
A function is created to carry out the ADF test on a time series.
End of explanation
from statsmodels.tsa.stattools import kpss
def kpss_test(timeseries):
print ('Results of KPSS Test:')
kpsstest = kpss(timeseries, regression='c', nlags="auto")
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
for key,value in kpsstest[3].items():
kpss_output['Critical Value (%s)'%key] = value
print (kpss_output)
Explanation: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypotheses for the KPSS test are the opposite of those of the ADF test.
Null Hypothesis: The process is trend stationary.
Alternate Hypothesis: The series has a unit root (series is not stationary).
A function is created to carry out the KPSS test on a time series.
End of explanation
adf_test(sunspots['SUNACTIVITY'])
Explanation: The ADF test gives the following results – the test statistic, the p-value and the critical values at the 1%, 5%, and 10% levels.
ADF test is now applied on the data.
End of explanation
kpss_test(sunspots['SUNACTIVITY'])
Explanation: Based upon the significance level of 0.05 and the p-value of the ADF test, the null hypothesis cannot be rejected. Hence, the series is non-stationary.
The KPSS test gives the following results – the test statistic, the p-value and the critical values at the 1%, 5%, and 10% levels.
KPSS test is now applied on the data.
End of explanation
sunspots['SUNACTIVITY_diff'] = sunspots['SUNACTIVITY'] - sunspots['SUNACTIVITY'].shift(1)
sunspots['SUNACTIVITY_diff'].dropna().plot(figsize=(12,8))
Explanation: Based upon the significance level of 0.05 and the p-value of KPSS test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is non-stationary as per the KPSS test.
It is always better to apply both tests, so that it can be ensured that the series is truly stationary. The possible outcomes of applying these stationarity tests are as follows:
Case 1: Both tests conclude that the series is not stationary - The series is not stationary
Case 2: Both tests conclude that the series is stationary - The series is stationary
Case 3: KPSS indicates stationarity and ADF indicates non-stationarity - The series is trend stationary. The trend needs to be removed to make the series strictly stationary. The detrended series is checked for stationarity.
Case 4: KPSS indicates non-stationarity and ADF indicates stationarity - The series is difference stationary. Differencing is to be used to make series stationary. The differenced series is checked for stationarity.
Here, due to the difference in the results from the ADF and KPSS tests, it can be inferred that the series is trend stationary but not strictly stationary. The series can be detrended by differencing or by model fitting.
Detrending by Differencing
It is one of the simplest methods for detrending a time series. A new series is constructed where the value at the current time step is calculated as the difference between the original observation and the observation at the previous time step.
Differencing is applied on the data and the result is plotted.
End of explanation
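The other option mentioned above, detrending by model fitting, is not shown in this notebook; a minimal sketch using a fitted linear trend (an illustration under the same imports, not part of the original analysis) would be:
# Sketch: detrend by fitting a linear trend with numpy and subtracting it
t = np.arange(len(sunspots))
trend = np.polyval(np.polyfit(t, sunspots['SUNACTIVITY'].values, 1), t)
(sunspots['SUNACTIVITY'] - trend).plot(figsize=(12, 8))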
adf_test(sunspots['SUNACTIVITY_diff'].dropna())
Explanation: ADF test is now applied on these detrended values and stationarity is checked.
End of explanation
kpss_test(sunspots['SUNACTIVITY_diff'].dropna())
Explanation: Based upon the p-value of the ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is now strictly stationary.
KPSS test is now applied on these detrended values and stationarity is checked.
End of explanation |
12,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).
Step1: AOS frames
Telemetry is in Virtual Channel 0. Virtual channel 63 contains Only Idle Data.
Step2: Virtual Channel 63 (Only Idle Data)
Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.
Step3: Virtual channel 0
Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.
Step4: APID 5
As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag. | Python Code:
def timestamps(packets):
epoch = np.datetime64('2000-01-01T12:00:00')
t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0]
for p in packets], 'uint32')
return epoch + t * np.timedelta64(1, 's')
def load_frames(path):
frame_size = 223 * 5 - 2
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
return frames
frames = np.concatenate((
load_frames('lucy_frames_eb3frn_20211019_233036.u8'),
load_frames('lucy_frames_eb3frn_20211019_235245.u8')))
frames.shape[0]
Explanation: Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).
Looking at the idle APID packets, the next byte might indicate fractional seconds (since it is still part of the secondary header rather than idle data), but it is difficult to be sure.
End of explanation
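As a tiny worked example of the decoding described above (reusing the struct and numpy imports already used; the 4-byte value here is made up purely for illustration):
# Decode one hypothetical big-endian time code field by hand
example_field = bytes.fromhex('28e4d2c0')        # made-up value, not taken from the recording
seconds = struct.unpack('>I', example_field)[0]
print(np.datetime64('2000-01-01T12:00:00') + seconds * np.timedelta64(1, 's'))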
aos = [AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos])
collections.Counter([a.primary_header.virtual_channel_id for a in aos])
Explanation: AOS frames
Telemetry is in Virtual Channel 0. Virtual channel 63 contains Only Idle Data.
End of explanation
vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63]
[a.primary_header for a in vc63[:10]]
vc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63])
np.unique(vc63_frames[:, 6:8], axis = 0)
bytes(vc63_frames[0, 6:8]).hex()
np.unique(vc63_frames[:, 8:])
hex(170)
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 63 (OID) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
first_part = fc < 219000
fc[first_part].size/(fc[first_part][-1]-fc[0]+1)
Explanation: Virtual Channel 63 (Only Idle Data)
Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.
End of explanation
vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0]
[a.primary_header for a in vc0[:10]]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 0 (telemetry) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
first_part = fc < 995800
fc[first_part].size/(fc[first_part][-1]-fc[0]+1)
vc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0))
vc0_t = timestamps(vc0_packets)
vc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets]
vc0_apids = collections.Counter([p.APID for p in vc0_sp_headers])
vc0_apids
apid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))}
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.')
plt.yticks(ticks=range(len(apid_axis)), labels=apid_axis)
plt.xlabel('Space Packet timestamp')
plt.ylabel('APID')
plt.title('Lucy Virtual Channel 0 APID distribution');
vc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets)
if h.APID == apid] for apid in vc0_apids}
plot_apids(vc0_by_apid)
Explanation: Virtual channel 0
Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.
End of explanation
tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b,
1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b,
1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b,
1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b,
17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb,
46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub,
47091: Int16ub, 47092: Int16ub,
}
values = list()
for packet in vc0_by_apid[5]:
t = timestamps([packet])[0]
packet = packet[6+5:] # skip primary and secondary headers
while True:
tag = Int16ub.parse(packet)
packet = packet[2:]
value = tags[tag].parse(packet)
packet = packet[tags[tag].sizeof():]
values.append((tag, value, t))
if len(packet) == 0:
break
values_keys = {v[0] for v in values}
values = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys}
for k in sorted(values_keys):
vals = values[k]
plt.figure()
plt.title(f'Key {k}')
plt.plot([v[0] for v in vals], [v[1] for v in vals], '.')
Explanation: APID 5
As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag.
End of explanation |
12,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a Random Forest Classification Model for NFL Decisions
Game Time Decisions
It's 1st and Goal with less than a minute on the clock in the 4th quarter. Your team is down by 3 points; what do you do? Should you run the ball hoping to break into the endzone? What about passing the ball instead? Or should you go for the field goal just to get the tie? Everyone has an opinion on these NFL scenarios, but we will build a model that tells you which decision the pros would make given the same situation.
Data
To build our predictive model we first need some data. The NFL has released play by play information for every game since 2009. We can look at the features of each play to predict what the ultimate play decision was. Given that a play can have many different intended outcomes we will need to build a supervised classification model.
Inputs
From the NFL play-by-play data we can use the following info from each play as inputs to our RF model.
* Quarter
* Time Remaining
* Down
* Yards to 1st Down
* Field Position
* Score
Output
The RF model will be able to return a probability estimation for what the best decision would be for a given set of inputs (NFL situation). The possible play types would be the following
Step1: The testing shows the model performs poorly with "min_samples_leaf" values below 20 compared to values above 20.
The highest tested mark occurs at a value of 75 which we will use in the final model.
N_estimators
Next up, let's tune the N_estimators parameter.
We expect the smaller values to take less time since we are requiring fewer trees to be built. The larger we can make this value the better, since more estimators are not going to hurt our model; too many estimators do eventually take too much time/memory to easily compute.
We will build and test the model for the following parameters and then plot each one's score to see its impact on model performance
Step2: Given the overall range of model score values to fall within 0.1 the number of estimators is not showing significant change to the model performance. A test was also done at 1000 estimators with a value in line with the above results. We will use a value of 250 n_estimators since the high score at 50 n_estimators could be a result of statistical randomness. The value of 250 strikes a good balance between computation time and statistical rigor.
Step3: Using the model
Model Score - 70.9%
After tuning our model we get a 2016 prediction score of 70.9%. So nearly 3 out of 4 plays were predicted correctly. Based on this it would make good sense to return the probability estimates for each decision when using the model so you can see what the next closest decision was since it is likely that many of these situations have multiple outcomes that different teams tend to use more often than the next. Stronger models using more features can capitalize on this and build a more robust predictor given individual team/player information.
Step4: Multiclass ROC
Understanding model class prediction performace | Python Code:
# Testing "min_samples_leaf"
min_samples_leaf = [1,3,6,9,12,15,18,22,25,50,75,100,125,150,175,200,225,250,275,300]
n_estimators = [30]
min_samples_leaf_scores = []
for n in n_estimators:
print('-' * 40)
for l in min_samples_leaf:
print('--- Testing', '({},{})'.format(n,l))
start = time.time()
rfc = nfl_model.build_random_forest_model(
X_train,
y_train,
n_estimators=n,
max_depth=None,
min_samples_split=2,
min_samples_leaf=l,
max_features='auto',
bootstrap=True,
oob_score=True,
n_jobs=-1,
random_state=0
)
stop = time.time()
score = rfc.score(X_test, y_test)
run_time = stop - start
min_samples_leaf_scores.append([n, l, score, run_time])
print(' Run Time: ', run_time)
print(' Score: ', score)
sns.set_context('talk')
sns.set_style('ticks')
records = [{'n':x[0], 'l':x[1], 'score':x[2]*100, 'time':x[3]} for x in min_samples_leaf_scores]
results_df = pd.DataFrame.from_records(records)
fig,ax = plt.subplots(1,1,figsize=(12,7))
ax.plot(results_df.l, results_df.score)
ax.set_title('Random Forest Parameter Tunning: "Min_sample_leaf"')
ax.set_xlabel('Parameter Value')
ax.set_ylabel('Model Score')
sns.despine()
ax.vlines(x=75,ymin=ax.get_ylim()[0],ymax=ax.get_ylim()[1],colors='g', label='1st - 75')
ax.legend()
plt.show()
Explanation: Building a Random Forest Classification Model for NFL Decisions
Game Time Decisions
It's 1st and Goal with less than a minute on the clock in the 4th quarter. Your team is down by 3 points; what do you do? Should you run the ball hoping to break into the endzone? What about passing the ball instead? Or should you go for the field goal just to get the tie? Everyone has an opinion on these NFL scenarios, but we will build a model that tells you which decision the pros would make given the same situation.
Data
To build our predictive model we first need some data. The NFL has released play by play information for every game since 2009. We can look at the features of each play to predict what the ultimate play decision was. Given that a play can have many different intended outcomes we will need to build a supervised classification model.
Inputs
From the NFL play-by-play data we can use the following info from each play as inputs to our RF model.
* Quarter
* Time Remaining
* Down
* Yards to 1st Down
* Field Position
* Score
Output
The RF model will be able to return a probability estimation for what the best decision would be for a given set of inputs (NFL situation). The possible play types would be the following:
* Pass
* Run
* Punt
* Field Goal
* QB Kneel
Disclaimer!
This model is designed to favor the "average" NFL decision for a given situation. It does not, however, check whether that decision will lead to a positive outcome; since all historical decisions by NFL teams are made with the desire for a positive outcome, the model should reflect that desire as well. Stronger models utilizing more features of the data that indicate play success or failure could be built from the same dataset.
Train/Test Data
For this model we will split our data into train and test data by season. The model will train using data from the 2009-2015 seasons and then will be tested on the 2016 season.
Random Forest Model Tuning
Min_samples_leaf
Let's tune the min_samples_leaf parameter.
We expect the smaller values to take more time since it means we are allowing the trees to terminate only once 'x' samples remain in a leaf's data subset. It will take more splits and decisions to make leaves with samples of 1 vs 300. It may also lead to overfitting at lower values since more decisions are being forced in each tree on smaller subsets of data. If the value is too large the model will build much quicker, but it will not have made enough decisions in its trees to get a deep enough look into the data and ultimately will miss some key indicators.
We will build and test the model for the following parameters and then plot each one's score to see its impact on model performance:
[1,3,6,9,12,15,18,22,25,50,75,100,125,150,175,200,225,250,275,300]
End of explanation
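The season-based train/test split described above happens inside the project-specific nfl_model helpers; conceptually it boils down to something like the following sketch (the DataFrame name plays, the 'Season' column and the exact feature/label column names are assumptions for illustration, not the project's real API):
# Conceptual sketch only - column names are assumptions about the play-by-play data
feature_cols = ['qtr', 'down', 'ydstogo', 'TimeUnder', 'yrdline100', 'ScoreDiff']
train_mask = plays['Season'] <= 2015            # 2009-2015 seasons for training
X_train, y_train = plays.loc[train_mask, feature_cols], plays.loc[train_mask, 'PlayType']
X_test,  y_test  = plays.loc[~train_mask, feature_cols], plays.loc[~train_mask, 'PlayType']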
# Testing "n_estimators"
min_samples_leaf = [75]
n_estimators = [30,50,75,100,125,150,175,200,250,300]
n_estimator_scores = []
for n in n_estimators:
print('-' * 40)
for l in min_samples_leaf:
print('--- Testing', '({},{})'.format(n,l))
start = time.time()
rfc = nfl_model.build_random_forest_model(
X_train,
y_train,
n_estimators=n,
max_depth=None,
min_samples_split=2,
min_samples_leaf=l,
max_features='auto',
bootstrap=True,
oob_score=True,
n_jobs=-1,
random_state=0
)
stop = time.time()
score = rfc.score(X_test, y_test)
run_time = stop - start
n_estimator_scores.append([n, l, score, run_time])
print(' Run Time: ', run_time)
print(' Score: ', score)
records = [{'n':x[0], 'l':x[1], 'score':x[2]*100, 'time':x[3]} for x in n_estimator_scores]
results_df = pd.DataFrame.from_records(records)
fig,ax = plt.subplots(1,1,figsize=(12,7))
ax.plot(results_df.n, results_df.score)
ax.set_title('Random Forest Parameter Tunning: "n_estimator"')
ax.set_xlabel('Parameter Value')
ax.set_ylabel('Model Score')
sns.despine()
ax.vlines(x=50,ymin=ax.get_ylim()[0],ymax=ax.get_ylim()[1],colors='green', label = '1st - 50')
ax.vlines(x=250,ymin=ax.get_ylim()[0],ymax=ax.get_ylim()[1],colors='orange', label = '2nd - 250')
ax.legend()
plt.show()
Explanation: The testing shows the model performs poorly with "min_samples_leaf" values below 20 compared to values above 20.
The highest tested mark occurs at a value of 75 which we will use in the final model.
N_estimators
Next up, let's tune the N_estimators parameter.
We expect the smaller values to take less time since we are requiring fewer trees to be built. The larger we can make this value the better, since more estimators are not going to hurt our model; too many estimators do eventually take too much time/memory to easily compute.
We will build and test the model for the following parameters and then plot each one's score to see its impact on model performance:
[30,50,75,100,125,150,175,200,250,300]
End of explanation
winning_rfc = nfl_model.build_random_forest_model(
X_train,
y_train,
n_estimators=250,
max_depth=None,
min_samples_split=2,
min_samples_leaf=75,
max_features='auto',
bootstrap=True,
oob_score=True,
n_jobs=-1,
random_state=0
)
winning_rfc.score(X_test, y_test)
Explanation: Given that the overall range of model scores falls within 0.1, the number of estimators does not produce a significant change in model performance. A test was also done at 1000 estimators with a value in line with the above results. We will use a value of 250 n_estimators since the high score at 50 n_estimators could be a result of statistical randomness. The value of 250 strikes a good balance between computation time and statistical rigor.
End of explanation
qtr = 4
down = 3
ydstogo = 10
TimeUnder = 1
yrdline100 = 40
ScoreDiff = 7
test_case = [[qtr, down, ydstogo, TimeUnder, yrdline100, ScoreDiff]]
classes = winning_rfc.classes_
rfcp = winning_rfc.predict_proba(test_case)[0]*100
rfcp = [str(round(x,2)) for x in rfcp]
print("")
print("Random Forest")
for item in zip(classes,rfcp):
print(item)
nfl_model.store_model(winning_rfc,'random_forest_classifier', 3)
Explanation: Using the model
Model Score - 70.9%
After tuning our model we get a 2016 prediction score of 70.9%, so roughly 7 out of 10 plays were predicted correctly. Based on this, it makes sense to return the probability estimates for each decision when using the model, so you can see what the next-closest decision was; many of these situations have multiple plausible outcomes that different teams favor to different degrees. Stronger models using more features can capitalize on this and build a more robust predictor given individual team/player information.
End of explanation
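A small convenience wrapper around the fitted classifier (a sketch; it only assumes the winning_rfc object built above exposes the usual scikit-learn predict_proba and classes_ attributes) makes it easier to query arbitrary game situations and see the ranked probabilities:
def predict_play(model, qtr, down, ydstogo, time_under, yrdline100, score_diff):
    # Return (play type, probability) pairs sorted from most to least likely
    probs = model.predict_proba([[qtr, down, ydstogo, time_under, yrdline100, score_diff]])[0]
    return sorted(zip(model.classes_, probs), key=lambda pair: -pair[1])

predict_play(winning_rfc, qtr=4, down=3, ydstogo=10, time_under=1, yrdline100=40, score_diff=7)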
from sklearn.metrics import roc_curve, auc
y_test_classes = {cls:[True if c == cls else False for c in y_test ] for cls in winning_rfc.classes_.tolist()}
rfc_result = winning_rfc.predict_proba(X_test)
rfc_classes = winning_rfc.classes_.tolist()
y_predicted_probs = {cls:[item[rfc_classes.index(cls)] for item in rfc_result] for cls in rfc_classes}
fpr = {cls:[] for cls in rfc_classes}
tpr = {cls:[] for cls in rfc_classes}
for cls in rfc_classes:
data = roc_curve(y_test_classes[cls]*1,y_predicted_probs[cls])
fpr[cls] = data[0]
tpr[cls] = data[1]
colors = {'Pass':'red', 'Run':'blue', 'Punt':'green', 'QB Kneel':'orange', 'Field Goal':'yellow'}
plt.figure()
lw = 2
for cls in rfc_classes:
plt.plot(fpr[cls],tpr[cls], color=colors[cls],lw=lw,label=cls + ' ROC curve (area = %0.2f)' % auc(fpr[cls],tpr[cls]))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('NFL PlayType RF Classifier - ROC')
plt.legend(loc="lower right")
plt.show()
# Manual single-point ROC calculation for the 'Pass' class - uses hard class
# predictions from predict() rather than predict_proba()
rfc_class_pred = winning_rfc.predict(X_test)
TP = 0
FP = 0
P = 0
N = 0
for index in range(len(rfc_class_pred)):
    hypo_class = rfc_class_pred[index] == 'Pass'
    actu_class = y_test_classes['Pass'][index]
    if actu_class:
        P += 1
        if hypo_class:
            TP += 1
    else:
        N += 1
        if hypo_class:
            FP += 1
print('TP :',TP)
print('FP :',FP)
print('P :',P)
print('N :',N)
print('TPR:',TP/P)
print('FPR:',FP/N)
Explanation: Multiclass ROC
Understanding model class prediction performance
End of explanation |
12,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
IPython, pandas and matplotlib have a number of useful options you can use to make it easier to view and format your data. This notebook collects a bunch of them in one place. I hope this will be a useful reference.
The original blog posting is on http://pbpython.com/ipython-pandas-display-tips.html
Step2: One of the simple things we can do is override the default CSS to customize our DataFrame output.
This specific example is from - Brandon Rhodes' talk at pycon
For the purposes of the notebook, I'm defining CSS as a variable but you could easily read in from a file as well.
Step3: Now add this CSS into the current notebook's HTML.
Step4: You can see how the CSS is now applied to the DataFrame and how you could easily modify it to customize it to your liking.
Jupyter notebooks do a good job of automatically displaying information but sometimes you want to force data to display. Fortunately, IPython provides an option. This is especially useful if you want to display multiple dataframes.
Step5: Using pandas settings to control output
Pandas has many different options to control how data is displayed.
You can use max_rows to control how many rows are displayed
Step6: Depending on the data set, you may only want to display a smaller number of columns.
Step7: You can control how many decimal points of precision to display
Step8: You can also format floating point numbers using float_format
Step9: This does apply to all the data. In our example, applying dollar signs to everything would not be correct for this example.
Step10: Third Party Plugins
Quantopian has a useful plugin called qgrid - https://github.com/quantopian/qgrid
Step11: Showing the data is straightforward.
Step12: The plugin is very similar to the capability of an Excel autofilter. It can be handy to quickly filter and sort your data.
Improving your plots
I have mentioned before how the default pandas plots don't look so great. Fortunately, there are style sheets in matplotlib which go a long way towards improving the visualization of your data.
Here is a simple plot with the default values.
Step13: We can use some of the matplotlib styles available to us to make this look better.
http://matplotlib.org/users/style_sheets.html
Step14: You can see all the styles available | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Introduction
IPython, pandas and matplotlib have a number of useful options you can use to make it easier to view and format your data. This notebook collects a bunch of them in one place. I hope this will be a useful reference.
The original blog posting is on http://pbpython.com/ipython-pandas-display-tips.html
Import modules and some sample data
First, do our standard pandas, numpy and matplotlib imports as well as configure inline displays of plots.
End of explanation
CSS = """
body {
margin: 0;
font-family: Helvetica;
}
table.dataframe {
border-collapse: collapse;
border: none;
}
table.dataframe tr {
border: none;
}
table.dataframe td, table.dataframe th {
margin: 0;
border: 1px solid white;
padding-left: 0.25em;
padding-right: 0.25em;
}
table.dataframe th:not(:empty) {
background-color: #fec;
text-align: left;
font-weight: normal;
}
table.dataframe tr:nth-child(2) th:empty {
border-left: none;
border-right: 1px dashed #888;
}
table.dataframe td {
border: 2px solid #ccf;
background-color: #f4f4ff;
}
"""
Explanation: One of the simple things we can do is override the default CSS to customize our DataFrame output.
This specific example is from Brandon Rhodes' talk at PyCon.
For the purposes of the notebook, I'm defining CSS as a variable, but you could easily read it in from a file as well.
End of explanation
from IPython.core.display import HTML
HTML('<style>{}</style>'.format(CSS))
SALES=pd.read_csv("../data/sample-sales-tax.csv", parse_dates=True)
SALES.head()
Explanation: Now add this CSS into the current notebook's HTML.
End of explanation
from IPython.display import display
display(SALES.head(2))
display(SALES.tail(2))
display(SALES.describe())
Explanation: You can see how the CSS is now applied to the DataFrame and how you could easily modify it to customize it to your liking.
Jupyter notebooks do a good job of automatically displaying information but sometimes you want to force data to display. Fortunately, IPython provides an option. This is especially useful if you want to display multiple dataframes.
End of explanation
pd.set_option("display.max_rows",4)
SALES
Explanation: Using pandas settings to control output
Pandas has many different options to control how data is displayed.
You can use max_rows to control how many rows are displayed
End of explanation
pd.set_option("display.max_columns",6)
SALES
Explanation: Depending on the data set, you may only want to display a smaller number of columns.
End of explanation
pd.set_option('precision',2)
SALES
pd.set_option('precision',7)
SALES
Explanation: You can control how many decimal points of precision to display
End of explanation
pd.set_option('float_format', '{:.2f}'.format)
SALES
Explanation: You can also format floating point numbers using float_format
End of explanation
pd.set_option('float_format', '${:.2f}'.format)
SALES
Explanation: This does apply to all the data. In our example, applying dollar signs to everything would not be correct for this example.
End of explanation
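If you prefer not to leave formatting options set globally, you can undo them with reset_option or scope them temporarily with option_context; a small sketch:
# Undo the global float format, then apply display options only inside a block
pd.reset_option('float_format')
with pd.option_context('display.max_rows', 10, 'display.precision', 3):
    print(SALES.describe())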
import qgrid
qgrid.nbinstall(overwrite=True)
Explanation: Third Party Plugins
Quantopian has a useful plugin called qgrid - https://github.com/quantopian/qgrid
Import it and install it.
End of explanation
qgrid.show_grid(SALES, remote_js=True)
Explanation: Showing the data is straightforward.
End of explanation
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
Explanation: The plugin is very similar to the capability of an Excel autofilter. It can be handy to quickly filter and sort your data.
Improving your plots
I have mentioned before how the default pandas plots don't look so great. Fortunately, there are style sheets in matplotlib which go a long way towards improving the visualization of your data.
Here is a simple plot with the default values.
End of explanation
plt.style.use('ggplot')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
Explanation: We can use some of the matplotlib styles available to us to make this look better.
http://matplotlib.org/users/style_sheets.html
End of explanation
plt.style.available
plt.style.use('bmh')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
plt.style.use('fivethirtyeight')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
Explanation: You can see all the styles available
End of explanation |
12,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists
<img src="../images/bcamp.png" align="center">
13 PyTorch Logistic Regression
Web
Step1: Load a CSV file for Binary classification
Step2: Create PyTorch GPU tensors
Note how we transform the np arrays
Step3: Define the NN model
First a simple two-layer network and then a more involved version
Initial weights selection
There are many ways to select the initial weights of a neural network architecture. A common scheme is random initialization, which sets the biases and weights of all the nodes in each hidden layer randomly, so that training starts from a random point of the parameter space from which an algorithm like SGD or Adam can then find a nearby local minimum.
We use a Xavier initializer, in effect (according to theory) initializing the weights of the network to values that should be closer to the optimum, and therefore require fewer epochs to train.
References
Step4: The cross-entropy loss function
A binary cross-entropy Criterion (which expects 0 or 1 valued targets)
Step5: Start training in Batches
Step6: Cross validation, metrics, ROC_AUC etc | Python Code:
# !pip install pycuda
%reset -f
import numpy
import numpy as np
from __future__ import print_function
from __future__ import division
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import os
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
from torchvision import transforms
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from sklearn.preprocessing import MultiLabelBinarizer
import time
%matplotlib inline
from pylab import rcParams
rcParams['figure.figsize'] = (6, 6) # setting default size of plots
import tensorflow as tf
print("tensorflow:" + tf.__version__)
!set "KERAS_BACKEND=tensorflow"
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
# !pip install http://download.pytorch.org/whl/cu75/torch-0.2.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
# !pip install torchvision
# ! pip install cv2
# import cv2
Explanation: Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists
<img src="../images/bcamp.png" align="center">
13 PyTorch Logistic Regression
Web: https://www.meetup.com/Tel-Aviv-Deep-Learning-Bootcamp/events/241762893/
Notebooks: <a href="https://github.com/QuantScientist/Data-Science-PyCUDA-GPU"> On GitHub</a>
Shlomo Kashani
<img src="../images/pt.jpg" width="35%" align="center">
PyTorch Imports
End of explanation
% reset -f
# ! pip install tables
import torch
from torch.autograd import Variable
import numpy as np
import pandas
import numpy as np
import pandas as pd
from sklearn import cross_validation
from sklearn import metrics
from sklearn.metrics import roc_auc_score, log_loss, roc_auc_score, roc_curve, auc
import matplotlib.pyplot as plt
from sklearn import cross_validation
from sklearn import metrics
from sklearn.metrics import roc_auc_score, log_loss, roc_auc_score, roc_curve, auc
from sklearn.cross_validation import StratifiedKFold, ShuffleSplit, cross_val_score, train_test_split
import logging
handler=logging.basicConfig(level=logging.INFO)
lgr = logging.getLogger(__name__)
%matplotlib inline
F_NAME_TRAIN= 'data-03-diabetes.csv'
# F_NAME_TRAIN='numerai/numerai_training_data.csv'
# X_df_train= pd.read_csv(F_NAME_TRAIN)
X_df_train= pd.read_csv(F_NAME_TRAIN,header=None, dtype=np.float32)
X_df_train_SINGLE=X_df_train.copy(deep=True)
# X_df_train_SINGLE.drop('id', axis=1, inplace=True)
# X_df_train_SINGLE.drop('era', axis=1, inplace=True)
# X_df_train_SINGLE.drop('data_type', axis=1, inplace=True)
# drop the header
# X_df_train_SINGLE.to_csv('numerai/numerai_training_data_clean.csv', header=False)
# X_df_train_SINGLE= pd.read_csv('numerai/numerai_training_data_clean.csv', header=None, dtype=np.float32)
# X_df_train_SINGLE=X_df_train_SINGLE.dropna()
answers_1_SINGLE = list (X_df_train_SINGLE[X_df_train_SINGLE.columns[-1]].values)
answers_1_SINGLE= map(int, answers_1_SINGLE)
X_df_train_SINGLE = X_df_train_SINGLE.drop(X_df_train_SINGLE.columns[-1], axis=1)
# X_df_train_SINGLE=X_df_train_SINGLE.apply(lambda x: pandas.to_numeric(x, errors='ignore'))
print(X_df_train_SINGLE.shape)
X_df_train_SINGLE.head(5)
# (np.where(np.isnan(X_df_train_SINGLE)))
# (np.where(np.isinf(X_df_train_SINGLE)))
X_df_train_SINGLE.info()
Explanation: Load a CSV file for Binary classification
End of explanation
use_cuda = False
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor
# fix seed
seed=17*19
np.random.seed(seed)
torch.manual_seed(seed)
if use_cuda:
torch.cuda.manual_seed(seed)
# sk learn
trainX, testX, trainY, testY = train_test_split(X_df_train_SINGLE, answers_1_SINGLE, test_size=.33, random_state=999)
# Train data
x_data_np = np.array(trainX.values, dtype=np.float32)
y_data_np = np.array(trainY, dtype=np.float32)
y_data_np=y_data_np.reshape((y_data_np.shape[0],1)) # Must be reshaped for PyTorch!
print(x_data_np.shape, y_data_np.shape)
print(type(x_data_np), type(y_data_np))
if use_cuda:
lgr.info ("Using the GPU")
X = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch
Y = Variable(torch.from_numpy(y_data_np).cuda())
else:
lgr.info ("Using the CPU")
X = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch
Y = Variable(torch.from_numpy(y_data_np))
print(type(X.data), type(Y.data)) # should be 'torch.cuda.FloatTensor'
print(type(X.data), type(Y.data)) # should be 'torch.cuda.FloatTensor'
Explanation: Create PyTorch GPU tensors
Note how we transform the np arrays
End of explanation
keep_prob=0.85
# p is the probability of being dropped in PyTorch
dropout = torch.nn.Dropout(p=1 - keep_prob)
# hiddenLayer1Size=32
# hiddenLayer2Size=16
# # # Hypothesis using sigmoid
# linear1=torch.nn.Linear(x_data_np.shape[1], hiddenLayer1Size, bias=True) # size mismatch, m1: [5373 x 344], m2: [8 x 1] at /pytorch/torch/lib/TH/generic/THTensorMath.c:1293
# # xavier initializer
# torch.nn.init.xavier_uniform(linear1.weight)
# linear2=torch.nn.Linear(hiddenLayer1Size, hiddenLayer2Size)
# # xavier initializer
# torch.nn.init.xavier_uniform(linear2.weight)
# linear3=torch.nn.Linear(hiddenLayer2Size, 1)
# # xavier initializer
# torch.nn.init.xavier_uniform(linear3.weight)
# sigmoid = torch.nn.Sigmoid()
# tanh=torch.nn.Tanh()
# model = torch.nn.Sequential(linear1,dropout, tanh, linear2,dropout, tanh, linear3,dropout, sigmoid)
#Hypothesis using sigmoid
linear1=torch.nn.Linear(x_data_np.shape[1], 1, bias=True)
# xavier initializer
torch.nn.init.xavier_uniform(linear1.weight)
sigmoid = torch.nn.Sigmoid()
# model = torch.nn.Sequential(linear1,dropout, sigmoid)
model = torch.nn.Sequential(linear1, sigmoid)
if use_cuda:
lgr.info ("Using the GPU")
model = model.cuda() # On GPU
else:
lgr.info ("Using the CPU")
lgr.info('Model {}'.format(model))
# see https://github.com/facebookresearch/SentEval/blob/master/senteval/tools/classifier.py
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-1,momentum=0.9, weight_decay=1e-4)
# optimizer = torch.optim.Adam(model.parameters())
lgr.info('Optimizer {}'.format(optimizer))
Explanation: Define the NN model
First a simple two-layer network and then a more involved version
Initial weights selection
There are many ways to select the initial weights of a neural network architecture. A common scheme is random initialization, which sets the biases and weights of all the nodes in each hidden layer randomly, so that training starts from a random point of the parameter space from which an algorithm like SGD or Adam can then find a nearby local minimum.
We use a Xavier initializer, in effect (according to theory) initializing the weights of the network to values that should be closer to the optimum, and therefore require fewer epochs to train.
References:
nninit.xavier_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a uniform distribution.
nninit.xavier_normal(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a normal distribution.
nninit.kaiming_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. using a uniform distribution.
nninit.kaiming_normal(tensor, gain=1) - Fills tensor with values according to the method described in ["Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al.]
End of explanation
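A common alternative pattern (a sketch using the same torch.nn.init API referenced above) is to wrap the initialization in a small helper and let Module.apply() walk every Linear layer of the Sequential model for you:
def init_linear_xavier(m):
    # Xavier/Glorot uniform init for every Linear layer; zero the biases
    if isinstance(m, torch.nn.Linear):
        torch.nn.init.xavier_uniform(m.weight)
        m.bias.data.fill_(0.0)

model.apply(init_linear_xavier)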
import sympy as sp
sp.interactive.printing.init_printing(use_latex=True)
from IPython.display import display, Math, Latex
maths = lambda s: display(Math(s))
latex = lambda s: display(Latex(s))
#the loss function is as follows:
maths("\mathbf{Loss Function:} J(x, z) = -\sum_k^d[x_k \log z_k + (1-x_k)log(1-z_k)]")
Explanation: The cross-entropy loss function
A binary cross-entropy Criterion (which expects 0 or 1 valued targets) :
lua
criterion = nn.BCECriterion()
The BCE loss is defined as :
<img src="../images/bce2.png" align="center">
End of explanation
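Instead of spelling out the BCE expression by hand inside the training loop below, the same loss can be computed with PyTorch's built-in criterion; a minimal sketch:
# Equivalent built-in criterion: expects probabilities in [0, 1] and 0/1 targets
criterion = torch.nn.BCELoss()
cost = criterion(model(X), Y)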
import time
start_time = time.time()
epochs=20000
all_losses = []
for step in range(epochs):
optimizer.zero_grad()
hypothesis = model(X)
# cost/loss function
cost = -(Y * torch.log(hypothesis) + (1 - Y)
* torch.log(1 - hypothesis)).mean()
cost.backward()
optimizer.step()
# Keep loss
if step % 150 == 0:
loss = cost.data[0]
all_losses.append(loss)
if step % 4000 == 0:
print(step, cost.data.cpu().numpy())
# RuntimeError: can't convert CUDA tensor to numpy (it doesn't support GPU arrays).
# Use .cpu() to move the tensor to host memory first.
predicted = (model(X).data > 0.5).float()
# predicted = (model(X).data ).float() # This is like predict proba
predictions=predicted.cpu().numpy()
accuracy = (predicted == Y.data).float().mean()
print('TRAINING Accuracy: ' + str(accuracy))
# print ('TRAINING LOG_LOSS=' + str(log_loss(trainY, predictions)))
# R_SCORE=roc_auc_score(Y.data.cpu().numpy(),predictions )
# print ('TRAINING ROC AUC:' + str(R_SCORE))
end_time = time.time()
print ('{} {:6.3f} seconds'.format('GPU:', end_time-start_time))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(all_losses)
plt.show()
Explanation: Start training in Batches
End of explanation
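Note that the loop above is actually full-batch gradient descent despite the heading; a minimal mini-batch variant (a sketch reusing the same model and optimizer objects, relying on the earlier shuffle from train_test_split) would look like:
# Minimal mini-batch training sketch
batch_size = 256
n_samples = X.size(0)
for epoch in range(100):
    for b in range(0, n_samples, batch_size):
        xb, yb = X[b:b + batch_size], Y[b:b + batch_size]
        optimizer.zero_grad()
        out = model(xb)
        loss = -(yb * torch.log(out) + (1 - yb) * torch.log(1 - out)).mean()
        loss.backward()
        optimizer.step()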
model.eval()
# Validation data
x_data_np_val = np.array(testX.values, dtype=np.float32)
y_data_np_val = np.array(testY, dtype=np.float32)
y_data_np_val=y_data_np_val.reshape((y_data_np_val.shape[0],1)) # Must be reshaped for PyTorch!
print(x_data_np_val.shape, y_data_np_val.shape)
print(type(x_data_np_val), type(y_data_np_val))
if use_cuda:
lgr.info ("Using the GPU")
X_val = Variable(torch.from_numpy(x_data_np_val).cuda()) # Note the conversion for pytorch
Y_val = Variable(torch.from_numpy(y_data_np_val).cuda())
else:
lgr.info ("Using the CPU")
X_val = Variable(torch.from_numpy(x_data_np_val)) # Note the conversion for pytorch
Y_val = Variable(torch.from_numpy(y_data_np_val))
# VALIDATION
predicted_val = (model(X_val).data).float()
predictions_val=predicted_val.cpu().numpy()
accuracy_val = (predicted_val == Y_val.data).float().mean()
R_SCORE_VAL=roc_auc_score(Y_val.data.cpu().numpy(),predictions_val)
print ('VALIDATION ROC AUC:' + str(R_SCORE_VAL))
false_positive_rate, true_positive_rate, thresholds = roc_curve(testY, predictions_val)
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.title('LOG_LOSS=' + str(log_loss(testY, predictions_val)))
plt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([-0.1, 1.2])
plt.ylim([-0.1, 1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# Lab 5 Logistic Regression Classifier
import torch
from torch.autograd import Variable
import numpy as np
torch.manual_seed(777) # for reproducibility
xy = np.loadtxt('data-03-diabetes.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
# Make sure the shape and data are OK
print(x_data.shape, y_data.shape)
X = Variable(torch.from_numpy(x_data))
Y = Variable(torch.from_numpy(y_data))
# Hypothesis using sigmoid
linear = torch.nn.Linear(8, 1, bias=True)
sigmoid = torch.nn.Sigmoid()
model = torch.nn.Sequential(linear, sigmoid)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(10001):
optimizer.zero_grad()
hypothesis = model(X)
# cost/loss function
cost = -(Y * torch.log(hypothesis) + (1 - Y)
* torch.log(1 - hypothesis)).mean()
cost.backward()
optimizer.step()
if step % 200 == 0:
print(step, cost.data.numpy())
# Accuracy computation
predicted = (model(X).data > 0.5).float()
accuracy = (predicted == Y.data).float().mean()
print("\nHypothesis: ", hypothesis.data.numpy(), "\nCorrect (Y): ", predicted.numpy(), "\nAccuracy: ", accuracy)
Explanation: Cross validation, metrics, ROC_AUC etc
End of explanation |
12,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this week's homework assignment we want you to start playing around with different classifiers and try to submit a prediction to the Kaggle competition. Nothing fancy, just so you won't try to do that the first time in the last day before the due date.
This notebook is designed to assist you in playing around with those classifiers, though most of the code is already in the homework assignment writeup.
Step1: Data Loading
In the homework assignment you are going to use iris for the playing-around part; here we'll just use a sample of the Kaggle data.
Step2: All your work should be done on the training data set. To be able to make educated decisions on which classifier you're going to use, you should split it into train and validation data sets.
Step3: World of Classifiers
Time to start doing some classifications. We'll use all the classifiers you are required to use in the assignment on this data. We'll skip the KNN one; if you want a reminder on how to use it, see the previous discussions.
IMPORTANT NOTE!!!
For the Kaggle dataset you need to submit probabilities and not just class predictions. Don't worry, you don't need to code that, just use the predictSoft() function.
Decision Tree
Step4: The predictSoft method returns an $M \times C$ table in which for each point you have the probability of each class.
Step5: We can also compute the AUC for both the training and validation data sets.
Step6: Play with different parameters to see how AUC changes.
Printing decision tree
Funny enough, whoever wrote the decision tree classifier provided a printing mechanism. However, it only works up to depth 2, so not very useful for us.
Step7: Linear Classifier
Step8: <b> Note that we do not need to scale the data for decision tree. </b>
Step9: And the AUC IS
Step10: This is why we're using a validation data set. We can see already that for THIS specific configuration the decision tree is much better. It is very likely that it'll be better on the test data.
Neural Network
Yeah, even that is given to you in the mltools package. We'll use it in our examples. Having said that, if you want to use some more fancy packages you are more than welcome to do that.
Step11: After we construct the classifier, we can define the sizes of its layers and initialize their values with "init_weights".
Definition of nn.init_weights
Step12: WHAT DID WE DO WRONG?
Step13: The AUC results are bad because we just used a lame configuration of the NN. NN can be engineered until your last day, but some things should make sense to you.
One example is the option to change the activation function. This is the function that is in the inner layers. By default the code comes with the tanh, but the logistic (sigmoid) is also coded in and you can just specify it.
Step14: Writing your own activation function
Not surprisingly, you can also provide a custom activation function. Note that for the last layer you will probably always want the sigmoid function, so only change the inner layers' ones.
The function definition is this
Step15: Plotting
We've learned that one way of guessing how well we're doing with different model parameters is to plot the train and validation errors as a function of that parameter (e.g., k in the KNN or the degree in the linear classifier and regression).
Now it seems like there could be more parameters involved. One example is the degree and the regularizer value (see the HW assignment for more examples).
When there are two parameters you can simply use heatmaps. The X-axis and Y-axis represent the parameters and the "heat" is the validation/train error as a "third" dimension.
We're going to use a dummy function to show that. Let's assume we have two parameters p1 and p2 and the prediction accuracy is p1 + p2 (yup, that stupid). In the HW assignment it's actually the auc.
Step16: <h2> For homework | Python Code:
# Import all required libraries
from __future__ import division # For python 2.*
import numpy as np
import matplotlib.pyplot as plt
import mltools as ml
np.random.seed(0)
%matplotlib inline
Explanation: In this week's homework assignment we want you to start playing around with different classifiers and try to submit a prediction to the Kaggle competition. Nothing fancy, just so you won't try to do that the first time in the last day before the due date.
This notebook is designed to assist you in playing around with those classifiers, though most of the code is already in the homework assignment writeup.
End of explanation
# Data Loading
X = np.genfromtxt('data/X_train.txt', delimiter=None)
Y = np.genfromtxt('data/Y_train.txt', delimiter=None)
# The test data
Xte = np.genfromtxt('data/X_test.txt', delimiter=None)
Explanation: Data Loading
In the homework assignment you are going to use iris for the playing-around part; here we'll just use a sample of the Kaggle data.
End of explanation
Xtr, Xva, Ytr, Yva = ml.splitData(X, Y)
Xtr, Ytr = ml.shuffleData(Xtr, Ytr)
# Taking a subsample of the data so that trains faster. You should train on whole data for homework and Kaggle.
Xt, Yt = Xtr[:4000], Ytr[:4000]
Explanation: All your work should be done on the training data set. To be able to make educated decisions on which classifier you're going to use, you should split it into train and validation data sets.
End of explanation
# The decision tree classifier has minLeaf and maxDepth parameters. You should know what it means by now.
learner = ml.dtree.treeClassify(Xt, Yt, minLeaf=25, maxDepth=15)
# Prediction
probs = learner.predictSoft(Xte)
Explanation: World of Classifiers
Time to start doing some classifications. We'll use all the classifiers you are required to use in the assignment on this data. We'll skip the KNN one; if you want a reminder on how to use it, see the previous discussions.
IMPORTANT NOTE!!!
For the Kaggle dataset you need to submit probabilities and not just class predictions. Don't worry, you don't need to code that, just use the predictSoft() function.
Decision Tree
End of explanation
probs
Explanation: The predictSoft method returns an $M \times C$ table in which for each point you have the probability of each class.
End of explanation
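Since the Kaggle scoring needs the probability of class 1 for every test point, a typical way to dump the predictSoft output to a submission file looks roughly like this (a sketch; double-check the exact header and format required on the competition page):
# Sketch of a submission file: point ID plus P(Y=1) from predictSoft
Yte_soft = learner.predictSoft(Xte)
np.savetxt('Y_submit.txt',
           np.vstack((np.arange(Xte.shape[0]), Yte_soft[:, 1])).T,
           '%d, %.4f', header='ID,Prob1', comments='', delimiter=',')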
print("{0:>15}: {1:.4f}".format('Train AUC', learner.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', learner.auc(Xva, Yva)))
Explanation: We can also compute the AUC for both the training and validation data sets.
End of explanation
learner = ml.dtree.treeClassify()
learner.train(Xt, Yt, maxDepth=2)
print (learner)
Explanation: Play with different parameters to see how AUC changes.
Printing decision tree
Funny enough, whoever wrote the decision tree classifier provided a printing mechanism. However, it only works up to depth 2, so not very useful for us.
End of explanation
# Scaling the data (apply the same transform to the validation and test data)
XtrP, params = ml.rescale(Xt)
XvaP, _ = ml.rescale(Xva, params)
XteP, _ = ml.rescale(Xte, params)
print(XtrP.shape, XteP.shape)
Explanation: Linear Classifier
End of explanation
## Linear models:
learner = ml.linearC.linearClassify()
learner.train(XtrP, Yt, initStep=0.5, stopTol=1e-6, stopIter=100)
probs = learner.predictSoft(XteP)
Explanation: <b> Note that we do not need to scale the data for decision tree. </b>
End of explanation
print("{0:>15}: {1:.4f}".format('Train AUC',learner.auc(XtrP, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', learner.auc(Xva, Yva)))
Explanation: And the AUC IS:
End of explanation
nn = ml.nnet.nnetClassify()
Explanation: This is why we're using a validation data set. We can see already that for THIS specific configuration the decision tree is much better. It is very likely that it'll be better on the test data.
Neural Network
Yeah, even that is given to you in the mltools package. We'll use it in our examples. Having said that, if you want to use some more fancy packages you are more than welcome to do that.
End of explanation
nn.init_weights([14, 5, 3], 'random', Xt, Yt)
nn.train(Xt, Yt, stopTol=1e-8, stepsize=.25, stopIter=50)
Explanation: After we construct the classifier, we can define the sizes of its layers and initialize their values with "init_weights".
Definition of nn.init_weights:
nn.init_weights(self, sizes, init, X, Y)
From the method description: sizes = [Ninput, N1, N2, ... , Noutput], where Ninput = # of input features, and Noutput = # of classes
Training the model using gradient descent, we can track the surrogate loss (here, MSE loss on the output vector, compared to a 1-of-K representation of the class), as well as the 0/1 classification loss (error rate):
End of explanation
# Need to specify the right number of input and output layers.
nn.init_weights([Xt.shape[1], 5, len(np.unique(Yt))], 'random', Xt, Yt)
nn.train(Xt, Yt, stopTol=1e-8, stepsize=.25, stopIter=50) # Really small stopIter so it will stop fast :)
print("{0:>15}: {1:.4f}".format('Train AUC',nn.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', nn.auc(Xva, Yva)))
Explanation: WHAT DID WE DO WRONG?
End of explanation
nn.setActivation('logistic')
nn.train(Xt, Yt, stopTol=1e-8, stepsize=.25, stopIter=100)
print("{0:>15}: {1:.4f}".format('Train AUC',nn.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', nn.auc(Xva, Yva)))
Explanation: The AUC results are bad because we just used a lame configuration of the NN. NNs can be engineered endlessly, but some things should make sense to you.
One example is the option to change the activation function, i.e. the function applied in the inner (hidden) layers. By default the code comes with tanh, but the logistic (sigmoid) is also coded in and you can just specify it.
End of explanation
# Here's a dummy activation method (f(x) = x)
sig = lambda z: np.atleast_2d(z)
dsig = lambda z: np.atleast_2d(1)
nn = ml.nnet.nnetClassify()
nn.init_weights([Xt.shape[1], 5, len(np.unique(Yt))], 'random', Xt, Yt)
nn.setActivation('custom', sig, dsig)
nn.train(Xt, Yt, stopTol=1e-8, stepsize=.25, stopIter=100)
print("{0:>15}: {1:.4f}".format('Train AUC',nn.auc(Xt, Yt)))
print("{0:>15}: {1:.4f}".format('Validation AUC', nn.auc(Xva, Yva)))
Explanation: Writing your own activation function
Not surprisingly, you can also provide a custom activation function. Note that for the last layer you will probably always want the sigmoid function, so only change the activation of the inner layers.
The function definition is this:
setActivation(self, method, sig=None, d_sig=None, sig_0=None, d_sig_0=None)
You can call it with method='custom' and then specify both sig and d_sig. (the '0' ones are for the last layer)
End of explanation
p1 = np.arange(5)
p2 = np.arange(5)
auc = np.zeros([p1.shape[0], p2.shape[0]])
for i in range(p1.shape[0]):
for j in range(p2.shape[0]):
auc[i][j] = p1[i] + p2[j]
auc
f, ax = plt.subplots(1, 1, figsize=(8, 5))
cax = ax.matshow(auc)
f.colorbar(cax)
ax.set_xticks(p1)
ax.set_xticklabels(['%d' % p for p in p1])
ax.set_yticks(p2)
ax.set_yticklabels(['%d' % p for p in p2])
plt.show()
Explanation: Plotting
We've learned that one way of judging how well we're doing with different model parameters is to plot the train and validation errors as a function of that parameter (e.g., k in KNN, or degree in the linear classifier and regression).
Now what if there are more parameters involved? One example is the degree and the regularizer value (see the HW assignment for more examples).
When there are two parameters you can simply use heatmaps. The X-axis and Y-axis represent the parameters, and the "heat" is the validation/train error as a "third" dimension.
We're going to use a dummy function to show that. Let's assume we have two parameters p1 and p2 and the prediction accuracy is p1 + p2 (yup, that stupid). In the HW assignment it's actually the AUC.
End of explanation
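For the homework itself you would replace the dummy p1 + p2 values with a real metric. A hedged sketch of what that could look like for the decision tree used earlier (the specific maxDepth/minLeaf values below are just illustrative guesses):
depths = [2, 5, 10, 15]      # hypothetical maxDepth values to scan
leaves = [4, 16, 64, 256]    # hypothetical minLeaf values to scan
auc_grid = np.zeros((len(depths), len(leaves)))

for i, depth in enumerate(depths):
    for j, leaf in enumerate(leaves):
        learner_ij = ml.dtree.treeClassify(Xt, Yt, maxDepth=depth, minLeaf=leaf)
        auc_grid[i, j] = learner_ij.auc(Xva, Yva)   # validation AUC is the "heat"

# auc_grid can now be displayed with the same matshow/colorbar code as above.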
probs
# Create the data for submission by taking the P(Y=1) column from probs and just add a running index as the first column.
Y_sub = np.vstack([np.arange(Xte.shape[0]), probs[:, 1]]).T
# We specify the header (ID, Prob1) and also specify the comments as '' so the header won't be commented out with
# the # sign.
np.savetxt('data/Y_sub.txt', Y_sub, '%d, %.5f', header='ID,Prob1', comments='', delimiter=',')
Explanation: <h2> For homework: <br> <br>
x and y will be hyperparameters of the model <br>
f will be some performance metric (error, AUC etc.) </h2>
<br>
Submitting Predictions
Let's assume that the last classifier we ran was the best one (after we used all that we know to verify it is the best one, including that plot from the previous block). Now let's run it on the test data and create a file that can be submitted.
Each line in the file is a point id and the probability P(Y=1). There's also a header line. Here's how you can create it simply from the probs matrix.
End of explanation |
12,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Merging annotations from tiled arrays
Overview
Step1: 1. Connect girder client and set parameters
Step2: 2. Polygon merger
The Polygon_merger() is the top level function for performing the merging.
Step3: Required arguments for initialization
Ground truth codes file
This contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns
Step4: maskpaths
These are absolute paths for the masks generated by tiled analysis to be used.
Step5: Note that the pattern _left-123_ and _top-123_ is assumed to encode the x and y offset
of the mask at base magnification. If you prefer some other convention, you will need to manually provide the
parameter roi_offsets to the method Polygon_merger.set_roi_bboxes.
Step6: 3. Initialize and run the merger
To keep things clean we discard background contours (in this case, stroma) that
are now enclosed with another contour. See docs for masks_to_annotations_handler.py
if this is confusing. This is purely aesthetic.
Step7: This is the result
Step8: 4. Visualize results on HistomicsTK | Python Code:
import os
CWD = os.getcwd()
import os
import girder_client
from pandas import read_csv
from histomicstk.annotations_and_masks.polygon_merger import Polygon_merger
from histomicstk.annotations_and_masks.masks_to_annotations_handler import (
get_annotation_documents_from_contours, )
Explanation: Merging annotations from tiled arrays
Overview:
This notebook describes how to merge annotations generated by tiled analysis of a whole-slide image. Since tiled analysis is carried out on small tiles, the annotations produced by image segmentation algorithms will be disjoint at the tile boundaries, prohibiting analysis of large structures that span multiple tiles.
The example presented below addresses the case where the annotations are stored in an array format that preserves the spatial organization of tiles. This scenario arises when iterating through the columns and rows of a tiled representation of a whole-slide image. Analysis of the organized array format is faster and preferred since the interfaces where annotations need to be merged are known. In cases where the annotations to be merged do not come from tiled analysis, or where the tile results are not organized, an alternative method based on R-trees provides a slightly slower solution.
This extends on some of the work described in Amgad et al, 2019:
Mohamed Amgad, Habiba Elfandy, Hagar Hussein, ..., Jonathan Beezley, Deepak R Chittajallu, David Manthey, David A Gutman, Lee A D Cooper, Structured crowdsourcing enables convolutional segmentation of histology images, Bioinformatics, 2019, btz083
This is a sample result:
Implementation summary
In the tiled array approach the tiles must be rectangular and unrotated. The algorithm used merges polygons in coordinate space so that almost-arbitrarily large structures can be handled without encountering memory issues. The algorithm works as follows:
Extract contours from the given masks using functionality from the masks_to_annotations_handler.py, making sure to account for contour offset so that all coordinates are relative to whole-slide image frame.
Identify contours that touch tile interfaces.
Identify shared edges between tiles.
For each shared edge, find contours that neighbor each other (using bounding box location) and verify that they should be paired using shapely.
Using 4-connectivity link all pairs of contours that are to be merged.
Use morphologic processing to dilate and fill gaps in the linked pairs and then erode to generate the final merged contour.
These initial steps ensure that the number of comparisons made is << n^2. This is important since algorithm complexity plays a key role, as whole-slide images may contain tens of thousands of annotated structures.
Where to look?
histomicstk/
|_annotations_and_masks/
|_polygon_merger.py
|_tests/
|_ test_polygon_merger.py
|_ test_annotations_to_masks_handler.py
End of explanation
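To make step 4 above a bit more concrete, here is a tiny, hypothetical shapely check (made-up coordinates) of whether two contours that meet at a shared tile edge should be linked for merging:
from shapely.geometry import Polygon

# Two rectangles that meet along the vertical line x = 100 (a shared tile edge).
left_contour = Polygon([(80, 10), (100, 10), (100, 40), (80, 40)])
right_contour = Polygon([(100, 10), (120, 10), (120, 40), (100, 40)])

# A small buffer tolerates 1-pixel gaps introduced by rasterization at the tile boundary.
should_merge = left_contour.buffer(1).intersects(right_contour.buffer(1))
print(should_merge)  # True -> these two contours get linked (4-connectivity) and merged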
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = '5d586d76bd4404c6b1f286ae'
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(interactive=True)
# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# read GTCodes dataframe
PTESTS_PATH = os.path.join(CWD, '..', '..', 'tests')
GTCODE_PATH = os.path.join(PTESTS_PATH, 'test_files', 'sample_GTcodes.csv')
GTCodes_df = read_csv(GTCODE_PATH)
GTCodes_df.index = GTCodes_df.loc[:, 'group']
# This is where masks for adjacent rois are saved
MASK_LOADPATH = os.path.join(
PTESTS_PATH,'test_files', 'annotations_and_masks', 'polygon_merger_roi_masks')
maskpaths = [
os.path.join(MASK_LOADPATH, j) for j in os.listdir(MASK_LOADPATH)
if j.endswith('.png')]
Explanation: 1. Connect girder client and set parameters
End of explanation
print(Polygon_merger.__doc__)
print(Polygon_merger.__init__.__doc__)
print(Polygon_merger.run.__doc__)
Explanation: 2. Polygon merger
The Polygon_merger() is the top level function for performing the merging.
End of explanation
GTCodes_df.head()
Explanation: Required arguments for initialization
Ground truth codes file
This contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns:
group: group name of annotation (string), e.g. "mostly_tumor"
GT_code: int, desired ground truth code (in the mask). Pixels of this value belong to the corresponding group (class)
color: str, rgb format, e.g. rgb(255,0,0).
NOTE:
Zero pixels have special meaning and do NOT encode specific ground truth class. Instead, they simply mean 'Outside ROI' and should be IGNORED during model training or evaluation.
End of explanation
[os.path.split(j)[1] for j in maskpaths[:5]]
Explanation: maskpaths
These are absolute paths for the masks generated by tiled analysis to be used.
End of explanation
print(Polygon_merger.set_roi_bboxes.__doc__)
Explanation: Note that the pattern _left-123_ and _top-123_ is assumed to encode the x and y offset
of the mask at base magnification. If you prefer some other convention, you will need to manually provide the
parameter roi_offsets to the method Polygon_merger.set_roi_bboxes.
End of explanation
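For example, if your filenames use a different scheme, you could extract the offsets yourself with a small helper like the hypothetical one below and then assemble them into whatever structure the set_roi_bboxes docstring printed above asks for. The regular expressions assume the default _left-123_ / _top-456_ style and would need adapting to your own naming convention:
import os
import re

def parse_offsets(maskpath):
    # Pull the x (left) and y (top) offsets, in base-magnification pixels, out of the filename.
    fname = os.path.basename(maskpath)
    left = int(re.search(r'_left-(\d+)', fname).group(1))
    top = int(re.search(r'_top-(\d+)', fname).group(1))
    return left, top

offsets = {os.path.basename(p): parse_offsets(p) for p in maskpaths}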
pm = Polygon_merger(
maskpaths=maskpaths, GTCodes_df=GTCodes_df,
discard_nonenclosed_background=True, verbose=1,
monitorPrefix='test')
contours_df = pm.run()
Explanation: 3. Initialize and run the merger
To keep things clean we discard background contours (in this case, stroma) that
are now enclosed within another contour. See the docs for masks_to_annotations_handler.py
if this is confusing. This is purely aesthetic.
End of explanation
contours_df.head()
Explanation: This is the result
End of explanation
# deleting existing annotations in target slide (if any)
existing_annotations = gc.get('/annotation/item/' + SAMPLE_SLIDE_ID)
for ann in existing_annotations:
gc.delete('/annotation/%s' % ann['_id'])
# get list of annotation documents
annotation_docs = get_annotation_documents_from_contours(
contours_df.copy(), separate_docs_by_group=True,
docnamePrefix='test',
verbose=False, monitorPrefix=SAMPLE_SLIDE_ID + ": annotation docs")
# post annotations to slide -- make sure it posts without errors
for annotation_doc in annotation_docs:
resp = gc.post(
"/annotation?itemId=" + SAMPLE_SLIDE_ID, json=annotation_doc)
Explanation: 4. Visualize results on HistomicsTK
End of explanation |
12,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading the data
Step1: Cleaning
Step2: Most bought product
Step3: Most of the reviews (68%) are rated 5
Step4: Separate what's Positive and what's Negative
Step5: This data is imbalanced: as we can see, 84% of the products tend to have Positive sentiment, i.e., ratings of 4 or above.
Step6: Split dataset
Step7: 20% test data and 80% train data
Step8: Word Vector and document-term matrix
Step9: Transform documents into a document-term matrix.
What's a document-term matrix?
It is a matrix that relates each document to the words it contains and their frequencies. In this example, there are 57485 words as dimensions and 166752 rows representing the documents (remember that we removed the ratings equal to 3)
Example
Step10: Build the model using 80% of the data
Step11: Predict using test data (20%)
Step12: Metrics - Accuracy, ROC AUC, Confusion Matrix
Step13: Accuracy
Step14: ROC AUC
Step15: Confusion matrix
Step16: Evaluate the most positive and most negative reviews on Giraffes
Step17: Top 5 Positive reviews
Step18: Top 5 Negative reviews | Python Code:
products = pd.read_csv('amazon_baby.csv')
products.head()
products.count()
products.shape
def cleanNaN(value):
if pd.isnull(value):
return ""
else:
return value
Explanation: Loading the data
End of explanation
products['review'] = products['review'].apply(cleanNaN)
products['name'] = products['name'].apply(cleanNaN)
Explanation: Cleaning
End of explanation
products['name'].value_counts()
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(giraffe_reviews['rating'])
#giraffe_reviews['rating'].plot(y='rating', orientation='horizontal', kind='hist', bins=5)
Explanation: Most bought product
End of explanation
giraffe_reviews.rating.value_counts(normalize=True)
Explanation: Most of the reviews (68%) are rated 5
End of explanation
products = products[products['rating'] != 3]
products['sentiment'] = products['rating'] >=4
products.head()
products.sentiment.value_counts()
Explanation: Separate what's Positive and what's Negative
End of explanation
products.sentiment.value_counts(normalize=True)
products.count()
Explanation: This data is imbalanced: as we can see, 84% of the products tend to have Positive sentiment, i.e., ratings of 4 or above.
End of explanation
X = products['review']
y = products['sentiment']
Explanation: Split dataset
End of explanation
# split into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print('Train: ', X_train.shape)
print('Test: ', X_test.shape)
Explanation: 20% test data and 80% train data
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
# learn training data vocabulary, then create document-term matrix
vect.fit(X_train)
#vocabulary contains the word count vector with all the words and how much they appear per review
vect.vocabulary_["great"]
Explanation: Word Vector and document-term matrix
End of explanation
X_train_dtm = vect.transform(X_train)
X_train_dtm.shape
# transform testing data (using fitted vocabulary) into a document-term matrix
X_test_dtm = vect.transform(X_test)
X_test_dtm.shape
Explanation: Transform documents into a document-term matrix.
What's a document-term matrix?
It is a matrix that relates each document to the words it contains and their frequencies. In this example, there are 57485 words as dimensions and 166752 rows representing the documents (remember that we removed the ratings equal to 3).
Example:
| |I|like|hate|Vulli|
|--|--|--|--|--|
|Doc1 |1 |1 |0 |1|
|Doc2 |1 |0 |1 |1|
End of explanation
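A tiny, self-contained illustration of the same idea on two toy reviews (not the Amazon data):
from sklearn.feature_extraction.text import CountVectorizer

toy_reviews = ["I like Vulli", "I hate Vulli"]
toy_vect = CountVectorizer()
toy_dtm = toy_vect.fit_transform(toy_reviews)   # sparse document-term matrix

print(toy_vect.get_feature_names())  # the word "dimensions" (use get_feature_names_out() on newer sklearn)
print(toy_dtm.toarray())             # one row per document, one column per word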
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train_dtm, y_train)
Explanation: Build the model using 80% of the data
End of explanation
# class predictions and predicted probabilities
y_pred_class = logreg.predict(X_test_dtm)
y_pred_prob = logreg.predict_proba(X_test_dtm)[:, 1]
Explanation: Predict using test data (20%)
End of explanation
# calculate accuracy and AUC
from sklearn import metrics
Explanation: Metrics - Accuracy, ROC AUC, Confusion Matrix
End of explanation
print(metrics.accuracy_score(y_test, y_pred_class))
Explanation: Accuracy:
End of explanation
roc_auc = metrics.roc_auc_score(y_test, y_pred_prob)
print(roc_auc)
roccurve = metrics.roc_curve(y_test, y_pred_prob)
Explanation: ROC AUC:
End of explanation
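Since roccurve above holds the false positive rates, true positive rates and thresholds returned by metrics.roc_curve, we can also plot the curve itself. A minimal sketch:
fpr, tpr, thresholds = roccurve
plt.plot(fpr, tpr, label='ROC (AUC = %.3f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')  # chance line
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')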
print(metrics.confusion_matrix(y_test, y_pred_class))
Explanation: Confusion matrix:
End of explanation
giraffe_vect_dtm = vect.transform(giraffe_reviews['review'])
giraffe_vect_dtm.shape
giraffe_reviews['predicted_sentiment'] = logreg.predict(giraffe_vect_dtm)
giraffe_reviews['predicted_sentiment_prob'] = logreg.predict_proba(giraffe_vect_dtm)[:, 1]
giraffe_reviews.head()
# Sort by the predicted probability so that the head/tail really are the most/least positive reviews.
giraffe_reviews = giraffe_reviews.sort_values('predicted_sentiment_prob', ascending=False)
Explanation: Evaluate the most positive and most negative reviews on Giraffes
End of explanation
pd.set_option('display.max_colwidth', -1)
giraffe_reviews[:5].review
Explanation: Top 5 Positive reviews
End of explanation
giraffe_reviews[-5:].review
Explanation: Top 5 Negative reviews
End of explanation |
12,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark.
Step1: Script settings
Step2: We create one big dataframe, the columns are the sensors | Python Code:
import os
import sys
import inspect
import numpy as np
import datetime as dt
import time
import pytz
import pandas as pd
import pdb
import tmpo
#import charts
from opengrid import config
from opengrid.library import houseprint
c=config.Config()
DEV = c.get('env', 'type') == 'dev' # DEV is True if we are in development environment, False if on the droplet
if not DEV:
# production environment: don't try to display plots
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator
try:
if os.path.exists(c.get('tmpo', 'data')):
path_to_tmpo_data = c.get('tmpo', 'data')
except:
path_to_tmpo_data = None
if DEV:
if c.get('env', 'plots') == 'inline':
%matplotlib inline
else:
%matplotlib qt
else:
pass # don't try to render plots
plt.rcParams['figure.figsize'] = 12,8
Explanation: This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark.
End of explanation
number_of_days = 7
Explanation: Script settings
End of explanation
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
hp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data)
start = pd.Timestamp(time.time() - number_of_days*86400, unit='s')
sensors = hp.get_sensors()
#sensors.remove('b325dbc1a0d62c99a50609e919b9ea06')
for sensor in sensors:
s = sensor.get_data(head=start, resample='s')
try:
s = s.resample(rule='60s', how='max')
s = s.diff()*3600/60
# plot with charts (don't show it) and save html
charts.plot(pd.DataFrame(s), stock=True,
save=os.path.join(c.get('data', 'folder'), 'figures', 'TimeSeries_'+sensor.key+'.html'), show=True)
except:
pass
len(sensors)
Explanation: We create one big dataframe; the columns are the sensors
End of explanation |
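The loop above only plots each sensor separately; if you actually want the single wide dataframe described here (one column per sensor), a hedged sketch could look like this. Using sensor.key as the column name is an assumption:
frames = {}
for sensor in sensors:
    try:
        s = sensor.get_data(head=start, resample='s')
        s = s.resample(rule='60s', how='max').diff()*3600/60
        frames[sensor.key] = s
    except Exception:
        pass

df = pd.DataFrame(frames)  # one column per sensor
df.head()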
12,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
Step5: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise
Step7: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer to the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', shape=[4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', shape=[2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without a non-linear activation function (do not call softmax here).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.2 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector it while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.3 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here.
Exercise: Compute the cost below using the function above.
End of explanation
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([train_op, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.4 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation |
12,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to optimization
The basic components
The objective function (also called the 'cost' function)
Step1: The "optimizer"
Step2: Additional components
"Box" constraints
Step3: The gradient and/or hessian
Step4: The penalty functions
$\psi(x) = f(x) + k*p(x)$
Step5: Optimizer classifications
Constrained versus unconstrained (and importantly LP and QP)
Step6: The typical optimization algorithm (local or global) is unconstrained. Constrained algorithms tend strongly to be local, and also often use LP/QP approximations. Hence, most optimization algorithms are good either for quick linear/quadratic approximation under some constraints, or are intended for nonlinear functions without constraints. Any information about the problem that impacts the potential solution can be seen as constraining information. Constraining information is typically applied as a penatly, or as a box constraint on an input. The user is thus typically forced to pick whether they want to apply constraints but treat the problem as a LP/QP approximation, or to ignore the constraining information in exchange for a nonliear solver.
Step7: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took in finding the solution. scipy.optimize has a version of this, with options={'retall'
Step8: Global optimizers tend to be much slower than local optimizers, and often use randomness to pick points within some box constraints instead of starting with an initial guess. The choice then is between algorithms that are non-deterministic and algorithms that are deterministic but depend very strongly on the selected starting point.
Local optimization algorithms have names like "gradient descent" and "steepest descent", while global optimizations tend to use things like "stocastic" and "genetic" algorithms.
Not covered
Step9: Least-squares tends to be chosen when the user wants a measure of the covariance, typically as an error estimate.
Integer programming
Integer programming (IP) or Mixed-integer programming (MIP) requires special optimizers that only select parameter values from the set of integers. These optimizers are typically used for things like cryptography, or other optimizations over a discrete set of possible solutions.
Typical uses
Function minimization
Data fitting
Root finding
Step10: Parameter estimation
Step11: Standard diagnostic tools
Eyeball the plotted solution against the objective
Run several times and take the best result
Analyze a log of intermediate results, per iteration
Rare | Python Code:
import numpy as np
objective = np.poly1d([1.3, 4.0, 0.6])
print(objective)
Explanation: Introduction to optimization
The basic components
The objective function (also called the 'cost' function)
End of explanation
import scipy.optimize as opt
x_ = opt.fmin(objective, [3])
print("solved: x={}".format(x_))
%matplotlib notebook
x = np.linspace(-4,1,101)
import matplotlib.pylab as mpl
mpl.plot(x, objective(x))
mpl.plot(x_, objective(x_), 'ro')
Explanation: The "optimizer"
End of explanation
import scipy.special as ss
import scipy.optimize as opt
import numpy as np
import matplotlib.pylab as mpl
x = np.linspace(2, 7, 200)
# 1st order Bessel
j1x = ss.j1(x)
mpl.plot(x, j1x)
# use scipy.optimize's more modern "results object" interface
result = opt.minimize_scalar(ss.j1, method="bounded", bounds=[2, 4])
j1_min = ss.j1(result.x)
mpl.plot(result.x, j1_min,'ro')
Explanation: Additional components
"Box" constraints
End of explanation
import mystic.models as models
print(models.rosen.__doc__)
import mystic
mystic.model_plotter(mystic.models.rosen, kwds='-f -d -x 1 -b "-3:3:.1, -1:5:.1, 1"')
import scipy.optimize as opt
import numpy as np
# initial guess
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0)
print(result.x)
# number of function evaluations
print(result.nfev)
# again, but this time provide the derivative
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print(result.x)
# number of function evaluations and derivative evaluations
print(result.nfev, result.njev)
print('')
# however, note for a different x0...
for i in range(5):
x0 = np.random.randint(-20,20,5)
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print("{} @ {} evals".format(result.x, result.nfev))
Explanation: The gradient and/or hessian
End of explanation
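The runs above only pass the gradient (jac); some methods can exploit the Hessian as well. A minimal sketch using scipy's built-in rosen_hess with a Newton-CG solve:
import scipy.optimize as opt

x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der, hess=opt.rosen_hess,
                      method='Newton-CG')
print(result.x)
# number of function, gradient, and Hessian evaluations
print(result.nfev, result.njev, result.nhev)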
# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp
'''
Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2
Subject to: x0**3 - x1 == 0
x1 >= 1
'''
import numpy as np
def objective(x, sign=1.0):
return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def derivative(x, sign=1.0):
dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
dfdx1 = sign*(2*x[0] - 4*x[1])
return np.array([ dfdx0, dfdx1 ])
# unconstrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),
jac=derivative, method='SLSQP', options={'disp': True})
print("unconstrained: {}".format(result.x))
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - 1]),
'jac' : lambda x: np.array([0.0, 1.0])})
# constrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,
constraints=cons, method='SLSQP', options={'disp': True})
print("constrained: {}".format(result.x))
Explanation: The penalty functions
$\psi(x) = f(x) + k*p(x)$
End of explanation
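The SLSQP run above enforces the constraints directly; a penalty formulation instead folds the constraint violations into the objective and lets an unconstrained solver do the work. A hedged sketch of the same maximization with a simple quadratic penalty (the weight k = 100 is an arbitrary choice):
import scipy.optimize as opt

def penalty(x):
    # quadratic penalty for violating x0**3 - x1 == 0 and x1 >= 1
    return (x[0]**3 - x[1])**2 + min(0.0, x[1] - 1.0)**2

def penalized_objective(x, k=100.0):
    # maximize f by minimizing -f plus k times the constraint penalty
    return objective(x, -1.0) + k * penalty(x)

result = opt.minimize(penalized_objective, [-1.0, 1.0], method='Nelder-Mead')
print("penalized solution: {}".format(result.x))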
# from scipy.optimize.minimize documentation
'''
**Unconstrained minimization**
Method *Nelder-Mead* uses the Simplex algorithm [1]_, [2]_. This
algorithm has been successful in many applications but other algorithms
using the first and/or second derivatives information might be preferred
for their better performances and robustness in general.
Method *Powell* is a modification of Powell's method [3]_, [4]_ which
is a conjugate direction method. It performs sequential one-dimensional
minimizations along each vector of the directions set (`direc` field in
`options` and `info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method *CG* uses a nonlinear conjugate gradient algorithm by Polak and
Ribiere, a variant of the Fletcher-Reeves method described in [5]_ pp.
120-122. Only the first derivatives are used.
Method *BFGS* uses the quasi-Newton method of Broyden, Fletcher,
Goldfarb, and Shanno (BFGS) [5]_ pp. 136. It uses the first derivatives
only. BFGS has proven good performance even for non-smooth
optimizations. This method also returns an approximation of the Hessian
inverse, stored as `hess_inv` in the OptimizeResult object.
Method *Newton-CG* uses a Newton-CG algorithm [5]_ pp. 168 (also known
as the truncated Newton method). It uses a CG method to the compute the
search direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm.
Method *Anneal* uses simulated annealing, which is a probabilistic
metaheuristic algorithm for global optimization. It uses no derivative
information from the function being optimized.
Method *dogleg* uses the dog-leg trust-region algorithm [5]_
for unconstrained minimization. This algorithm requires the gradient
and Hessian; furthermore the Hessian is required to be positive definite.
Method *trust-ncg* uses the Newton conjugate gradient trust-region
algorithm [5]_ for unconstrained minimization. This algorithm requires
the gradient and either the Hessian or a function that computes the
product of the Hessian with a given vector.
**Constrained minimization**
Method *L-BFGS-B* uses the L-BFGS-B algorithm [6]_, [7]_ for bound
constrained minimization.
Method *TNC* uses a truncated Newton algorithm [5]_, [8]_ to minimize a
function with variables subject to bounds. This algorithm uses
gradient information; it is also called Newton Conjugate-Gradient. It
differs from the *Newton-CG* method described above as it wraps a C
implementation and allows each variable to be given upper and lower
bounds.
Method *COBYLA* uses the Constrained Optimization BY Linear
Approximation (COBYLA) method [9]_, [10]_, [11]_. The algorithm is
based on linear approximations to the objective function and each
constraint. The method wraps a FORTRAN implementation of the algorithm.
Method *SLSQP* uses Sequential Least SQuares Programming to minimize a
function of several variables with any combination of bounds, equality
and inequality constraints. The method wraps the SLSQP Optimization
subroutine originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into large
floating values.
'''
Explanation: Optimizer classifications
Constrained versus unconstrained (and importantly LP and QP)
End of explanation
import scipy.optimize as opt
# constrained: linear (i.e. A*x + b)
print(opt.cobyla.fmin_cobyla)
print(opt.linprog)
# constrained: quadratic programming (i.e. up to x**2)
print(opt.fmin_slsqp)
# http://cvxopt.org/examples/tutorial/lp.html
'''
minimize: f = 2*x0 + x1
subject to:
-x0 + x1 <= 1
x0 + x1 >= 2
x1 >= 0
x0 - 2*x1 <= 4
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
A = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])
cost = cvx.matrix([ 2.0, 1.0 ])
sol = cvx_solvers.lp(cost, A, b)
print(sol['x'])
# http://cvxopt.org/examples/tutorial/qp.html
'''
minimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2
subject to:
x1 >= 0
x2 >= 0
x1 + x2 == 1
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = 2*cvx.matrix([ [2, .5], [.5, 1] ])
p = cvx.matrix([1.0, 1.0])
G = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])
h = cvx.matrix([0.0,0.0])
A = cvx.matrix([1.0, 1.0], (1,2))
b = cvx.matrix(1.0)
sol = cvx_solvers.qp(Q, p, G, h, A, b)
print(sol['x'])
Explanation: The typical optimization algorithm (local or global) is unconstrained. Constrained algorithms tend strongly to be local, and also often use LP/QP approximations. Hence, most optimization algorithms are good either for quick linear/quadratic approximation under some constraints, or are intended for nonlinear functions without constraints. Any information about the problem that impacts the potential solution can be seen as constraining information. Constraining information is typically applied as a penalty, or as a box constraint on an input. The user is thus typically forced to pick whether they want to apply constraints but treat the problem as a LP/QP approximation, or to ignore the constraining information in exchange for a nonlinear solver.
End of explanation
import scipy.optimize as opt
# probabilistic solvers that use random hopping/mutations
print(opt.differential_evolution)
print(opt.basinhopping)
import scipy.optimize as opt
# bounds instead of an initial guess
bounds = [(-10., 10)]*5
for i in range(10):
result = opt.differential_evolution(opt.rosen, bounds)
# result and number of function evaluations
print(result.x, '@ {} evals'.format(result.nfev))
Explanation: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took in finding the solution. scipy.optimize has a version of this, with options={'retall':True}, which returns the solver trajectory.
EXERCISE: Solve the constrained programming problem by any of the means above.
Minimize: f = -1x[0] + 4x[1]
Subject to: <br>
-3x[0] + 1x[1] <= 6 <br>
1x[0] + 2x[1] <= 4 <br>
x[1] >= -3 <br>
where: -inf <= x[0] <= inf
Local versus global
End of explanation
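One possible way to tackle the exercise above is scipy.optimize.linprog, which wants the problem as "minimize c @ x subject to A_ub @ x <= b_ub" plus simple bounds. A sketch (the x[1] >= -3 condition is expressed as a bound):
import scipy.optimize as opt

c = [-1, 4]                      # minimize -x0 + 4*x1
A_ub = [[-3, 1],                 # -3*x0 +   x1 <= 6
        [ 1, 2]]                 #    x0 + 2*x1 <= 4
b_ub = [6, 4]
bounds = [(None, None),          # x0 unbounded
          (-3, None)]            # x1 >= -3
result = opt.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, result.fun)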
import scipy.optimize as opt
import scipy.stats as stats
import numpy as np
# Define the function to fit.
def function(x, a, b, f, phi):
result = a * np.exp(-b * np.sin(f * x + phi))
return result
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print("target parameters: {}".format(true_params))
x = np.linspace(0, 2*np.pi, 25)
exact = function(x, *true_params)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
estimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)
print("solved parameters: {}".format(estimated_params))
# err_est is an estimate of the covariance matrix of the estimates
print("covarance: {}".format(err_est.diagonal()))
import matplotlib.pylab as mpl
mpl.plot(x, noisy, 'ro')
mpl.plot(x, function(x, *estimated_params))
Explanation: Global optimizers tend to be much slower than local optimizers, and often use randomness to pick points within some box constraints instead of starting with an initial guess. The choice then is between algorithms that are non-deterministic and algorithms that are deterministic but depend very strongly on the selected starting point.
Local optimization algorithms have names like "gradient descent" and "steepest descent", while global optimizers tend to use things like "stochastic" and "genetic" algorithms.
Not covered: other exotic types
Other important special cases:
Least-squares fitting
End of explanation
import numpy as np
import scipy.optimize as opt
def system(x,a,b,c):
x0, x1, x2 = x
eqs= [
3 * x0 - np.cos(x1*x2) + a, # == 0
x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0
np.exp(-x0*x1) + 20*x2 + c # == 0
]
return eqs
# coefficients
a = -0.5
b = 1.06
c = (10 * np.pi - 3.0) / 3
# initial guess
x0 = [0.1, 0.1, -0.1]
# Solve the system of non-linear equations.
result = opt.root(system, x0, args=(a, b, c))
print("root:", result.x)
print("solution:", result.fun)
Explanation: Least-squares tends to be chosen when the user wants a measure of the covariance, typically as an error estimate.
Integer programming
Integer programming (IP) or Mixed-integer programming (MIP) requires special optimizers that only select parameter values from the set of integers. These optimizers are typically used for things like cryptography, or other optimizations over a discrete set of possible solutions.
Typical uses
Function minimization
Data fitting
Root finding
End of explanation
import numpy as np
import scipy.stats as stats
# Create clean data.
x = np.linspace(0, 4.0, 100)
y = 1.5 * np.exp(-0.2 * x) + 0.3
# Add a bit of noise.
noise = 0.1 * stats.norm.rvs(size=100)
noisy_y = y + noise
# Fit noisy data with a linear model.
linear_coef = np.polyfit(x, noisy_y, 1)
linear_poly = np.poly1d(linear_coef)
linear_y = linear_poly(x)
# Fit noisy data with a quadratic model.
quad_coef = np.polyfit(x, noisy_y, 2)
quad_poly = np.poly1d(quad_coef)
quad_y = quad_poly(x)
import matplotlib.pylab as mpl
mpl.plot(x, noisy_y, 'ro')
mpl.plot(x, linear_y)
mpl.plot(x, quad_y)
#mpl.plot(x, y)
Explanation: Parameter estimation
End of explanation
import mystic.models as models
print(models.zimmermann.__doc__)
Explanation: Standard diagnostic tools
Eyeball the plotted solution against the objective
Run several times and take the best result
Analyze a log of intermediate results, per iteration
Rare: look at the covariance matrix
Issue: how can you really be sure you have the results you were looking for?
EXERCISE: Use any of the solvers we've seen thus far to find the minimum of the zimmermann function (i.e. use mystic.models.zimmermann as the objective). Use the bounds suggested below, if your choice of solver allows it.
End of explanation |
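A hedged sketch of one possible attempt at the exercise, using the global differential_evolution solver shown earlier. The [0, 100] box bounds are an assumption; substitute whatever bounds the printed docstring suggests:
import scipy.optimize as opt
import mystic.models as models

bounds = [(0, 100), (0, 100)]    # assumed box bounds; adjust per the docstring above
result = opt.differential_evolution(models.zimmermann, bounds)
print(result.x, result.fun, '@ {} evals'.format(result.nfev))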
12,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
String exercises
Step1: Fill in the code for the functions below. main() is already set up
to call the functions with a few different inputs,
printing 'OK' when each function is correct.
The starter code for each function includes a 'return'
which is just a placeholder for your code.
A. doughnuts
Given an int count of a number of doughnuts, return a string
of the form 'Number of doughnuts
Step2: B. both_ends
Given a string s, return a string made of the first 2
and the last 2 chars of the original string,
so 'spring' yields 'spng'. However, if the string length
is less than 2, return instead the empty string.
Step3: C. fix_start
Given a string s, return a string
where all occurences of its first char have
been changed to '*', except do not change
the first char itself.
e.g. 'babble' yields
ba**le
Assume that the string is length 1 or more.
Hint
Step4: D. mix_up
Given strings a and b, return a single string with a and b separated
by a space ' ', except swap the first 2 chars of each string.
Assume a and b are length 2 or more.
Step5: E. verbing
Given a string, if its length is at least 3,
add 'ing' to its end.
Unless it already ends in 'ing', in which case
add 'ly' instead.
If the string length is less than 3, leave it unchanged.
Return the resulting string.
Step6: F. not_bad
Given a string, find the first appearance of the
substring 'not' and 'bad'. If the 'bad' follows
the 'not', replace the whole 'not'...'bad' substring
with 'good'.
Return the resulting string.
So 'This dinner is not that bad!' yields
Step7: G. front_back
Consider dividing a string into two halves.
If the length is even, the front and back halves are the same length.
If the length is odd, we'll say that the extra char goes in the front half.
e.g. 'abcde', the front half is 'abc', the back half 'de'.
Given 2 strings, a and b, return a string of the form
a-front + b-front + a-back + b-back | Python Code:
# Provided simple test() function
def test(got, expected):
if got == expected:
prefix = ' OK '
else:
prefix = ' X '
print '%s got: %s expected: %s' % (prefix, repr(got), repr(expected))
Explanation: String exercises
End of explanation
def doughnuts(count):
# +++your code here+++
return
test(doughnuts(4), 'Number of doughnuts: 4')
test(doughnuts(9), 'Number of doughnuts: 9')
test(doughnuts(10), 'Number of doughnuts: many')
test(doughnuts(99), 'Number of doughnuts: many')
Explanation: Fill in the code for the functions below. The test() calls after each
function exercise it with a few different inputs,
printing 'OK' when the function is correct.
The starter code for each function includes a 'return'
which is just a placeholder for your code.
A. doughnuts
Given an int count of a number of doughnuts, return a string
of the form 'Number of doughnuts: <count>', where <count> is the number
passed in. However, if the count is 10 or more, then use the word 'many'
instead of the actual count.
So doughnuts(5) returns 'Number of doughnuts: 5'
and doughnuts(23) returns 'Number of doughnuts: many'
End of explanation
def both_ends(s):
# +++your code here+++
return
test(both_ends('spring'), 'spng')
test(both_ends('Hello'), 'Helo')
test(both_ends('a'), '')
test(both_ends('xyz'), 'xyyz')
Explanation: B. both_ends
Given a string s, return a string made of the first 2
and the last 2 chars of the original string,
so 'spring' yields 'spng'. However, if the string length
is less than 2, return instead the empty string.
End of explanation
def fix_start(s):
# +++your code here+++
return
test(fix_start('babble'), 'ba**le')
test(fix_start('aardvark'), 'a*rdv*rk')
test(fix_start('google'), 'goo*le')
test(fix_start('doughnut'), 'doughnut')
Explanation: C. fix_start
Given a string s, return a string
where all occurences of its first char have
been changed to '*', except do not change
the first char itself.
e.g. 'babble' yields
ba**le
Assume that the string is length 1 or more.
Hint: s.replace(stra, strb) returns a version of string s
where all instances of stra have been replaced by strb.
End of explanation
def mix_up(a, b):
# +++your code here+++
return
test(mix_up('mix', 'pod'), 'pox mid')
test(mix_up('dog', 'dinner'), 'dig donner')
test(mix_up('gnash', 'sport'), 'spash gnort')
test(mix_up('pezzy', 'firm'), 'fizzy perm')
Explanation: D. mix_up
Given strings a and b, return a single string with a and b separated
by a space ' ', except swap the first 2 chars of each string.
Assume a and b are length 2 or more.
End of explanation
def verbing(s):
# +++your code here+++
return
test(verbing('hail'), 'hailing')
test(verbing('swimming'), 'swimmingly')
test(verbing('do'), 'do')
Explanation: E. verbing
Given a string, if its length is at least 3,
add 'ing' to its end.
Unless it already ends in 'ing', in which case
add 'ly' instead.
If the string length is less than 3, leave it unchanged.
Return the resulting string.
End of explanation
def not_bad(s):
# +++your code here+++
return
test(not_bad('This movie is not so bad'), 'This movie is good')
test(not_bad('This dinner is not that bad!'), 'This dinner is good!')
test(not_bad('This tea is not hot'), 'This tea is not hot')
test(not_bad("It's bad yet not"), "It's bad yet not")
Explanation: F. not_bad
Given a string, find the first appearance of the
substring 'not' and 'bad'. If the 'bad' follows
the 'not', replace the whole 'not'...'bad' substring
with 'good'.
Return the resulting string.
So 'This dinner is not that bad!' yields:
This dinner is good!
End of explanation
def front_back(a, b):
# +++your code here+++
return
test(front_back('abcd', 'xy'), 'abxcdy')
test(front_back('abcde', 'xyz'), 'abcxydez')
test(front_back('Kitten', 'Donut'), 'KitDontenut')
Explanation: G. front_back
Consider dividing a string into two halves.
If the length is even, the front and back halves are the same length.
If the length is odd, we'll say that the extra char goes in the front half.
e.g. 'abcde', the front half is 'abc', the back half 'de'.
Given 2 strings, a and b, return a string of the form
a-front + b-front + a-back + b-back
End of explanation |
12,381 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have this example of matrix by matrix multiplication using numpy arrays: | Problem:
from scipy import sparse
import numpy as np
sa = sparse.csr_matrix(np.array([[1,2,3],[4,5,6],[7,8,9]]))
sb = sparse.csr_matrix(np.array([0,1,2]))
result = sa.multiply(sb) |
12,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The purpose of this notebook is to determine the optimal launch azimuth and elevation angles, in order to maximize the likelihood of landing in a particular region. Eventually, this should be expanded to enable input of day-of wind measurements so that we can determine these settings on the day of the launch. We may need to optimize the trajectory simulation code for this purpose, perhaps with less degrees of freedom, since it is currently fairly slow.
This program has essentially the same structure as the MDO and differs only in how trajectory performance is evaluated and which parameters are held constant or optimized over. Examination of the differences should enable one to make similar programs for different but related purposes.
Functions of Merit
We chose to abstract all of the functions used within the merit function for increased flexibility, ease of reading, and later utility. We use convex functions four our optimization, but we don't use much of the theory of convex optimization.
objective_additive is arbitrarily constructed. It is
appropriate for vector-valued measurements,
not normalized,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two for aesthetic reasons.
objective is arbitrarily constructed. It is
normalized (by a somewhat arbitrary constant) to bring it into the same range as constraints,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two so that our nominal value is 0.5 instead of 1.0.
exact is less arbitrarily constructed. It is
squared to be horizontally symmetric (which could also be obtained by absolute value),
determined by the distance from a constant, and
divided by two is for aesthetic value.
exterior is not particularly arbitrary. It is
boolean so that we can specify whether it is minimizing or maximizing its variable,
0 when the inequality is satisfied, otherwise it is just as punishing as exact.
barrier comes in two flavors, one of which is not used here. It is
boolean so that we can specify whether it is a lower or an upper bound,
completely inviolable, unlike exact and exterior penalties.
Technically logarithmic barrier functions allow negative penalties (i.e. rewards), but since we use upper and lower altitude barriers, it is impossible that their sum be less than 0. If the optimizer steps outside of the apogee window, the barrier functions can attempt undefined operations (specifically, taking the logarithm of a negative number), so some error handling is required to return an infinite value in those cases. Provided that the initial design is within the feasible region, the optimizer will not become disoriented by infinite values.
Step1: Optimization Problem
Given a design vector $x$ and the iteration number $n$ our merit function cost runs a trajectory simulation and evaluates the quality of that rocket. We keep track of each design and its merit value for later visualization, hence why global variables are used.
We run an iterative sequence of optimization routines with a decreasing barrier function and increasing penalty functions so that the optimization can range over a larger portion of the design space away from its boundaries before settling into local minima closer to the boundary.
Step2: Top-Level of Optimization Routine
Here's where the magic happens. This code block runs the iterative optimization and provides details from our optimized trajectory. | Python Code:
# all of our comparisons are ratios instead of subtractions because
# it's normalized, instead of dependent on magnitudes of variables and constraints
def objective_additive(var, cons):
return np.linalg.norm(var - cons)**2 / 2
# minimize this, **2 makes it well behaved w.r.t. when var=cons
def objective(var, cons):
return (var/cons)**2 / 2
# **2 because i like it more than abs(), but that also works
def exact(var, cons):
return (var/cons - 1)**2 / 2
# this is your basic exterior penalty, either punishes for unfeasibility or is inactive
def exterior(var, cons, good_if_less_than=False):
if good_if_less_than:
return max(0, var/cons - 1)**2 / 2
else:
return max(0, -(var/cons - 1))**2 / 2
# this barrier function restricts our objective function to the strictly feasible region
# make rockets great again, build that wall, etc, watch out for undefined operations
def barrier(var, cons, int_point=False, good_if_less_than=True):
global dbz
try: # just in case we accidentally leave feasible region
if not int_point:
if good_if_less_than:
return -log(-(var/cons - 1))
else:
return -log(var/cons - 1)
elif int_point:
def interior(g): return 1/g # in case we don't like logarithms, which is a mistake
if good_if_less_than:
return -interior(var/cons - 1)
else:
return -interior(-(var/cons - 1))
except:
return float('inf') # ordinarily, this is bad practice since it could confuse the optimizer
# however, since this is a barrier function not an ordinary penalty, i think it's fine
Explanation: The purpose of this notebook is to determine the optimal launch azimuth and elevation angles, in order to maximize the likelihood of landing in a particular region. Eventually, this should be expanded to enable input of day-of wind measurements so that we can determine these settings on the day of the launch. We may need to optimize the trajectory simulation code for this purpose, perhaps with less degrees of freedom, since it is currently fairly slow.
This program has essentially the same structure as the MDO and differs only in how trajectory performance is evaluated and which parameters are held constant or optimized over. Examination of the differences should enable one to make similar programs for different but related purposes.
Functions of Merit
We chose to abstract all of the functions used within the merit function for increased flexibility, ease of reading, and later utility. We use convex functions for our optimization, but we don't use much of the theory of convex optimization.
objective_additive is arbitrarily constructed. It is
appropriate for vector-valued measurements,
not normalized,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two for aesthetic reasons.
objective is arbitrarily constructed. It is
normalized (by a somewhat arbitrary constant) to bring it into the same range as constraints,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two so that our nominal value is 0.5 instead of 1.0.
exact is less arbitrarily constructed. It is
squared to be horizontally symmetric (which could also be obtained by absolute value),
determined by the distance from a constant, and
divided by two is for aesthetic value.
exterior is not particularly arbitrary. It is
boolean so that we can specify whether it is minimizing or maximizing its variable,
0 when the inequality is satisfied, otherwise it is just as punishing as exact.
barrier comes in two flavors, one of which is not used here. It is
boolean so that we can specify whether it is a lower or an upper bound,
completely inviolable, unlike exact and exterior penalties.
Technically logarithmic barrier functions allow negative penalties (i.e. rewards), but since we use upper and lower altitude barriers, it is impossible that their sum be less than 0. If the optimizer steps outside of the apogee window, the barrier functions can attempt undefined operations (specifically, taking the logarithm of a negative number), so some error handling is required to return an infinite value in those cases. Provided that the initial design is within the feasible region, the optimizer will not become disoriented by infinite values.
End of explanation
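As a quick illustration of how the helpers above behave, the snippet below evaluates a few of them; the 100000.0 constraint and the coordinate pairs are arbitrary stand-in values, not mission numbers.
# illustrative spot checks of the merit-function helpers defined above
import numpy as np
print(exterior(95000.0, 100000.0, good_if_less_than=False))   # below the bound -> positive penalty
print(exterior(105000.0, 100000.0, good_if_less_than=False))  # above the bound -> inactive, returns 0
print(exact(100000.0, 100000.0))                              # exactly on the constraint -> 0
print(objective_additive(np.array([32.94, -106.92]),
                         np.array([32.99, -106.97])))         # half the squared distance between points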
# this manages all our constraints
# penalty parameters: mu -> 0 and rho -> infinity
def penalty(alt, mu, rho):
b = [#barrier(alt, CONS_ALT, int_point=False, good_if_less_than=False)
]
eq = []
ext = [exterior(alt, CONS_ALT-3000, good_if_less_than=False)
]
return mu*sum(b) + rho*(sum(eq) + sum(ext))
# Pseudo-objective merit function
# x is array of design parameters, n is index of penalty and barrier functions
def cost(x, nominal):
global allvectors, allobjfun
# get trajectory data
sim = trajectory(M_PROP, MDOT, P_E,
THROTTLE_WINDOW, MIN_THROTTLE,
RCS_MDOT, RCS_P_E, RCS_P_CH,
BALLAST, FIN_ROOT, FIN_TIP, FIN_SWEEP_ANGLE, FIN_SEMISPAN, FIN_THICKNESS, CON_NOSE_L,
LOX_TANK_P, IPA_TANK_P, RIB_T, NUM_RADL_DVSNS,
AIRFRM_IN_RAD, IPA_WT, OF, ENG_P_CH, ENG_T_CH, ENG_KE, ENG_MM,
[0, 0, x[0], x[1], False, 0, 0, 0, 0, 0, 0, True],
0.025, True, 0.005, True, True)
# either minimize the distance from nominal impact point
if TARGET:
#nominal = np.array([nominal, kludge])
obj_func = objective_additive(sim.impact, nominal)
# or maximize distance from launch point
else:
obj_func = - objective_additive(sim.impact, sim.env.launch_pt)
pen_func = penalty(sim.LV4.apogee, MU_0 / (2**1), RHO_0 * (2**1))
# add objective and penalty functions
merit_func = obj_func + pen_func
allvectors.append(x) # maintains a list of every design, side effect
allobjfun.append(merit_func)
#print("vec:", x,'\t', "impact:", sim.impact, '\t', "alt:", sim.LV4.apogee)
return merit_func
# we want to iterate our optimizer for theoretical "convergence" reasons (given some assumptions)
# n = number of sequential iterations
def iterate(func, x_0, n, nominal):
x = x_0
designs = []
for i in range(n):
print("Iteration " + str(i+1) + ":")
# this minimizer uses simplex method
res = minimize(func, x, args=(nominal,), method='nelder-mead', options={'disp': True})  # args passed as a 1-tuple
x = res.x # feed optimal design vec into next iteration
designs.append(res.x) # we want to compare sequential objectives
return x
# this is for experimenting with stochastic optimization, which takes much longer but may yield more global results.
def breed_rockets(func, nominal):
res = differential_evolution(func=func, bounds=[(0, 360), (-10, 1)], args=(nominal,),
strategy='best1bin', popsize=80, mutation=(.1, .8), recombination=.05,
updating='immediate', disp=True, atol=0.05, tol=0.05,
polish=True,workers=-1)
return res.x
def rbf_optimizer(func, nominal):
bb = rbfopt.RbfoptUserBlackBox(2,
np.array([0, -5]),
np.array([360, 1]),
np.array(['R']*2), lambda x: func(x, nominal))
settings = rbfopt.RbfoptSettings(minlp_solver_path='/home/cory/Downloads/Bonmin-1.8.8/build/bin/bonmin',
nlp_solver_path='/home/cory/Downloads/Bonmin-1.8.8/build/bin/ipopt',
max_evaluations=150, eps_impr=1.0e-7)
alg = rbfopt.RbfoptAlgorithm(settings, bb)
val, x, itercount, evalcount, fast_evalcount = alg.optimize()
return x
Explanation: Optimization Problem
Given a design vector $x$ and the iteration number $n$ our merit function cost runs a trajectory simulation and evaluates the quality of that rocket. We keep track of each design and its merit value for later visualization, hence why global variables are used.
We run an iterative sequence of optimization routines with a decreasing barrier function and increasing penalty functions so that the optimization can range over a larger portion of the design space away from its boundaries before settling into local minima closer to the boundary.
End of explanation
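The explanation above mentions a decreasing barrier weight and an increasing penalty weight across outer iterations, while cost() above hard-codes a single step of that schedule; the self-contained sketch below only illustrates the geometric schedule itself (the starting weights are assumed values).
# minimal sketch of the mu/rho schedule described above: halve the barrier weight
# and double the penalty weight on each outer iteration
mu_0_sketch, rho_0_sketch = 1.0, 1.0  # illustrative starting weights
for n in range(1, 5):
    mu_n = mu_0_sketch / (2 ** n)
    rho_n = rho_0_sketch * (2 ** n)
    print(n, mu_n, rho_n)  # these would feed penalty(alt, mu_n, rho_n)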
# this either maximizes distance from launch site or minimizes distance from nominal impact point
if __name__ == '__main__':
if TARGET:
target_pt = np.array([32.918255, -106.349477])
else:
target_pt = None
if SIMPLEX:
x0 = np.array([AZ_PERTURB, EL_PERTURB])
# feed initial design into iterative optimizer, get most (locally) feasible design
x = iterate(cost, x0, 1, nominal = target_pt)
else:
# probe design space, darwin style. if design space has more than 3 dimensions, you might need this. takes forever.
x = rbf_optimizer(cost, nominal = target_pt)
print("Optimization done!")
if __name__ == '__main__':
# Rename the optimized output for convenience
az_perturb = x[0]
el_perturb = x[1]
# get trajectory info from optimal design
sim = trajectory(M_PROP, MDOT, P_E,
THROTTLE_WINDOW, MIN_THROTTLE,
RCS_MDOT, RCS_P_E, RCS_P_CH,
BALLAST, FIN_ROOT, FIN_TIP, FIN_SWEEP_ANGLE, FIN_SEMISPAN, FIN_THICKNESS, CON_NOSE_L,
LOX_TANK_P, IPA_TANK_P, RIB_T, NUM_RADL_DVSNS,
AIRFRM_IN_RAD, IPA_WT, OF, ENG_P_CH, ENG_T_CH, ENG_KE, ENG_MM,
[0, 0, az_perturb, el_perturb, False, 0, 0, 0, 0, 0, 0, True],
0.025, False, 0.005, True, False)
print("Azimuth Perturbation:", az_perturb)
print("Elevation Perturbation:", el_perturb)
print("Launch point", sim.env.launch_pt)
print("Impact point", sim.impact)
print()
textlist = print_results(sim, False)
# draw pretty pictures of optimized trajectory
rocket_plot(sim.t, sim.alt, sim.v, sim.a, sim.thrust,
sim.dyn_press, sim.Ma, sim.m, sim.p_a, sim.drag, sim.throttle, sim.fin_flutter, sim, False, None, None)
# get/print info about our trajectory and rocket
for line in textlist:
print(line)
# draw more pretty pictures, but of the optimizer guts
design_grapher(allvectors)
Explanation: Top-Level of Optimization Routine
Here's where the magic happens. This code block runs the iterative optimization and provides details from our optimized trajectory.
End of explanation |
12,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
12,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Load and inspect the data
Step1: Load the recent data
Step2: Visualize the data with GraphLab Canvas
Step3: Visualize the data with matplotlib
Step4: 2. A naive baseline
Step5: 3. The autoregressive model
Create lagged features
Step6: Train the model
Step7: Get a forecast from the model
Step8: 4. The gradient-boosted trees model
Split the timestamp into parts
Step9: Create lags for observed features
    To forecast tomorrow's earthquake count
Step10: Train the model
Step11: Compute the model's forecast
Step12: 5. And the winner is... | Python Code:
import graphlab as gl
daily_stats = gl.load_timeseries('working_data/global_daily_stats.ts')
print "Number of rows:", len(daily_stats)
print "Start:", daily_stats.min_time
print "End:", daily_stats.max_time
daily_stats.print_rows(3)
Explanation: 1. Load and inspect the data: daily global earthquakes
Load the main dataset: Feb. 2, 2013 - Mar. 15, 2016
End of explanation
daily_update = gl.load_timeseries('working_data/global_daily_update.ts')
daily_update.print_rows()
Explanation: Load the recent data: Mar. 16, 2016 - Mar. 22, 2016
The first point in this dataset is our forecasting goal. Pretend it's March 15, and we don't know the count of earthquakes for March 16th.
End of explanation
daily_stats.to_sframe().show()
Explanation: Visualize the data with GraphLab Canvas
End of explanation
import matplotlib.pyplot as plt
%matplotlib notebook
plt.style.use('ggplot')
fig, ax = plt.subplots()
ax.plot(daily_stats['time'], daily_stats['count'], color='dodgerblue')
ax.set_xlabel('Date')
ax.set_ylabel('Number of earthquakes')
fig.autofmt_xdate()
fig.show()
Explanation: Visualize the data with matplotlib
End of explanation
baseline_forecast = daily_stats['count'].mean()
print baseline_forecast
Explanation: 2. A naive baseline: the grand mean
End of explanation
daily_stats['lag1_count'] = daily_stats.shift(1)['count']
daily_stats['lag2_count'] = daily_stats.shift(2)['count']
daily_stats.print_rows(3)
Explanation: 3. The autoregressive model
Create lagged features
End of explanation
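Conceptually, a lag-1 feature is just "yesterday's value" aligned with today's row. A toy illustration with plain Python lists (not the project data):
# Toy example: pretend daily counts and their lag-1 version
counts = [10, 12, 9, 15]
lag1_counts = [None] + counts[:-1]   # [None, 10, 12, 9]
print lag1_counts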
train_counts = daily_stats[2:].to_sframe()
ar_model = gl.linear_regression.create(train_counts, target='count',
features=['lag1_count', 'lag2_count'],
l2_penalty=0., validation_set=None,
verbose=False)
print ar_model
train_counts.tail(5).print_rows()
Explanation: Train the model
End of explanation
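To inspect the fitted lag weights, the model object should expose them (sketch; assumes the standard GraphLab Create 'coefficients' field):
# Intercept plus one weight per lag feature
print ar_model['coefficients']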
## Construct the input dataset first.
sf_forecast = gl.SFrame({'lag1_count': [daily_stats['count'][-1]],
'lag2_count': [daily_stats['count'][-2]]})
## Compute the model's forecast
ar_forecast = ar_model.predict(sf_forecast)
print ar_forecast[0]
Explanation: Get a forecast from the model
End of explanation
date_parts = daily_stats.index.split_datetime(column_name_prefix='date',
limit=['year', 'month', 'day'])
Explanation: 4. The gradient-boosted trees model
Split the timestamp into parts
End of explanation
daily_stats['lag1_avg_mag'] = daily_stats.shift(1)['avg_mag']
daily_stats['lag1_max_mag'] = daily_stats.shift(1)['max_mag']
sf_train = daily_stats.to_sframe()
sf_train = sf_train.add_columns(date_parts)
sf_train.print_rows(3)
Explanation: Create lags for observed features
To forecast tomorrow's earthquake count:
- we do know what the date will be, so no need to lag,
- we don't know what the max and average magnitude will be, so we need to lag.
End of explanation
feature_list = ['lag1_avg_mag', 'lag1_max_mag', 'lag1_count',
'date.year', 'date.month', 'date.day']
# Remove the row with no lagged features.
sf_train = sf_train[1:]
gbt_model = gl.boosted_trees_regression.create(sf_train, target='count',
features=feature_list,
max_iterations=20,
validation_set=None,
verbose=False)
print gbt_model
Explanation: Train the model
End of explanation
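As a rough diagnostic, you can ask which features the trees rely on most (sketch; assumes your GraphLab Create version provides this helper):
# Feature importance counts for the boosted trees model (method name assumed)
print gbt_model.get_feature_importance()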
## Prepend the last couple rows of the training data.
ts_forecast = daily_stats[daily_update.column_names()][-2:].union(daily_update)
## Create the lagged features.
ts_forecast['lag1_avg_mag'] = ts_forecast.shift(1)['avg_mag']
ts_forecast['lag1_max_mag'] = ts_forecast.shift(1)['max_mag']
ts_forecast['lag1_count'] = ts_forecast.shift(1)['count']
## Split the timestamp into date parts.
new_date_parts = ts_forecast.index.split_datetime(column_name_prefix='date',
limit=['year', 'month', 'day'])
## Add the date parts to the dataset.
sf_forecast = ts_forecast.to_sframe().add_columns(new_date_parts)
sf_forecast.print_rows(3)
gbt_forecast = gbt_model.predict(sf_forecast)
gbt_forecast[2]
Explanation: Compute the model's forecast
End of explanation
print "Actual value for March 16:", daily_update['count'][0]
print "\nBaseline forecast:", baseline_forecast
print "AR model forecast:", ar_forecast[0]
print "GBT forecast:", gbt_forecast[2], "\t(*** winner ***)"
Explanation: 5. And the winner is...
End of explanation |
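A sketch of how to make the comparison numeric, using the absolute error of each forecast against the held-out March 16 count:
# Absolute errors vs. the actual value (smaller is better)
actual = daily_update['count'][0]
print "Baseline abs. error:", abs(actual - baseline_forecast)
print "AR model abs. error:", abs(actual - ar_forecast[0])
print "GBT abs. error:", abs(actual - gbt_forecast[2])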
12,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 1 - Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Step1: Problem 2 - Even Fibonacci numbers
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be
Step2: While k = 1, 2, 3...
F(3k - 2) and F(3k)
Step3: Problem 3 - Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
Step4: Problem 4 - Largest palindrome product
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Step5: Problem 5 - Smallest multiple
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Step6: Problem 6 - Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
Sol
1^2 + 2^2 + ... + n^2 = n * (n + 1) * (2 * n + 1) / 6
1 + 2 + ... + n = n * (n + 1) /2
difference = n * (n + 1) * (n - 1) * (3 * n + 2) / 12 | Python Code:
import math
def multiple35(n):
n3 = (n - 1) // 3
sum3 = 3 * n3 * (n3 + 1) // 2
n5 = (n - 1) // 5
sum5 = 5 * n5 * (n5 + 1) // 2
n15 = (n - 1) // 15
sum15 = 15 * n15 * (n15 + 1) // 2
return sum3 + sum5 - sum15
print(multiple35(1000))
Explanation: Problem 1 - Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
End of explanation
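The cell above uses the closed-form arithmetic-series sum with inclusion-exclusion (add multiples of 3 and 5, subtract multiples of 15). A quick brute-force cross-check, assuming the cell above has been run:
# Sanity checks against the worked example and direct summation
assert multiple35(10) == 23
assert multiple35(1000) == sum(x for x in range(1000) if x % 3 == 0 or x % 5 == 0)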
def fib(limit):
a = 1
b = 2
while b <= limit:
a, b = b, a + b
return [a, b]
Explanation: Problem 2 - Even Fibonacci numbers
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
End of explanation
def even_sum(limit):
a, b = fib(limit)
if a % 2:
if b % 2:
## odd, odd
return (b - 1) // 2
## odd, even
return (a - 1) // 2
## even, odd
return (a + b - 1) // 2
fib(100)
even_sum(100)
print(even_sum(4000000))
Explanation: While k = 1, 2, 3...
F(3k - 2) and F(3k): odd value items
F(3k - 1): even value items
F(2) + F(5) + F(8) + ... + F(3k - 1) (even values = SE(3k - 1))
= 1 + F(1) + F(3) + F(4) + F(6) + F(7) + ... + F(3k - 3) + F(3k - 2) (odd values + 1)
Add up:
2 * SE(3k - 1) = 1 + F(1) + F(2) + ... + F(3k - 1) = 1 + S(3k - 1) = F(3k + 1) - 1
SE(3k - 1) = (F(3k + 1) - 1) / 2
End of explanation
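A brute-force cross-check of the closed form above (sketch; assumes fib and even_sum from the cells above are defined):
def brute_even_sum(limit):
    # Sum even Fibonacci terms directly, starting from 1, 2
    a, b, total = 1, 2, 0
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total
assert brute_even_sum(100) == even_sum(100)
assert brute_even_sum(4000000) == even_sum(4000000)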
def check_divisor(n, i):
while not n % i:
n //= i
return n
def largest_prime_factor(n):
n = check_divisor(n, 2)
if n == 1:
return 2
i = 3
while i <= math.sqrt(n):
n = check_divisor(n, i)
i += 2
if n > 2:
return n
return i - 2
print(largest_prime_factor(600851475143))
Explanation: Problem 3 - Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
End of explanation
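A quick check against the example given in the problem statement (assumes the cell above has been run):
assert largest_prime_factor(13195) == 29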
def isPal(s):
    return s == s[::-1]

# Build every palindromic product of two 3-digit numbers
pals = []
for i in range(100, 1000):
    for j in range(i, 1000):
        ij = i * j
        if isPal(str(ij)):
            pals.append(ij)

n = 1000000
print(max([pal for pal in pals if pal < n]))
Explanation: Problem 4 - Largest palindrome product
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
End of explanation
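A sketch of the same search wrapped in a function, checked against the 2-digit case quoted in the problem (91 * 99 = 9009); it reuses isPal from the cell above:
def largest_pal_product(lo, hi):
    # Largest palindromic product of two factors in [lo, hi)
    best = 0
    for i in range(lo, hi):
        for j in range(i, hi):
            p = i * j
            if p > best and isPal(str(p)):
                best = p
    return best
assert largest_pal_product(10, 100) == 9009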
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
def lcm(a, b):
return a * b // gcd(a, b)
def smallest_multiple(n):
s_m = 1
for num in range(1, n + 1):
s_m = lcm(s_m, num)
return s_m
print(smallest_multiple(20))
Explanation: Problem 5 - Smallest multiple
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
End of explanation
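A quick check against the example in the problem statement (assumes the cell above has been run):
assert smallest_multiple(10) == 2520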
def sum_square_diff(n):
return n * (n + 1) * (n - 1) * (3 * n + 2) // 12
print(sum_square_diff(100))
Explanation: Problem 6 - Sum square difference
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
Sol
1^2 + 2^2 + ... + n^2 = n * (n + 1) * (2 * n + 1) / 6
1 + 2 + ... + n = n * (n + 1) /2
difference = (n * (n + 1) / 2)^2 - n * (n + 1) * (2 * n + 1) / 6
           = n * (n + 1) * (3 * n^2 - n - 2) / 12
           = n * (n + 1) * (n - 1) * (3 * n + 2) / 12
End of explanation |
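A quick check against the worked example (n = 10 gives 2640), plus a brute-force comparison for n = 100:
assert sum_square_diff(10) == 2640
assert sum_square_diff(100) == sum(range(1, 101)) ** 2 - sum(i * i for i in range(1, 101))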
12,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análise de Comics Digitais (Python) - Marvel ou DC?
Introdução
Depois de ter feito a análise do site (post aqui) e de ter feito um scraping dos dados do site da Comixology (post aqui), agora vamos fazer uma bela análise de dados das informações de comics digitais com Python (Pandas).
Vamos descobrir quais editoras tem os melhores preços relativos à quantidade de páginas de seus comics, as editoras com as melhores avaliações médias, além de uma análise mais profunda do duelo das gigantes
Step1: Agora, vamos criar uma coluna de preço por página. Desta forma, podemos comparar preços de uma forma mais adequada, visto que o preço de um comic pode variar muito de acordo com suas páginas.
Para alguns comics, a informação de quantidade de páginas não existe, e desta forma, estes retornam um valor de infinito (inf) ao fazer o cálculo. Para estes, vamos definir seu valor como NaN
Step2: Vamos agora usar a função iterrows() do Dataframe para extrair o ano de publicação da versão impressa do comic. Esta função cria uma espécie de forloop, que itera sobre cada linha do DataFrame. Vamos usar o split para dividir a string que contém a data de publicação em uma lista. O terceiro valor será o ano. Em alguns casos, o ano retorna um valor maior que 2016. Como isto é impossível, estes valores também serão definidos como NaN
Step3: Para as análises (e avante)
A primeira análise que faremos será o cálculo dos valores médios do site em geral, como preço médio dos comics, quantidade média de páginas, entre outros. Usaremos a função nanmean() do numpy, que calcula a média excluindo os NaN, que é quando não possuímos a informação.
Step4: Agora, definiremos o número máximo de colunas de cada campo para impressão da tabela em 40 colunas. Faremos isto pois o nome de alguns comics é bastante extenso, e a impressão da tabela ficaria bem esquisita. Desta forma, pelo menos conseguimos ver algumas informações a mais.
Depois vamos listar as comics que possuem 5 estrelas, possuem mais de 20 avaliações (para captar apenas as mais representativas; comics com 5 de avaliação média mas que possuam apenas uma avaliação podem não ser uma métrica muito boa) e vamos ordena-las por preço por página. No topo teremos algumas comics que são gratuitas (as 6 primeiras). Depois, temos ótimas comics, na visão dos usuários, com um preço por página bem atrativo.
Step5: Na próxima análise, usaremos apenas comics com mais de 5 avaliações. Para isso, vamos filtrar
Step6: Vamos criar uma pivot table do Pandas, para visualizarmos a quantidade de comics com avaliação e a avaliação média deste Publisher. Depois vamos considerar como Publishers representativas aquelas que possuem pelo menos 20 comics com avaliações. Para isso faremos o filtro da pivot table. Depois, vamos ordenar esta tabela filtrada por avaliação média, em ordem decrescente. Ou seja, as primeiras Publishers são consideradas as que possuem a melhor avaliação média de seus comics. Repare que as gigantes DC Comics e Marvel ficam razoavelmente para baixo da lista.
Step7: Para ajudar na visão, um gráfico em matplotlib que representa a tabela acima
Step8: Para simplificar um pouco e ter uma tabela e gráfico mais fáceis de visualizar, vamos considerar agora comics que possuem pelo menos 300 comics com avaliações. Logo abaixo, o gráfico que representa a tabela (menos poluído e permitindo uma visão melhor da situação das publishers)
Step9: Uma coisa que eu acreditava que fosse legal de conferir era se a classificação etária faz alguma diferença na avaliação que os usuários dão a cada comic. Será que comics voltados para o público adulto possuem avaliações melhores? Ou ao contrário? Vamos checar fazendo uma nova pivot table
Step10: Como podemos perceber, as barras do gráfico ficam com alturas bem próximas. Ao que parece, a classificação etária não afeta muito significativamente a avaliação dos comics. Se formos analisar apenas matematicamente, as comics liberadas para qualquer idade ou para maiores de 9 anos possuem as melhores avaliações.
Nosso próximo passo é ver como, de certa forma, evoluiu o número de lançamentos de quadrinhos (considerando as versões impressas) ao longo dos anos. Lembrando que já criamos a coluna com o ano de lançamento da versão impressa de um comic. O próximo passo é basicamente contar a quantidade de cada ano nesta coluna que criamos. Vamos fazer uma lista com os anos de lançamento passados para inteiros para que o eixo do gráfico faça a leitura correta
Step11: Os números mostram que o crescimento era moderado, até que na década de 2000 ocorreu um boom, com um crescimento bastante considerável até 2012, quando a quantidade de lançamentos começou a oscilar. A queda mostrada no ano de 2016 ocorre, obviamente, pois ainda estamos na metade do mesmo.
Agora, para fazer uma avaliação dos comics mais baixados no site (não dá para aferir os mais comprados, visto que alguns dos comics são gratuitos). Para esta análise, vamos ver quais são os 30 comics com mais avaliações.
Step12: Walking Dead liderando com (muuuita) folga. Depois, algumas comics de Marvel e DC e mais alguns comics variados.
Agora, vamos fazer uma análise mais profunda dos comics das gigantes
Step13: Como vemos, a DC possui um preço médio e preço por página menor, enquanto possui uma avaliação média levemente maior. A quantidade média de páginas nos comics da Marvel é um pouco maior. Abaixo, os gráficos de barra representando cada uma destas comparações
Step14: O próximo passo é ver alguns números relativos a quantidade de comics de cada uma. Quantos comics cada uma possui, qual o número de comics bons (avaliação 4 ou 5) e ruins (avaliação 1 ou 2) e sua proporção perante o número total de comics. Para fazer esta análise, vamos meramente fazer alguns filtros e verificar o comprimento do DataFrame após estes filtros. Simples
Step15: Novamente, aqui, a DC Comics se mostra um pouquinho melhor. Tem uma maior proporção de comics bons e uma menor proporção de comics ruins. Ponto para a DC de novo. Abaixo, o gráfico fazendo as comparações
Step16: Apenas como curiosidade, vamos verificar o número de avaliações dadas em comics de cada uma, através de mais uma pivot table
Step17: Interessante notar que mesmo a Marvel tendo uma quantidade maior de comics, como vimos na tabela anterior, a quantidade de avaliações em comics da DC é bem maior, cerca de 55% a mais. Parece que os fãs dos comics da DC são mais propensos a avaliar os comics na Comixology que os da Marvel.
Nossa próxima avaliação será a de personagens e equipes de heróis / vilões. Primeiramente, vamos criar listas com personagens de cada uma, e igualmente para times. Dividi os personagens entre as Publishers e criei as listas na mão. Deu um trabalhinho, mas nada demais.
Step18: Agora, vamos passar por cada nome de personagem e time. Primeiramente, vamos definir um DataFrame, e faremos um filtro nos nomes das comics que possuem o nome deste personagem ou time. Depois, vamos extrair algumas informações daí. A quantidade de comics será basicamente o número de linhas do DataFrame, obtido através da função len(). Depois, as médias de avaliação, preço e quantidade de páginas. Cada uma destas informações será salva em um dictionary, que será adicionado a uma lista. No fim, teremos uma lista de dicts para os personagens e uma lista de dicts para os times
Step19: Vamos considerar apenas times e personagens que possuam mais de 20 comics onde seu nome está no título do comic.
Step20: Vamos agora verificar os maiores times e personagens em número de comics e avaliação média. Para os personagens, mesmo considerando aqueles com mais de 20 comics, ainda sobra muita coisa. Desta forma, vamos limitar a quantidade de personagens a 20, para que a lista e o gráfico não fiquem muito extensos. Depois vamos imprimir cada uma das tabelas.
Step21: Entre os personagens, temos o Batman com o maior número de comics, seguido pelo Homem-Aranha e, bem atrás o Superman completando o top 3. Depois temos uma série de outros heróis famosos, como Capitão América, Homem de Ferro, Wolverine, Flash, entre outros. Aqui, nada de muito surpreendente.
Step22: Aqui temos uma surpresa na liderança. Mesmo com a quantidade de comics não sendo tão grande, acho difícil que alguém previsse que a Mystique seria o personagem com a avaliação média mais alta no meio destes personagens todos, tão mais populares. Nas primeiras posições, outros resultados surpreendentes, com Booster Gold em segundo, Jonah Hex em terceiro, Blue Beetle em quinto. Dos super populares que vimos na lista de cima, temos o Spider-Man, Deadpool e Wonder Woman, já no fim da lista do top 20.
Step23: Entre os times com mais comics, nada de muito surpreendente também. Os eternos X-Men em primeiro, Avengers em segundo e Justice League em terceiro. Depois, seguem os outros times menos populares.
Step24: Nas avaliações, o top 3 é formado pelo All-Star Squadron, da DC, Fantastic Four e Thunderbolts, da Marvel. Surpreendentemente, X-Men, Avengers e o Suicide Squad (cujo filme está chegando em breve), ficam na parte de baixo da lista.
Abaixo plotamos estas tabelas para ajudar na visualização. | Python Code:
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
comixology_df = pd.read_csv("comixology_comics_dataset_19.04.2016.csv", encoding = "ISO-8859-1")
Explanation: Análise de Comics Digitais (Python) - Marvel ou DC?
Introdução
Depois de ter feito a análise do site (post aqui) e de ter feito um scraping dos dados do site da Comixology (post aqui), agora vamos fazer uma bela análise de dados das informações de comics digitais com Python (Pandas).
Vamos descobrir quais editoras tem os melhores preços relativos à quantidade de páginas de seus comics, as editoras com as melhores avaliações médias, além de uma análise mais profunda do duelo das gigantes: Marvel x DC Comics. Vamos começar.
Preparação Inicial
Primeiro, como de costume, vamos importar os pacotes que iremos utilizar. O pacote warnings, serve somente para desligar possíveis avisos relativos à este código no notebook, para que o código não fique extenso. Os outros pacotes já são velhos conhecidos: numpy, pandas, matplotlib e seaborn.
End of explanation
# Vamos criar uma coluna de preço por página para futuras análises
comixology_df['Price_per_page'] = pd.Series(comixology_df['Original_price'] /
comixology_df['Page Count'],
index=comixology_df.index)
# Como alguns comics estão com a contagem de páginas igual a zero, vamos
# definir para estes o Price_per_page igual a NaN
comixology_df.Price_per_page[comixology_df['Price_per_page'] == np.inf] = np.nan
Explanation: Agora, vamos criar uma coluna de preço por página. Desta forma, podemos comparar preços de uma forma mais adequada, visto que o preço de um comic pode variar muito de acordo com suas páginas.
Para alguns comics, a informação de quantidade de páginas não existe, e desta forma, estes retornam um valor de infinito (inf) ao fazer o cálculo. Para estes, vamos definir seu valor como NaN:
End of explanation
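A sketch of an equivalent assignment that avoids pandas' chained-indexing warning (one reason warnings.filterwarnings was needed above):
# Same effect as the chained assignment above, but via .loc
comixology_df.loc[comixology_df['Price_per_page'] == np.inf, 'Price_per_page'] = np.nan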
# Vamos extrair o ano da string de data de publicação da versão impressa
print_dates = []
for index, row in comixology_df.iterrows():
if type(comixology_df.ix[index]['Print Release Date']) == float:
row_year = np.nan
else:
row_year = int(comixology_df.ix[index]['Print Release Date'].split()[2])
if row_year > 2016:
row_year = np.nan
print_dates.append(row_year)
comixology_df['Print_Release_Year'] = pd.Series(print_dates,
index=comixology_df.index)
Explanation: Vamos agora usar a função iterrows() do Dataframe para extrair o ano de publicação da versão impressa do comic. Esta função cria uma espécie de forloop, que itera sobre cada linha do DataFrame. Vamos usar o split para dividir a string que contém a data de publicação em uma lista. O terceiro valor será o ano. Em alguns casos, o ano retorna um valor maior que 2016. Como isto é impossível, estes valores também serão definidos como NaN:
End of explanation
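A vectorized alternative sketch, assuming the 'Print Release Date' strings are parseable dates (e.g. "May 19 2010"); unparseable entries become NaN automatically:
# Hypothetical alternative column, built without iterrows()
years = pd.to_datetime(comixology_df['Print Release Date'], errors='coerce').dt.year
comixology_df['Print_Release_Year_alt'] = years.where(years <= 2016)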
# Algumas informações médias do site
average_price = np.nanmean(comixology_df['Original_price'])
average_page_count = np.nanmean(comixology_df['Page Count'])
average_rating = np.nanmean(comixology_df['Rating'])
average_rating_quantity = np.nanmean(comixology_df['Ratings_Quantity'])
average_price_per_page = np.nanmean(comixology_df['Price_per_page'])
print("Preço médio: " + str(average_price))
print("Quantidade média de páginas: " + str(average_page_count))
print("Avaliação média: " + str(average_rating))
print("Quantidade média de avaliações por comic: " +
str(average_rating_quantity))
print("Preço por página médio por comic: " + str(average_price_per_page))
Explanation: Para as análises (e avante)
A primeira análise que faremos será o cálculo dos valores médios do site em geral, como preço médio dos comics, quantidade média de páginas, entre outros. Usaremos a função nanmean() do numpy, que calcula a média excluindo os NaN, que é quando não possuímos a informação.
End of explanation
# Definir o número de colunas para impressão de tabelas
pd.set_option('display.max_colwidth', 40)
# Vamos listar as comics com rating 5 estrelas que possuam pelo menos 20 ratings
# e ordena-las por preço por página
comics_with_5_stars = comixology_df[comixology_df.Rating == 5]
comics_with_5_stars = comics_with_5_stars[comics_with_5_stars.
Ratings_Quantity > 20]
print(comics_with_5_stars[['Name','Publisher','Price_per_page']].
sort_values(by='Price_per_page'))
Explanation: Agora, definiremos o número máximo de colunas de cada campo para impressão da tabela em 40 colunas. Faremos isto pois o nome de alguns comics é bastante extenso, e a impressão da tabela ficaria bem esquisita. Desta forma, pelo menos conseguimos ver algumas informações a mais.
Depois vamos listar as comics que possuem 5 estrelas, possuem mais de 20 avaliações (para captar apenas as mais representativas; comics com 5 de avaliação média mas que possuam apenas uma avaliação podem não ser uma métrica muito boa) e vamos ordena-las por preço por página. No topo teremos algumas comics que são gratuitas (as 6 primeiras). Depois, temos ótimas comics, na visão dos usuários, com um preço por página bem atrativo.
End of explanation
# Para a próxima análise, usaremos somente comics com mais de 5 ratings
comics_more_than_5_ratings = comixology_df[comixology_df.Ratings_Quantity > 5]
Explanation: Na próxima análise, usaremos apenas comics com mais de 5 avaliações. Para isso, vamos filtrar:
End of explanation
# Criar pivot table com média das avaliações por Publisher
publishers_avg_rating = pd.pivot_table(comics_more_than_5_ratings,
values=['Rating'],
index=['Publisher'],
aggfunc=[np.mean, np.count_nonzero])
# Primeiramente vamos avaliar qualquer publisher que tenha mais de 20 comics
# com avaliações
main_pub_avg_rating = publishers_avg_rating[publishers_avg_rating.
count_nonzero.Rating > 20]
main_pub_avg_rating = main_pub_avg_rating.sort_values(by=('mean','Rating'),
ascending=False)
print(main_pub_avg_rating)
Explanation: Vamos criar uma pivot table do Pandas, para visualizarmos a quantidade de comics com avaliação e a avaliação média deste Publisher. Depois vamos considerar como Publishers representativas aquelas que possuem pelo menos 20 comics com avaliações. Para isso faremos o filtro da pivot table. Depois, vamos ordenar esta tabela filtrada por avaliação média, em ordem decrescente. Ou seja, as primeiras Publishers são consideradas as que possuem a melhor avaliação média de seus comics. Repare que as gigantes DC Comics e Marvel ficam razoavelmente para baixo da lista.
End of explanation
# Agora, um gráfico com a avaliação média de cada editora
y_axis = main_pub_avg_rating['mean']['Rating']
x_axis = range(len(y_axis))
plt.figure(figsize=(10, 6))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis, tuple(main_pub_avg_rating.index),rotation=90)
plt.show()
Explanation: Para ajudar na visão, um gráfico em matplotlib que representa a tabela acima:
End of explanation
# E agora vamos ver as bem grandes, com mais de 300 comics com avaliações
big_pub_avg_rating = publishers_avg_rating[publishers_avg_rating.
count_nonzero.Rating > 300]
big_pub_avg_rating = big_pub_avg_rating.sort_values(by=('mean','Rating'),
ascending=False)
print(big_pub_avg_rating)
# E agora, o mesmo gráfico com a avaliação média das grandes editoras
y_axis = big_pub_avg_rating['mean']['Rating']
x_axis = np.arange(len(y_axis))
plt.figure(figsize=(10, 6))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.5, tuple(big_pub_avg_rating.index), rotation=90)
plt.show()
Explanation: Para simplificar um pouco e ter uma tabela e gráfico mais fáceis de visualizar, vamos considerar agora comics que possuem pelo menos 300 comics com avaliações. Logo abaixo, o gráfico que representa a tabela (menos poluído e permitindo uma visão melhor da situação das publishers)
End of explanation
# Vamos ver agora se a classificação etária faz alguma diferença significativa
# nas avaliações
rating_by_age = pd.pivot_table(comics_more_than_5_ratings,
values=['Rating'],
index=['Age Rating'],
aggfunc=[np.mean, np.count_nonzero])
print(rating_by_age)
# Gráfico de barras com a avaliação média por faixa etária
y_axis = rating_by_age['mean']['Rating']
x_axis = np.arange(len(y_axis))
plt.figure(figsize=(10, 6))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.25, tuple(rating_by_age.index), rotation=45)
plt.show()
Explanation: Uma coisa que eu acreditava que fosse legal de conferir era se a classificação etária faz alguma diferença na avaliação que os usuários dão a cada comic. Será que comics voltados para o público adulto possuem avaliações melhores? Ou ao contrário? Vamos checar fazendo uma nova pivot table:
End of explanation
# Cria tabela com a quantidade de quadrinhos lançados por ano, baseado na data
# de lançamento da versão impressa
print_releases_per_year = pd.pivot_table(comixology_df,
values=['Name'],
index=['Print_Release_Year'],
aggfunc=[np.count_nonzero])
print_years = []
for index, row in print_releases_per_year.iterrows():
print_year = int(index)
print_years.append(print_year)
print_releases_per_year.index = print_years
print(print_releases_per_year)
y_axis = print_releases_per_year['count_nonzero']['Name']
x_axis = print_releases_per_year['count_nonzero']['Name'].index
plt.figure(figsize=(10, 6))
plt.plot(x_axis, y_axis)
plt.show()
Explanation: Como podemos perceber, as barras do gráfico ficam com alturas bem próximas. Ao que parece, a classificação etária não afeta muito significativamente a avaliação dos comics. Se formos analisar apenas matematicamente, as comics liberadas para qualquer idade ou para maiores de 9 anos possuem as melhores avaliações.
Nosso próximo passo é ver como, de certa forma, evoluiu o número de lançamentos de quadrinhos (considerando as versões impressas) ao longo dos anos. Lembrando que já criamos a coluna com o ano de lançamento da versão impressa de um comic. O próximo passo é basicamente contar a quantidade de cada ano nesta coluna que criamos. Vamos fazer uma lista com os anos de lançamento passados para inteiros para que o eixo do gráfico faça a leitura correta:
End of explanation
# Vejamos agora as 30 comics com mais avaliações; se for mantida a proporção,
# pode-se dizer que estas são as comics mais baixadas (e não vendidas, pois
# algumas destas são gratuitas)
comics_by_ratings_quantity = comixology_df[['Name','Publisher',
'Ratings_Quantity']].sort_values(
by='Ratings_Quantity',
ascending=False)
print(comics_by_ratings_quantity.head(30))
y_axis = comics_by_ratings_quantity.head(30)['Ratings_Quantity']
x_axis = np.arange(len(y_axis))
plt.figure(figsize=(10, 6))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.5, tuple(comics_by_ratings_quantity.head(30)['Name']),
rotation=90)
plt.show()
Explanation: Os números mostram que o crescimento era moderado, até que na década de 2000 ocorreu um boom, com um crescimento bastante considerável até 2012, quando a quantidade de lançamentos começou a oscilar. A queda mostrada no ano de 2016 ocorre, obviamente, pois ainda estamos na metade do mesmo.
Agora, para fazer uma avaliação dos comics mais baixados no site (não dá para aferir os mais comprados, visto que alguns dos comics são gratuitos). Para esta análise, vamos ver quais são os 30 comics com mais avaliações.
End of explanation
# Vamos agora ver dados somente das duas maiores: Marvel e DC
marvel_dc_comics = comixology_df[(comixology_df.Publisher == 'Marvel') |
(comixology_df.Publisher == 'DC Comics')]
# Primeiro, alguns valores médios de cada uma
marvel_dc_pivot_averages = pd.pivot_table(marvel_dc_comics,
values=['Rating','Original_price','Page Count',
'Price_per_page'],
index=['Publisher'],
aggfunc=[np.mean])
print(marvel_dc_pivot_averages)
Explanation: Walking Dead liderando com (muuuita) folga. Depois, algumas comics de Marvel e DC e mais alguns comics variados.
Agora, vamos fazer uma análise mais profunda dos comics das gigantes: Marvel e DC Comics.
Primeiro, vamos filtrar o DataFrame para que sobrem apenas comics destas duas. Depois, vamos calcular através da função pivot table alguns valores médios das duas:
End of explanation
plt.figure(1,figsize=(10, 6))
plt.subplot(221) # Mean original price
y_axis = marvel_dc_pivot_averages['mean']['Original_price']
x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Original_price']))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4,
tuple(marvel_dc_pivot_averages['mean']['Original_price'].index))
plt.title('Mean Original Price')
plt.tight_layout()
plt.subplot(222) # Mean page count
y_axis = marvel_dc_pivot_averages['mean']['Page Count']
x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Page Count']))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4,
tuple(marvel_dc_pivot_averages['mean']['Page Count'].index))
plt.title('Mean Page Count')
plt.tight_layout()
plt.subplot(223) # Mean Price Per Page
y_axis = marvel_dc_pivot_averages['mean']['Price_per_page']
x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Price_per_page']))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4,
tuple(marvel_dc_pivot_averages['mean']['Price_per_page'].index))
plt.title('Mean Price Per Page')
plt.tight_layout()
plt.subplot(224) # Mean Comic Rating
y_axis = marvel_dc_pivot_averages['mean']['Rating']
x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Rating']))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4,
tuple(marvel_dc_pivot_averages['mean']['Rating'].index))
plt.title('Mean Comic Rating')
plt.tight_layout()
plt.show()
Explanation: Como vemos, a DC possui um preço médio e preço por página menor, enquanto possui uma avaliação média levemente maior. A quantidade média de páginas nos comics da Marvel é um pouco maior. Abaixo, os gráficos de barra representando cada uma destas comparações:
End of explanation
# Vamos agora verificar a quantidade de comics de cada uma e fazer uma proporção
# com a quantidade de comics de cada uma com rating maior ou igual a 4. Desta
# forma podemos ver qual delas, proporcionalmente, lança bons comics
marvel_total = len(marvel_dc_comics[marvel_dc_comics['Publisher'] == 'Marvel'])
marvel_4_or_5 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'Marvel')
& (marvel_dc_comics['Rating'] >= 4)])
marvel_proportion_4_or_5 = marvel_4_or_5 / marvel_total
marvel_1_or_2 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'Marvel')
& (marvel_dc_comics['Rating'] <= 2)])
marvel_proportion_1_or_2 = marvel_1_or_2 / marvel_total
dc_total = len(marvel_dc_comics[marvel_dc_comics['Publisher'] == 'DC Comics'])
dc_4_or_5 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'DC Comics')
& (marvel_dc_comics['Rating'] >= 4)])
dc_proportion_4_or_5 = dc_4_or_5 / dc_total
dc_1_or_2 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'DC Comics')
& (marvel_dc_comics['Rating'] <= 2)])
dc_proportion_1_or_2 = dc_1_or_2 / dc_total
print("\n")
print("Total de Comics Marvel: " + str(marvel_total))
print("Total de Comics Marvel com avaliação maior ou igual a 4: " +
str(marvel_4_or_5))
print("Proporção de Comics Marvel com avaliação maior ou igual a 4: " +
str("{0:.2f}%".format(marvel_proportion_4_or_5 * 100)))
print("Total de Comics Marvel com avaliação menor ou igual a 2: " +
str(marvel_1_or_2))
print("Proporção de Comics Marvel com avaliação menor ou igual a 2: " +
str("{0:.2f}%".format(marvel_proportion_1_or_2 * 100)))
print("\n")
print("Total de Comics DC Comics: " + str(dc_total))
print("Total de Comics DC Comics com avaliação maior ou igual a 4: " +
str(dc_4_or_5))
print("Proporção de Comics DC Comics com avaliação maior ou igual a 4: " +
str("{0:.2f}%".format(dc_proportion_4_or_5 * 100)))
print("Total de Comics DC Comics com avaliação menor ou igual a 2: " +
str(dc_1_or_2))
print("Proporção de Comics DC Comics com avaliação menor ou igual a 2: " +
str("{0:.2f}%".format(dc_proportion_1_or_2 * 100)))
print("\n")
Explanation: O próximo passo é ver alguns números relativos a quantidade de comics de cada uma. Quantos comics cada uma possui, qual o número de comics bons (avaliação 4 ou 5) e ruins (avaliação 1 ou 2) e sua proporção perante o número total de comics. Para fazer esta análise, vamos meramente fazer alguns filtros e verificar o comprimento do DataFrame após estes filtros. Simples:
End of explanation
plt.figure(2,figsize=(10, 6))
plt.subplot(221) # Total de Comics de cada editora
y_axis = [dc_total, marvel_total]
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, ('DC Comics','Marvel'))
plt.title('Comics Totais')
plt.tight_layout()
plt.subplot(222) # Proporção de Comics com avaliação 4 ou 5
y_axis = [dc_proportion_4_or_5 * 100, marvel_proportion_4_or_5 * 100]
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, ('DC Comics','Marvel'))
plt.title('Proporção de Comics com avaliação 4 ou 5')
plt.tight_layout()
plt.subplot(223) # Proporção de Comics com avaliação 1 ou 2
y_axis = [dc_proportion_1_or_2 * 100, marvel_proportion_1_or_2 * 100]
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, ('DC Comics','Marvel'))
plt.title('Proporção de Comics com avaliação 1 ou 2')
plt.tight_layout()
plt.show()
Explanation: Novamente, aqui, a DC Comics se mostra um pouquinho melhor. Tem uma maior proporção de comics bons e uma menor proporção de comics ruins. Ponto para a DC de novo. Abaixo, o gráfico fazendo as comparações:
End of explanation
# Somar a quantidade de avaliações em comics de cada editora
marvel_dc_pivot_sums = pd.pivot_table(marvel_dc_comics,
values=['Ratings_Quantity'],
index=['Publisher'],
aggfunc=[np.sum])
print(marvel_dc_pivot_sums)
Explanation: Apenas como curiosidade, vamos verificar o número de avaliações dadas em comics de cada uma, através de mais uma pivot table:
End of explanation
main_dc_characters = ['Superman','Batman','Aquaman','Wonder Woman', 'Flash',
'Robin','Arrow', 'Batgirl', 'Bane', 'Harley Queen',
'Poison Ivy', 'Joker','Firestorm','Vixen',
'Martian Manhunter','Zod','Penguin','Lex Luthor',
'Green Lantern','Supergirl','Atom','Cyborg','Hawkgirl',
'Starfire','Jonah Hex','Booster Gold','Black Canary',
'Shazam','Catwoman','Nightwing','Zatanna','Hawkman',
'Power Girl','Rorschach','Doctor Manhattan',
'Blue Beetle','Batwoman','Darkseid','Vandal Savage',
"Ra's Al Ghul",'Riddler','Reverse Flash','Black Adam',
'Deathstroke','Brainiac','Sinestro','Two-Face']
main_marvel_characters = ['Spider-Man','Captain Marvel','Hulk','Thor',
'Iron Man','Luke Cage','Black Widow','Daredevil',
'Captain America','Jessica Jones','Ghost Rider',
'Spider-Woman','Silver Surfer','Beast','Thing',
'Kitty Pride','Doctor Strange','Black Panther',
'Invisible Woman','Nick Fury','Storm','Professor X',
'Cyclops','Jean Grey','Wolverine','Scarlet Witch',
'Gambit','Rogue','X-23','Iceman','She-Hulk',
'Iron Fist','Hawkeye','Quicksilver','Vision',
'Ant-Man','Cable','Bishop','Colossus','Deadpool',
'Human Torch','Mr. Fantastic','Nightcrawler','Nova',
'Psylocke','Punisher','Rocket Raccoon','Groot',
'Star-Lord','War Machine','Gamora','Drax','Venom',
'Carnage','Octopus','Green Goblin','Abomination',
'Enchantress','Sentinel','Viper','Lady Deathstrike',
'Annihilus','Ultron','Galactus','Kang','Bullseye',
'Juggernaut','Sabretooth','Mystique','Kingpin',
'Apocalypse','Thanos','Dark Phoenix','Loki',
'Red Skull','Magneto','Doctor Doom','Ronan']
dc_teams = ['Justice League','Teen Titans','Justice Society','Lantern Corps',
'Legion of Super-Heroes','All-Star Squadron','Suicide Squad',
'Birds of Prey','Gen13', 'The League of Extraordinary Gentlemen',
'Watchmen']
marvel_teams = ['X-Men','Avengers','Fantastic Four','Asgardian Gods','Skrulls',
'S.H.I.E.L.D.','Inhumans','A.I.M.','X-Factor','X-Force',
'Defenders','New Mutants','Brotherhood of Evil Mutants',
'Thunderbolts', 'Alpha Flight','Guardians of the Galaxy',
'Nova Corps','Illuminati']
Explanation: Interessante notar que mesmo a Marvel tendo uma quantidade maior de comics, como vimos na tabela anterior, a quantidade de avaliações em comics da DC é bem maior, cerca de 55% a mais. Parece que os fãs dos comics da DC são mais propensos a avaliar os comics na Comixology que os da Marvel.
Nossa próxima avaliação será a de personagens e equipes de heróis / vilões. Primeiramente, vamos criar listas com personagens de cada uma, e igualmente para times. Dividi os personagens entre as Publishers e criei as listas na mão. Deu um trabalhinho, mas nada demais.
End of explanation
character_row = {}
characters_dicts = []
for character in main_dc_characters:
character_df = comixology_df[(comixology_df['Name'].str.contains(character)) &
(comixology_df['Publisher'] == 'DC Comics')]
character_row['Character_Name'] = character
character_row['Quantity_of_comics'] = len(character_df)
character_row['Average_Rating'] = np.nanmean(character_df['Rating'])
character_row['Average_Price'] = np.nanmean(character_df['Original_price'])
character_row['Average_Pages'] = np.nanmean(character_df['Page Count'])
character_row['Publisher'] = "DC Comics"
characters_dicts.append(character_row)
character_row = {}
for character in main_marvel_characters:
character_df = comixology_df[(comixology_df['Name'].str.contains(character)) &
(comixology_df['Publisher'] == 'Marvel')]
character_row['Character_Name'] = character
character_row['Quantity_of_comics'] = len(character_df)
character_row['Average_Rating'] = np.nanmean(character_df['Rating'])
character_row['Average_Price'] = np.nanmean(character_df['Original_price'])
character_row['Average_Pages'] = np.nanmean(character_df['Page Count'])
character_row['Publisher'] = "Marvel"
characters_dicts.append(character_row)
character_row = {}
characters_df = pd.DataFrame(characters_dicts)
team_row = {}
teams_dicts = []
for team in dc_teams:
team_df = comixology_df[(comixology_df['Name'].str.contains(team)) &
(comixology_df['Publisher'] == 'DC Comics')]
team_row['Team_Name'] = team
team_row['Quantity_of_comics'] = len(team_df)
team_row['Average_Rating'] = np.nanmean(team_df['Rating'])
team_row['Average_Price'] = np.nanmean(team_df['Original_price'])
team_row['Average_Pages'] = np.nanmean(team_df['Page Count'])
team_row['Publisher'] = "DC Comics"
teams_dicts.append(team_row)
team_row = {}
for team in marvel_teams:
team_df = comixology_df[(comixology_df['Name'].str.contains(team)) &
(comixology_df['Publisher'] == 'Marvel')]
team_row['Team_Name'] = team
team_row['Quantity_of_comics'] = len(team_df)
team_row['Average_Rating'] = np.nanmean(team_df['Rating'])
team_row['Average_Price'] = np.nanmean(team_df['Original_price'])
team_row['Average_Pages'] = np.nanmean(team_df['Page Count'])
team_row['Publisher'] = "Marvel"
teams_dicts.append(team_row)
team_row = {}
teams_df = pd.DataFrame(teams_dicts)
Explanation: Agora, vamos passar por cada nome de personagem e time. Primeiramente, vamos definir um DataFrame, e faremos um filtro nos nomes das comics que possuem o nome deste personagem ou time. Depois, vamos extrair algumas informações daí. A quantidade de comics será basicamente o número de linhas do DataFrame, obtido através da função len(). Depois, as médias de avaliação, preço e quantidade de páginas. Cada uma destas informações será salva em um dictionary, que será adicionado a uma lista. No fim, teremos uma lista de dicts para os personagens e uma lista de dicts para os times:
End of explanation
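One caveat of the substring matching above: a short name such as 'Atom' also matches longer titles that merely contain it. A sketch of a stricter, word-boundary match for a single name (illustration only):
import re
# \b anchors the match at word boundaries, so 'Atom' no longer matches words like 'Atomic'
mask = comixology_df['Name'].str.contains(r'\b' + re.escape('Atom') + r'\b')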
teams_df = teams_df[teams_df['Quantity_of_comics'] > 20]
characters_df = characters_df[characters_df['Quantity_of_comics'] > 20]
Explanation: Vamos considerar apenas times e personagens que possuam mais de 20 comics onde seu nome está no título do comic.
End of explanation
# Limitando a 20 o número de personagens
top_characters_by_quantity = characters_df.sort_values(by='Quantity_of_comics',
ascending=False)[['Character_Name',
'Average_Rating',
'Quantity_of_comics']].head(20)
top_characters_by_rating = characters_df.sort_values(by='Average_Rating',
ascending=False)[['Character_Name',
'Average_Rating',
'Quantity_of_comics']].head(20)
top_teams_by_quantity = teams_df.sort_values(by='Quantity_of_comics',
ascending=False)[['Team_Name',
'Average_Rating',
'Quantity_of_comics']]
top_teams_by_rating = teams_df.sort_values(by='Average_Rating',
ascending=False)[['Team_Name',
'Average_Rating',
'Quantity_of_comics']]
print(top_characters_by_quantity)
Explanation: Vamos agora verificar os maiores times e personagens em número de comics e avaliação média. Para os personagens, mesmo considerando aqueles com mais de 20 comics, ainda sobra muita coisa. Desta forma, vamos limitar a quantidade de personagens a 20, para que a lista e o gráfico não fiquem muito extensos. Depois vamos imprimir cada uma das tabelas.
End of explanation
print(top_characters_by_rating)
Explanation: Entre os personagens, temos o Batman com o maior número de comics, seguido pelo Homem-Aranha e, bem atrás o Superman completando o top 3. Depois temos uma série de outros heróis famosos, como Capitão América, Homem de Ferro, Wolverine, Flash, entre outros. Aqui, nada de muito surpreendente.
End of explanation
print(top_teams_by_quantity)
Explanation: Aqui temos uma surpresa na liderança. Mesmo com a quantidade de comics não sendo tão grande, acho difícil que alguém previsse que a Mystique seria o personagem com a avaliação média mais alta no meio destes personagens todos, tão mais populares. Nas primeiras posições, outros resultados surpreendentes, com Booster Gold em segundo, Jonah Hex em terceiro, Blue Beetle em quinto. Dos super populares que vimos na lista de cima, temos o Spider-Man, Deadpool e Wonder Woman, já no fim da lista do top 20.
End of explanation
print(top_teams_by_rating)
Explanation: Entre os times com mais comics, nada de muito surpreendente também. Os eternos X-Men em primeiro, Avengers em segundo e Justice League em terceiro. Depois, seguem os outros times menos populares.
End of explanation
plt.figure(3,figsize=(10, 6))
plt.subplot(121) # Personagem por quantidade de comics
y_axis = top_characters_by_quantity['Quantity_of_comics']
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, tuple(top_characters_by_quantity['Character_Name']),
rotation=90)
plt.title('Personagem por Qtd de Comics')
plt.tight_layout()
plt.subplot(122) # Personagem por avaliação média
y_axis = top_characters_by_rating['Average_Rating']
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, tuple(top_characters_by_rating['Character_Name']),
rotation=90)
plt.title('Personagem por Avaliação Média')
plt.tight_layout()
plt.show()
plt.figure(4,figsize=(10, 6))
plt.subplot(121) # Time por quantidade de comics
y_axis = top_teams_by_quantity['Quantity_of_comics']
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, tuple(top_teams_by_quantity['Team_Name']), rotation=90)
plt.title('Time por quantidade de comics')
plt.tight_layout()
plt.subplot(122) # Teams by average rating
y_axis = top_teams_by_rating['Average_Rating']
x_axis = np.arange(len(y_axis))
plt.bar(x_axis, y_axis)
plt.xticks(x_axis+0.4, tuple(top_teams_by_rating['Team_Name']), rotation=90)
plt.title('Time por avaliação média')
plt.tight_layout()
plt.show()
Explanation: In the ratings, the top 3 is formed by the All-Star Squadron, from DC, and the Fantastic Four and Thunderbolts, from Marvel. Surprisingly, the X-Men, the Avengers and the Suicide Squad (whose movie is arriving soon) sit in the lower part of the list.
Below we plot these tables to help with the visualization.
End of explanation |
12,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
迷你项目:蒙特卡洛方法
在此 notebook 中,你将自己编写很多蒙特卡洛 (MC) 算法的实现。
虽然我们提供了一些起始代码,但是你可以删掉这些提示并从头编写代码。
第 0 部分:探索 BlackjackEnv
请使用以下代码单元格创建 Blackjack 环境的实例。
Step1: 每个状态都是包含以下三个元素的 3 元组:
- 玩家的当前点数之和 $\in {0, 1, \ldots, 31}$,
- 庄家朝上的牌点数之和 $\in {1, \ldots, 10}$,及
- 玩家是否有能使用的王牌(no $=0$、yes $=1$)。
智能体可以执行两个潜在动作:
Step2: 通过运行以下代码单元格进行验证。
Step3: 执行以下代码单元格以按照随机策略玩二十一点。
(代码当前会玩三次二十一点——你可以随意修改该数字,或者多次运行该单元格。该单元格旨在让你体验当智能体与环境互动时返回的输出结果。)
Step4: 第 1 部分:MC 预测 - 状态值
在此部分,你将自己编写 MC 预测的实现(用于估算状态值函数)。
我们首先将研究以下策略:如果点数之和超过 18,玩家将始终停止出牌。函数 generate_episode_from_limit 会根据该策略抽取一个阶段。
该函数会接收以下输入:
- bj_env:这是 OpenAI Gym 的 Blackjack 环境的实例。
它会返回以下输出:
- episode:这是一个(状态、动作、奖励)元组列表,对应的是 $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, 其中 $T$ 是最终时间步。具体而言,episode[i] 返回 $(S_i, A_i, R_{i+1})$, episode[i][0]、episode[i][1]和 episode[i][2] 分别返回 $S_i$, $A_i$和 $R_{i+1}$。
Step5: 执行以下代码单元格以按照该策略玩二十一点。
(代码当前会玩三次二十一点——你可以随意修改该数字,或者多次运行该单元格。该单元格旨在让你熟悉 generate_episode_from_limit 函数的输出结果。)
Step6: 现在你已经准备好自己编写 MC 预测的实现了。你可以选择实现首次经历或所有经历 MC 预测;对于 Blackjack 环境,这两种技巧是对等的。
你的算法将有四个参数:
- env:这是 OpenAI Gym 环境的实例。
- num_episodes:这是通过智能体-环境互动生成的阶段次数。
- generate_episode:这是返回互动阶段的函数。
- gamma:这是折扣率。它必须是在 0 到 1(含)之间的值,默认值为:1。
该算法会返回以下输出结果:
- V:这是一个字典,其中 V[s] 是状态 s 的估算值。例如,如果代码返回以下输出结果:
Step7: 则状态 (4, 7, False) 的值估算为 -0.38775510204081631。
如果你不知道如何在 Python 中使用 defaultdict,建议查看此源代码。
Step8: 使用以下单元格计算并绘制状态值函数估算值。 (用于绘制值函数的代码来自此源代码,并且稍作了修改。)
要检查你的实现是否正确,应将以下图与解决方案 notebook Monte_Carlo_Solution.ipynb 中的对应图进行比较。
Step9: 第 2 部分:MC 预测 - 动作值
在此部分,你将自己编写 MC 预测的实现(用于估算动作值函数)。
我们首先将研究以下策略:如果点数之和超过 18,玩家将_几乎_始终停止出牌。具体而言,如果点数之和大于 18,她选择动作 STICK 的概率是 80%;如果点数之和不大于 18,她选择动作 HIT 的概率是 80%。函数 generate_episode_from_limit_stochastic 会根据该策略抽取一个阶段。
该函数会接收以下输入:
- bj_env:这是 OpenAI Gym 的 Blackjack 环境的实例。
该算法会返回以下输出结果:
- episode
Step10: 现在你已经准备好自己编写 MC 预测的实现了。你可以选择实现首次经历或所有经历 MC 预测;对于 Blackjack 环境,这两种技巧是对等的。
你的算法将有四个参数:
- env
Step11: 请使用以下单元格获取动作值函数估值 $Q$。我们还绘制了相应的状态值函数。
要检查你的实现是否正确,应将以下图与解决方案 notebook Monte_Carlo_Solution.ipynb 中的对应图进行比较。
Step12: 第 3 部分:MC 控制 - GLIE
在此部分,你将自己编写常量-$\alpha$ MC 控制的实现。
你的算法将有四个参数:
env
Step13: 通过以下单元格获取估算的最优策略和动作值函数。
Step14: 接着,我们将绘制相应的状态值函数。
Step15: 最后,我们将可视化估算为最优策略的策略。
Step16: 真最优策略 $\pi_*$ 可以在该教科书的第 82 页找到(下文也提供了)。请将你的最终估算值与最优策略进行比较——它们能够有多接近?如果你对算法的效果不满意,请花时间调整 $\epsilon$ 的衰减率和/或使该算法运行更多个阶段,以获得更好的结果。
第 4 部分:MC 控制 - 常量-$\alpha$
在此部分,你将自己编写常量-$\alpha$ MC 控制的实现。
你的算法将有三个参数:
env
Step17: 通过以下单元格获得估算的最优策略和动作值函数。
Step18: 接着,我们将绘制相应的状态值函数。
Step19: 最后,我们将可视化估算为最优策略的策略。 | Python Code:
import gym
env = gym.make('Blackjack-v0')
Explanation: Mini Project: Monte Carlo Methods
In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms.
While we have provided some starter code, you are welcome to remove these hints and write the code from scratch.
Part 0: Explore BlackjackEnv
Use the code cell below to create an instance of the Blackjack environment.
End of explanation
STICK = 0
HIT = 1
Explanation: Each state is a 3-tuple of:
- the player's current sum $\in {0, 1, \ldots, 31}$,
- the dealer's face-up card $\in {1, \ldots, 10}$, and
- whether or not the player has a usable ace (no $=0$, yes $=1$).
The agent has two potential actions:
End of explanation
print(env.observation_space)
print(env.action_space)
Explanation: Verify this by running the code cell below.
End of explanation
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
Explanation: Execute the code cell below to play Blackjack with a random policy.
(The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment.)
End of explanation
def generate_episode_from_limit(bj_env):
episode = []
state = bj_env.reset()
while True:
action = 0 if state[0] > 18 else 1
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
Explanation: Part 1: MC Prediction - State Values
In this section, you will write your own implementation of MC prediction (for estimating the state-value function).
We begin by investigating a policy where the player always sticks if the sum of her cards exceeds 18. The function generate_episode_from_limit samples an episode using this policy.
The function accepts as input:
- bj_env: This is an instance of OpenAI Gym's Blackjack environment.
It returns as output:
- episode: This is a list of (state, action, reward) tuples and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, episode[i] returns $(S_i, A_i, R_{i+1})$, and episode[i][0], episode[i][1] and episode[i][2] return $S_i$, $A_i$ and $R_{i+1}$, respectively.
End of explanation
for i in range(3):
print(generate_episode_from_limit(env))
Explanation: Execute the code cell below to play Blackjack with this policy.
(The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the generate_episode_from_limit function.)
End of explanation
{(4, 7, False): -0.38775510204081631, (18, 6, False): -0.58434296365330851, (13, 2, False): -0.43409090909090908, (6, 7, False): -0.3783783783783784, ...
Explanation: Now you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; for the Blackjack environment, the two techniques are equivalent.
Your algorithm has four arguments:
- env: This is an instance of an OpenAI Gym environment.
- num_episodes: This is the number of episodes that are generated through agent-environment interaction.
- generate_episode: This is a function that returns an episode of interaction.
- gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).
The algorithm returns as output:
- V: This is a dictionary where V[s] is the estimated value of state s. For example, if your code returns the following output:
End of explanation
from collections import defaultdict
import numpy as np
import sys
def mc_prediction_v(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionary of lists
returns = defaultdict(list)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return V
Explanation: then the value of state (4, 7, False) was estimated to be -0.38775510204081631.
If you are unfamiliar with how to use defaultdict in Python, you are encouraged to check out this source code.
End of explanation
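A hedged sketch of one way to fill in the TODO above - an every-visit MC estimate of the state-value function. It reuses the defaultdict / numpy imports from the starter cell and is only an illustration, not the official solution:
def mc_prediction_v_sketch(env, num_episodes, generate_episode, gamma=1.0):
    returns = defaultdict(list)
    for i_episode in range(1, num_episodes+1):
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        # discount factors 1, gamma, gamma^2, ...
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            # discounted return following this visit of the state
            G = sum(rewards[i:] * discounts[:len(rewards)-i])
            returns[state].append(G)
    V = {state: np.mean(g) for state, g in returns.items()}
    return V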
from plot_utils import plot_blackjack_values
# obtain the value function
V = mc_prediction_v(env, 500000, generate_episode_from_limit)
# plot the value function
plot_blackjack_values(V)
Explanation: Use the cell below to calculate and plot the state-value function estimate. (The code for plotting the value function has been borrowed from this source code and slightly adapted.)
To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook Monte_Carlo_Solution.ipynb.
End of explanation
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
Explanation: Part 2: MC Prediction - Action Values
In this section, you will write your own implementation of MC prediction (for estimating the action-value function).
We begin by investigating a policy where the player almost always sticks if the sum of her cards exceeds 18. In particular, she selects action STICK with 80% probability if the sum is greater than 18; and if the sum is 18 or below, she selects action HIT with 80% probability. The function generate_episode_from_limit_stochastic samples an episode using this policy.
The function accepts as input:
- bj_env: This is an instance of OpenAI Gym's Blackjack environment.
It returns as output:
- episode: This is a list of (state, action, reward) tuples and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, episode[i] returns $(S_i, A_i, R_{i+1})$, and episode[i][0], episode[i][1] and episode[i][2] return $S_i$, $A_i$ and $R_{i+1}$, respectively.
End of explanation
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
Explanation: Now you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; for the Blackjack environment, the two techniques are equivalent.
Your algorithm has four arguments:
- env: This is an instance of an OpenAI Gym environment.
- num_episodes: This is the number of episodes that are generated through agent-environment interaction.
- generate_episode: This is a function that returns an episode of interaction.
- gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).
The algorithm returns as output:
Q: This is a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value corresponding to state s and action a.
End of explanation
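A hedged sketch of a possible completion for the TODO in mc_prediction_q - an every-visit MC estimate of the action-value function (again reusing the imports above; not the official solution):
def mc_prediction_q_sketch(env, num_episodes, generate_episode, gamma=1.0):
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    for i_episode in range(1, num_episodes+1):
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            # accumulate discounted returns per (state, action) pair
            returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:len(rewards)-i])
            N[state][actions[i]] += 1.0
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
    return Q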
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
Explanation: Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook Monte_Carlo_Solution.ipynb.
End of explanation
def mc_control_GLIE(env, num_episodes, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionaries of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
Explanation: Part 3: MC Control - GLIE
In this section, you will write your own implementation of GLIE MC control.
Your algorithm has four arguments:
env: This is an instance of an OpenAI Gym environment.
num_episodes: This is the number of episodes that are generated through agent-environment interaction.
generate_episode: This is a function that returns an episode of interaction.
gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).
The algorithm returns as output:
Q: This is a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value corresponding to state s and action a.
policy: This is a dictionary where policy[s] returns the action that the agent chooses after observing state s.
(Feel free to define additional functions to help you to organize your code.)
End of explanation
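A hedged sketch of one possible GLIE MC control completion. The epsilon schedule and the helper name are arbitrary illustrative choices, not the official solution:
def get_probs_sketch(Q_s, epsilon, nA):
    # epsilon-greedy action probabilities for a single state
    policy_s = np.ones(nA) * epsilon / nA
    policy_s[np.argmax(Q_s)] += 1 - epsilon
    return policy_s

def mc_control_GLIE_sketch(env, num_episodes, gamma=1.0):
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    N = defaultdict(lambda: np.zeros(nA))
    for i_episode in range(1, num_episodes+1):
        epsilon = 1.0 / ((i_episode // 8000) + 1)   # slowly decaying exploration
        # generate an episode with the current epsilon-greedy policy
        episode, state = [], env.reset()
        while True:
            action = np.random.choice(np.arange(nA), p=get_probs_sketch(Q[state], epsilon, nA)) \
                     if state in Q else env.action_space.sample()
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        # GLIE update: running mean of the sampled returns
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            G = sum(rewards[i:] * discounts[:len(rewards)-i])
            N[state][actions[i]] += 1.0
            Q[state][actions[i]] += (G - Q[state][actions[i]]) / N[state][actions[i]]
    policy = {s: np.argmax(v) for s, v in Q.items()}
    return policy, Q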
# obtain the estimated optimal policy and action-value function
policy_glie, Q_glie = mc_control_GLIE(env, 500000)
Explanation: Use the cell below to obtain the estimated optimal policy and action-value function.
End of explanation
# obtain the state-value function
V_glie = dict((k,np.max(v)) for k, v in Q_glie.items())
# plot the state-value function
plot_blackjack_values(V_glie)
Explanation: Next, we plot the corresponding state-value function.
End of explanation
from plot_utils import plot_policy
# plot the policy
plot_policy(policy_glie)
Explanation: Finally, we visualize the policy that is estimated to be optimal.
End of explanation
def mc_control_alpha(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
Explanation: The true optimal policy $\pi_*$ can be found on page 82 of the textbook (and is also reproduced below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$ and/or run the algorithm for more episodes to attain better results.
Part 4: MC Control - Constant-$\alpha$
In this section, you will write your own implementation of constant-$\alpha$ MC control.
Your algorithm has three arguments:
env: This is an instance of an OpenAI Gym environment.
num_episodes: This is the number of episodes that are generated through agent-environment interaction.
generate_episode: This is a function that returns an episode of interaction.
alpha: This is the step-size parameter for the update step.
gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).
The algorithm returns as output:
Q: This is a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value corresponding to state s and action a.
policy: This is a dictionary where policy[s] returns the action that the agent chooses after observing state s.
(Feel free to define additional functions to help you to organize your code.)
End of explanation
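A hedged sketch of a constant-alpha completion, reusing the get_probs_sketch helper from the GLIE sketch above; the epsilon decay schedule is an arbitrary illustrative choice:
def mc_control_alpha_sketch(env, num_episodes, alpha, gamma=1.0):
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = 1.0
    for i_episode in range(1, num_episodes+1):
        epsilon = max(epsilon * 0.99999, 0.05)   # decay exploration but keep a floor
        episode, state = [], env.reset()
        while True:
            action = np.random.choice(np.arange(nA), p=get_probs_sketch(Q[state], epsilon, nA)) \
                     if state in Q else env.action_space.sample()
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        # constant-alpha update: Q <- Q + alpha * (G - Q)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            G = sum(rewards[i:] * discounts[:len(rewards)-i])
            Q[state][actions[i]] += alpha * (G - Q[state][actions[i]])
    policy = {s: np.argmax(v) for s, v in Q.items()}
    return policy, Q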
# obtain the estimated optimal policy and action-value function
policy_alpha, Q_alpha = mc_control_alpha(env, 500000, 0.008)
Explanation: Use the cell below to obtain the estimated optimal policy and action-value function.
End of explanation
# obtain the state-value function
V_alpha = dict((k,np.max(v)) for k, v in Q_alpha.items())
# plot the state-value function
plot_blackjack_values(V_alpha)
Explanation: Next, we plot the corresponding state-value function.
End of explanation
# plot the policy
plot_policy(policy_alpha)
Explanation: Finally, we visualize the policy that is estimated to be optimal.
End of explanation |
12,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 6
Step1: How would you write a for loop to walk through this list of day numbers and temperatures, and print the temp for each day?
Step2: Hot Days
Write code that walks through this list of values, prints just the number for temps of 90 and below, but prints the number with "Hot!!!" after it for any temperature above 90.
Step3: Day Camp | Python Code:
# For loop example
for counter in range(3, 7):
print counter
print "Whee!"
print "Done."
# Arrays example
children = ["Sally", "Bobby", "Tommy", "Tonya"]
ages = [12, 9, 7, 5]
print "Sally's age is ", ages[0]
print "Bobby's age is ", ages[1]
print "Tommy's age is ", ages[2]
print "Tonya's age is ", ages[3]
for count in range(0, 4):
print "Child ", children[count], " is ", ages[count], " years old."
Explanation: Lesson 6: For Loops and Arrays
A for loop counts from a start number to an end number.
An array holds a list of related values.
End of explanation
# July temperatures
day = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
temp = [81, 82, 80, 75, 74, 80, 78, 75, 72, 74, 76]
# print something like:
# July 10: High of 81
# July 11: High of 82
# ...
Explanation: How would you write a for loop to walk through this list of day numbers and temperatures, and print the temp for each day?
End of explanation
temp = [81, 82, 80, 95, 84, 89, 90, 95, 82, 84, 96]
Explanation: Hot Days
Write code that walks through this list of values, prints just the number for temps of 90 and below, but prints the number with "Hot!!!" after it for any temperature above 90.
End of explanation
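One possible solution sketch for the Hot Days exercise, kept in the same Python 2 print style as the rest of the lesson:
for t in temp:
    if t > 90:
        print t, "Hot!!!"
    else:
        print t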
# Here are some variable name ideas to get you started.
# You can change these lines as necessary.
total = 0.00
items = [ "", "", "" ]
prices = [ ]
quantities = [ ]
Explanation: Day Camp: Present/Absent
Alex runs a day camp for children. He wants a small program to keep track of what children are present or absent each day. The program should let the user type in a child's name, and then A or P for absent or present. It should keep track of the whole list of children this way (in an array for the names, and another array for A or P). There are exactly 6 children coming to camp this time, so the program will stop after exactly 6 names. Once the names are all entered, the program should print "Present:" and a list of the names of children who are present that day. Then it should print "Absent:" and a list of the names of children who are absent that day.
Challenge: Groceries
Zalia wants a grocery calculator. Write a program that asks the user to enter the name of a grocery item, the price of the item, and then the quantity of that item. It should store each of those in arrays. Then it asks for the next item. It keeps asking for more items like this until the user puts in "quit" as the item name.
Then, it goes through the arrays once more. It calculates the price * quantity for each item, and adds up a total cost for the entire order. Then it prints out the total cost.
This challenge will use arrays, for loops, and a while loop. You may have an easier time planning this on paper first.
End of explanation |
12,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of creating a power law distribution
The <code>distlib.MRN_dist()</code> functions return a <code>distlib.DustSpectrum</code> object.
The object contains an array of grain radii (a), the number density (nd), and total dust mass (md).
Step1: Here's a quick way to see all the keys in the DustSpectrum object.
Step2: Play with WD01 dust distributions
Dust grain size distributions from Weingartner & Draine (2001)
http
Step3: The <code>DustSpectrum</code> objects also contain an <code>integrate_dust_mass()</code> function for calculating the total mass column of dust (g cm^-2). | Python Code:
mrn_test1 = distlib.MRN_dist(0.005, 0.3, 3.5)
mrn_test2 = distlib.MRN_dist(0.005, 0.25, 3.5)
mrn_test3 = distlib.MRN_dist(0.005, 0.3, 4.0)
mrn_test4 = distlib.MRN_dist(0.005, 0.3, 3.5, na=10, log=True)
Explanation: Examples of creating a power law distribution
The <code>distlib.MRN_dist()</code> functions return a <code>distlib.DustSpectrum</code> object.
The object contains an array of grain radii (a), the number density (nd), and total dust mass (md).
End of explanation
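For reference, the MRN form behind these calls is a simple power law, dn/da proportional to a**(-p) between amin and amax. A hedged numpy sketch of that shape (the normalization and grid choices here are placeholders, not the distlib implementation):
import numpy as np

def mrn_number_density_sketch(amin=0.005, amax=0.3, p=3.5, na=100, norm=1.0):
    a = np.logspace(np.log10(amin), np.log10(amax), na)   # grain radii in microns
    nd = norm * a**(-p)                                    # dn/da, up to a normalization
    return a, nd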
print(type(mrn_test1))
print(mrn_test1.__dict__.keys())
plt.plot(mrn_test1.a, mrn_test1.nd, label='amax=%.3f, p=%.2f' % (mrn_test1.a[-1], 3.5))
plt.plot(mrn_test4.a, mrn_test4.nd, 'ko', label='')
plt.plot(mrn_test2.a, mrn_test2.nd, label='amax=%.3f, p=%.2f' % (mrn_test2.a[-1], 3.5))
plt.plot(mrn_test3.a, mrn_test3.nd, label='amax=%.3f, p=%.2f' % (mrn_test3.a[-1], 4.0))
plt.legend(loc='upper right', frameon=False)
plt.loglog()
plt.xlabel('grain radius (micron)')
plt.ylabel(r'$dn/da$ (cm$^{-2}$ micron$^{-1}$)')
plt.xlim(0.005,0.3)
Explanation: Here's a quick way to see all the keys in the DustSpectrum object.
End of explanation
wd_MW_gra = distlib.make_WD01_DustSpectrum(type='Graphite', verbose=False)
wd_MW_sil = distlib.make_WD01_DustSpectrum(type='Silicate', verbose=False)
wd_MW_gra_bc6 = distlib.make_WD01_DustSpectrum(type='Graphite', bc=6.0, verbose=False)
wd_MW_sil_bc6 = distlib.make_WD01_DustSpectrum(type='Silicate', bc=6.0, verbose=False)
plt.plot(wd_MW_gra.a, wd_MW_gra.nd*wd_MW_gra.a**4, label='Graphite MW dust')
plt.plot(wd_MW_sil.a, wd_MW_sil.nd*wd_MW_sil.a**4, label='Silicate MW dust')
plt.plot(wd_MW_gra_bc6.a, wd_MW_gra_bc6.nd*wd_MW_gra_bc6.a**4, 'b--', label='bc=6')
plt.plot(wd_MW_sil_bc6.a, wd_MW_sil_bc6.nd*wd_MW_sil_bc6.a**4, 'g--', label='bc=6')
plt.xlabel('grain radius (micron)')
plt.ylabel(r'$a^4 dn/da$ (cm$^{-2}$ um$^{3}$)')
plt.legend(loc='lower left', frameon=False)
plt.loglog()
plt.xlim(0.005, 1)
plt.ylim(1.e-20,1.e-15)
Explanation: Play with WD01 dust distributions
Dust grain size distributions from Weingartner & Draine (2001)
http://adsabs.harvard.edu/abs/2001ApJ...548..296W
Use the function <code>make_WD01_DustSpectrum</code> to return a <code>DustSpectrum</code> object.
End of explanation
print("Graphite dust mass = %.3e" %(wd_MW_gra.integrate_dust_mass()))
print("Silicate dust mass = %.3e" %(wd_MW_sil.integrate_dust_mass()))
Explanation: The <code>DustSpectrum</code> objects also contain an <code>integrate_dust_mass()</code> function for calculating the total mass column of dust (g cm^-2).
End of explanation |
12,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLUE score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
Step1: Downloading the Data
We'll use a language dataset provided by http
Step2: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following
Step3: Sentence Integerizing
The utils_preproc.tokenize() method does the following
Step4: The outputted tokenizer can be used to get back the actual works
from the integers representing them
Step5: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones
Step6: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple
Step7: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that
Step8: Now let's load and integerize the sentence paris and store the tokenizer for the source and the target language into the int_lang and targ_lang variable respectively
Step9: Let us store the maximal sentence length of both languages into two variables
Step10: We are now using scikit-learn train_test_split to create our splits
Step11: Let's make sure the number of example in each split looks good
Step12: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is alwasy encoded as 1, while the <end> token is always encoded as 0
Step13: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_setence are the integer version of source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1.
Remark
Step14: Let's now create the actual train and eval dataset using the function above
Step15: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
Step16: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
Step17: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the source language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important
Step18: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output
Step19: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to what the type of input/output in our train and eval tf.data.Dataset since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder
Step20: Let's now train the model!
Step21: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models
Step23: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target languague tokenizer we will need to extract back words from predicted word integers
* max_decode_length which is the length after which we stop decoding if the <stop> token has not been predicted
Note
Step24: Now we're ready to predict!
Step25: Checkpoint Model
Now let's us save the full training encoder-decoder model, as well as the separate encoder and decoder model to disk for latter reuse
Step26: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLUE-4 counts matching n grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLUE-1 and BLUE-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info
Step27: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait unitl completes. | Python Code:
pip freeze | grep nltk || pip install nltk
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import (
Dense,
Embedding,
GRU,
Input,
)
from tensorflow.keras.models import (
load_model,
Model,
)
import utils_preproc
print(tf.__version__)
SEED = 0
MODEL_PATH = 'translate_models/baseline'
DATA_URL = 'http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip'
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
Explanation: Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLUE score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
End of explanation
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin=DATA_URL, extract=True)
path_to_file = os.path.join(
os.path.dirname(path_to_zip),
"spa-eng/spa.txt"
)
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep='\t', header=None, names=['english', 'spanish'])
data.sample(3)
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
raw = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
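A hedged sketch of what such a preprocessing function can look like (this is not the actual utils_preproc implementation):
import re
import unicodedata

def preprocess_sentence_sketch(sentence):
    # strip accents and lower-case
    s = ''.join(c for c in unicodedata.normalize('NFD', sentence.lower().strip())
                if unicodedata.category(c) != 'Mn')
    s = re.sub(r'([?.!,¿])', r' \1 ', s)       # space between punctuation and words
    s = re.sub(r'[^a-z?.!,¿]+', ' ', s)        # replace everything else with space
    s = re.sub(r'\s+', ' ', s).strip()
    return '<start> ' + s + ' <end>'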
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
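A hedged sketch of an equivalent integerizing step using the Keras Tokenizer (again, an illustration rather than the exact utils_preproc code):
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def tokenize_sketch(sentences):
    tokenizer = Tokenizer(filters='')          # sentences are already cleaned
    tokenizer.fit_on_texts(sentences)
    tensor = tokenizer.texts_to_sequences(sentences)
    tensor = pad_sequences(tensor, padding='post')   # pad to the longest sentence
    return tensor, tokenizer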
tokenizer.sequences_to_texts(integerized)
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
def load_and_preprocess(path, num_examples):
with open(path_to_file, 'r') as fp:
lines = fp.read().strip().split('\n')
# TODO 1a
sentence_pairs = [
[utils_preproc.preprocess_sentence(sent) for sent in line.split('\t')]
for line in lines[:num_examples]
]
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones:
End of explanation
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
Explanation: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
End of explanation
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that:
End of explanation
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES)
Explanation: Now let's load and integerize the sentence pairs and store the tokenizers for the source and the target language into the inp_lang and targ_lang variables respectively:
End of explanation
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
(len(input_tensor_train), len(target_tensor_train),
len(input_tensor_val), len(target_tensor_val))
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), '\n')
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
Explanation: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
def create_dataset(encoder_input, decoder_input):
# TODO 1c
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(
((encoder_input, decoder_input), target))
return dataset
Explanation: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integerized versions of the source-target language pairs and shifted_target_sentence is the same as target_sentence but with indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
End of explanation
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = create_dataset(
input_tensor_train, target_tensor_train).shuffle(
BUFFER_SIZE).repeat().batch(BATCH_SIZE, drop_remainder=True)
eval_dataset = create_dataset(
input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True)
Explanation: Let's now create the actual train and eval dataset using the function above:
End of explanation
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
encoder_inputs = Input(shape=(None,), name="encoder_input")
# TODO 2a
encoder_inputs_embedded = Embedding(
input_dim=INPUT_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_inp)(encoder_inputs)
encoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
Explanation: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
End of explanation
decoder_inputs = Input(shape=(None,), name="decoder_input")
# TODO 2b
decoder_inputs_embedded = Embedding(
input_dim=TARGET_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_targ)(decoder_inputs)
decoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state)
Explanation: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the source language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference with the encoder, is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!
The output of the encoder will be the decoder_outputs and the decoder_state.
End of explanation
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation='softmax')
predictions = decoder_dense(decoder_outputs)
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:
End of explanation
# TODO 2c
model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
Explanation: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to what the type of input/output in our train and eval tf.data.Dataset since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:
End of explanation
STEPS_PER_EPOCH = len(input_tensor_train)//BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS
)
Explanation: Let's now train the model!
End of explanation
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model = load_model(os.path.join(MODEL_PATH, 'decoder_model.h5'))
else:
# TODO 3a
encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)
decoder_state_input = Input(shape=(HIDDEN_UNITS,), name="decoder_state_input")
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = Model(
inputs=[decoder_inputs, decoder_state_input],
outputs=[predictions, decoder_state]
)
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed again to the decoder the predicted first word and as well as the new decoder_state to predict the translation second word.
This process can be continued until the decoder produces the token <stop>.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e, LOAD_CHECKPOINT is True) we will just load the models, otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signature we want.
End of explanation
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
    """
    Arguments:
    input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
    output_tokenizer: Tokenizer used to convert from int to words

    Returns translated sentences
    """
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
# TODO 4: Sampling loop
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value])
# Sample a token
sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)
tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
Explanation: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target languague tokenizer we will need to extract back words from predicted word integers
* max_decode_length which is the length after which we stop decoding if the <stop> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
End of explanation
sentences = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?"
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang),
targ_lang,
max_length_targ
)
for i in range(len(sentences)):
print('-')
print('INPUT:')
print(sentences[i])
print('REFERENCE TRANSLATION:')
print(reference_translations[i])
print('MACHINE TRANSLATION:')
print(machine_translations[i])
Explanation: Now we're ready to predict!
End of explanation
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO 3b
model.save(os.path.join(MODEL_PATH, 'model.h5'))
encoder_model.save(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model.save(os.path.join(MODEL_PATH, 'decoder_model.h5'))
with open(os.path.join(MODEL_PATH, 'encoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, 'decoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(targ_lang, fp)
Explanation: Checkpoint Model
Now let us save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse:
End of explanation
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (.25, .25, .25, .25), smoothing_function)
Explanation: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4.
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
End of explanation
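A quick sanity check of the two helpers defined above on a toy token-list pair (the sentences are made up for illustration):
reference = ['winter', 'is', 'coming', '.']
candidate = ['winter', 'is', 'here', '.']
print('BLEU-1:', bleu_1(reference, candidate))
print('BLEU-4:', bleu_4(reference, candidate))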
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
# TODO 5
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:])
decoded_sentence = decode_sequences(
input_tensor_val[idx:idx+1], targ_lang, max_length_targ)[0]
bleu_1_total += bleu_1(reference_sentence, decoded_sentence)
bleu_4_total += bleu_4(reference_sentence, decoded_sentence)
print('BLEU 1: {}'.format(bleu_1_total/num_examples))
print('BLEU 4: {}'.format(bleu_4_total/num_examples))
Explanation: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation |
12,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
Step1: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
Step2: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
Step3: 4. Enter Pearson Significance Test Function Parameters
Add function to dataset for checking if correlation is significant.
1. Specify the dataset, and the function will be added and available.
Modify the values below for your use case, can be done multiple times, then click play.
Step4: 5. Execute Pearson Significance Test Function
This does NOT need to be modified unles you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: 1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
Explanation: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
End of explanation
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
Explanation: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
End of explanation
FIELDS = {
'auth': 'service', # Credentials used for writing function.
'dataset': '', # Existing BigQuery dataset.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 4. Enter Pearson Significance Test Function Parameters
Add function to dataset for checking if correlation is significant.
1. Specify the dataset, and the function will be added and available.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'function': 'pearson_significance_test',
'to': {
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 1,'default': '','description': 'Existing BigQuery dataset.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
Explanation: 5. Execute Pearson Significance Test Function
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
12,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
Step1: First reload the data we generated in notmist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
L2 for logistic model
Step4: Actual training
Step5: Results
Without L2
Step6: Training the network
Step7: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
Step8: Answer
Step9: Accuracy goes up slightly with dropout (and no regularization)
Step10: Train the final model | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
beta = 0.001
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + beta * tf.nn.l2_loss(weights)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
L2 for logistic model
End of explanation
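For reference, the objective minimized above is $\mathcal{L} = \textrm{CE} + \beta \cdot \tfrac{1}{2}\lVert W \rVert_2^2$, since tf.nn.l2_loss(t) computes sum(t ** 2) / 2; note that only the weights, not the biases, are penalized here.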
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Actual training
End of explanation
batch_size = 128
num_hidden_nodes = 1024
g = tf.Graph()
with g.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables, input layer
w1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
b1 = tf.Variable(tf.zeros([num_hidden_nodes]))
# Variables, output layer
w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
b2 = tf.Variable(tf.zeros([num_labels]))
# Forward propagation
# To get the prediction, apply softmax to the output of this
def forward_prop(dataset, w1, b1, w2, b2):
o1 = tf.matmul(dataset, w1) + b1
output_hidden = tf.nn.relu(o1)
return tf.matmul(output_hidden, w2) + b2
train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2)
beta = 0.01
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(train_output)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2))
Explanation: Results
Without L2:
Validation accuracy: 79.2%
Test accuracy: 86.4%
With L2, β=2:
Validation accuracy: 30.4%
Test accuracy: 32.5%
β = 0.01:
Validation accuracy: 81.3%
Test accuracy: 87.4%
L2 for neural network model
Graph:
End of explanation
num_steps = 3001
with tf.Session(graph=g) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (small_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = small_dataset[offset:(offset + batch_size), :]
batch_labels = small_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Training the network. Note that this loop draws its minibatches from small_dataset and small_labels, which are defined in the following cell, so run that cell first.
End of explanation
# use only 4 batches
small_dataset = train_dataset[0:128*4, :]
small_labels = train_labels[0:128*4, :]
Explanation: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
End of explanation
# With support for dropout
batch_size = 128
num_hidden_nodes = 1024
g = tf.Graph()
with g.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables, input layer
w1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
b1 = tf.Variable(tf.zeros([num_hidden_nodes]))
# Variables, output layer
w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
b2 = tf.Variable(tf.zeros([num_labels]))
# Forward propagation
# To get the prediction, apply softmax to the output of this
def forward_prop(dataset, w1, b1, w2, b2, dropout=False):
o1 = tf.matmul(dataset, w1) + b1
output_hidden = tf.nn.relu(o1)
if dropout:
output_hidden = tf.nn.dropout(output_hidden, 0.5)
return tf.matmul(output_hidden, w2) + b2
train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2)
beta = 0.01
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * tf.nn.l2_loss(w1)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(train_output)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2))
Explanation: Answer: The minibatch accuracy is very good but both validation and test accuracy are much lower.
Minibatch accuracy: 89.8%
Validation accuracy: 51.8%
Test accuracy: 58.5%
Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
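One common way to guarantee dropout is only active during training is to drive it from a placeholder and feed a different value at evaluation time. This is a hedged sketch rather than the notebook's implementation (which uses the dropout flag in forward_prop above); keep_prob is an assumed name, and tf_train_dataset, w1, b1, w2, b2 are reused from the graph above:
keep_prob = tf.placeholder(tf.float32)
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, w1) + b1)
hidden = tf.nn.dropout(hidden, keep_prob) # scales surviving activations by 1 / keep_prob
logits_with_dropout = tf.matmul(hidden, w2) + b2
# feed keep_prob: 0.5 in the training feed_dict and keep_prob: 1.0 when evaluating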
End of explanation
# With support for dropout
batch_size = 128
num_hidden_nodes_1 = 1024
num_hidden_nodes_2 = 300
g = tf.Graph()
with g.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# transform input layer -> hidden layer 1
w1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes_1]))
b1 = tf.Variable(tf.zeros([num_hidden_nodes_1]))
# transform hidden layer 1 -> hidden layer 2
w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes_1, num_hidden_nodes_2]))
b2 = tf.Variable(tf.zeros([num_hidden_nodes_2]))
# transform hidden layer 2 -> output layer
w3 = tf.Variable(tf.truncated_normal([num_hidden_nodes_2, num_labels]))
b3 = tf.Variable(tf.zeros([num_labels]))
# Forward propagation
# To get the prediction, apply softmax to the output of this
def forward_prop(dataset, w1, b1, w2, b2, w3, b3, dropout=False):
o1 = tf.nn.tanh(tf.matmul(dataset, w1) + b1)
if dropout:
# Drop units in the first hidden layer before it feeds the second layer;
# applying dropout to o1 only after o2 has been computed would have no effect.
o1 = tf.nn.dropout(o1, 0.5)
o2 = tf.nn.tanh(tf.matmul(o1, w2) + b2)
return tf.matmul(o2, w3) + b3
train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2, w3, b3)
beta = 0.005
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))
p = tf.Print(loss, [loss])
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(0.1, global_step, 500, 0.96)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(p, global_step=global_step)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(train_output)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2, w3, b3))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2, w3, b3))
Explanation: Accuracy goes up slightly with dropout (and no regularization):
Minibatch accuracy: 93.8%
Validation accuracy: 54.1%
Test accuracy: 61.3%
With both L2 and dropout:
Minibatch accuracy: 96.9%
Validation accuracy: 74.8%
Test accuracy: 82.0%
Problem 4
Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.
One avenue you can explore is to add multiple layers.
Another one is to use learning rate decay:
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
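To make the schedule concrete, here is a hedged, plain-Python illustration of the values tf.train.exponential_decay produces with the constants used in the final model below (starting rate 0.1, decayed by 0.96 every 500 steps, staircase off):
initial_rate, decay_steps, decay_rate = 0.1, 500, 0.96
print(initial_rate * decay_rate ** (0 / decay_steps)) # 0.100 at step 0
print(initial_rate * decay_rate ** (3000 / decay_steps)) # ~0.078 at step 3000
print(initial_rate * decay_rate ** (9000 / decay_steps)) # ~0.048 at step 9000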
Final model
First let's setup a multi-layer network.
End of explanation
num_steps = 9001
with tf.Session(graph=g) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Train the final model
End of explanation |
12,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Intuition for OOP
'OOP' stands for Object Orientated Programming. Today my aim is to provide a quick overview of the topic which will help you develop an 'intuition' for what objects are and how methods work.
What is an Object?
In Python almost everything is an object! In real terms, what this means is that every 'item' in Python has a set of properties and a special set of functions that only work on items of that type.
What are methods?
Above I said that "every 'item' in Python has a set of properties and a special set of commands or functions that only work on objects of that same type." Well, the technical name for that special set of commands/properties is "methods".
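As a hedged aside (my own example, not from the lecture), you can ask Python to list the methods an object supports with dir():
print(dir("any string")[-5:]) # e.g. ['swapcase', 'title', 'translate', 'upper', 'zfill'] on Python 3
print(dir(5)[-5:]) # integers expose a different set of names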
Let me give you a simple example
Step1: In the above example we can see that we can replace the letter "c" with a "b" using the replace 'method'. What happens if I have the number 711 and want to change the 7 to a 9 to make 911?
Step6: When we try to use replace with integers we get an error; "'int' object has no attribute 'replace'". Basically this is Python's way of telling us that 'replace' is not a method belonging to the integer class.
In the next lecture I will explain how methods work in a bit more detail. However, the purpose of this lecture is to introduce to you the concepts of objects. So lets get started!
Let's think about balls...
X is a football. There is nothing special about X; it's just a normal football. The question I want you to think about is: in what ways might you interact with a football? Go ahead, make a mental list, I can wait...
<img src="https
Step12: The above code doesn't work, but it should give you a feel for what a "ball UI" could look like in Python. Now lets do something similar for a football player.
Step13: Let's suppose you are a developer on this project and a colleague of yours has implemented these methods already. The really cool thing is this UI adds a layer of abstraction that allows you to meaningfully interact with the ball/player without needing to actually understand exactly how this stuff works.
The reason why this is so damn cool is basically because we can now write some game logic, despite not knowing anything about the system the game is using to run physics, etc.
Step16: So what does the above code do? Well, even though we haven't covered indentation and if/else statements (yet) my well-chosen variables names and comments were probably sufficient for you to figure out what is going on.
Basically, this code creates two objects (a ball and a football player called "Messi"), we then check if Messi is close to the ball. If he is, he kicks it up the pitch. If he isn't near the ball Messi starts running towards it.
In short, objects and object methods in Python allow us to write code at a 'high-level', we can leave all the 'low-level' stuff for another developer (or Python itself) to handle. And that leaves us all the time in the world to do the fun stuff!
Building A Time Object..
I am going to finish today's lecture with an actual example of Object Orientated Programming in practice; I'm going to quickly build a 'Time class'. It is going to be a class that allows us to add times together and print them. For example
Step17: Okay, so what does this code do? Well, it creates an object which I call 'Time'. A Time object takes two integers as input (hours and minutes). Let's create a Time object and see what happens when I print it.
Step18: When we print time objects they are represented as a string that looks just like an alarm clock; Time(12, 30) returns "12
Step19: As a minor implementation detail my code adds Time to other Time objects, if we try to add an integer (e.g 4) to the time 10 | Python Code:
x = "I love cats." # <= x is a string...
print(x.upper()) # converts string to upper case
print(x.replace("c", "b")) # cats? I'm a bat kinda guy myself!
print(x.__add__(x)) # x.__add__(x) is EXACTLY the same as x + x.
print(x.__mul__(3)) # Equivalent to x * 3
Explanation: An Intuition for OOP
'OOP' stands for Object Orientated Programming. Today my aim is to provide a quick overview of the topic which will help you develop an 'intuition' for what objects are and how methods work.
What is an Object?
In Python almost everything is an object! In real terms, what this means is that every 'item' in Python has a set of properties and a special set of functions that only work on items of that type.
What are methods?
Above I said that "every 'item' in Python has a set of properties and a special set of commands or functions that only work on objects of that same type." Well, the technical name for that special set of commands/properties is "methods".
Let me give you a simple example:
End of explanation
z = 711
print(z.replace(7, 1))
# With this said, integers do have some of the "same" methods as well.
# Please note however these methods are the "same" in name only, behind the scenes they work very differently from each other.
i = 5
print(i.__add__(i)) # i + i => 5 + 5
print(i.__mul__(i)) # i * i => 5 * 5
Explanation: In the above example we can see that we can replace the letter "c" with a "b" using the replace 'method'. What happens if I have the number 711 and want to change the 7 to a 9 to make 911?
End of explanation
# Note the following code doesn't work, it is for demonstration purposes only!
class Ball():
def get_speed():
The get_speed method returns the current speed of the ball.
# magic goes here...
return speed
def get_direction():
The get_direction method returns the direction the ball is currently traveling in (in 3 dimensions).
# magic goes here...
return direction
def get_position():
The get_position method returns the current position of the ball.
# magic goes here...
return position
def bounce_off_of_object(other_object):
This method calculates a new speed and position of the ball after colliding with another object.
# magic goes here...
return # something
wilson = Ball() # creates a ball called Wilson.
Explanation: When we try to use replace with integers we get an error; "'int' object has no attribute 'replace'". Basically this is Python's way of telling us that 'replace' is not a method belonging to the integer class.
In the next lecture I will explain how methods work in a bit more detail. However, the purpose of this lecture is to introduce to you the concepts of objects. So lets get started!
Let's think about balls...
X is a football. There is nothing special about X; it's just a normal football. The question I want you to think about is: in what ways might you interact with a football? Go ahead, make a mental list, I can wait...
<img src="https://www.maxim.com/.image/t_share/MTM1MTQ2MDg3MDYxNzU2Mzgy/placeholder-title.jpg" style="width:300px;height:213px;" ALIGN="right">
There are a number of 'operations' we could perform on or with a ball. For example, we could name it 'Wilson'. We could also kick the ball, bounce the ball and so on.
Similarly, there are a lot of operations that might make sense with other objects but not a ball. For example, it makes sense to 'plug in' a kettle or toaster, but it's not clear what 'plugging in' a ball actually means. Likewise, we can make sense of "subtract 7 from 9", but it is not clear what is meant when somebody says "subtract 7 from a ball".
If you want to know why an object in Python has method ‘Y’ and another object (of a different type) does not have a method ‘Y’ the ball example above hopefully helps you make sense of it. In the case of strings, we can add them together because we have a clear idea how that should work. But we can’t subtract strings because it isn’t clear what should happen in a number of cases. For example, “AB” subtract “B” we can make some sense of, the result should probably just be “A”. But what about following cases:
“AB” subtract “cat”?
“AB” subtract “BA” ?
“A” subtract “a” ?
In each case it is NOT intuitively clear what should happen. The Python developers could have implemented subtraction for strings and handled all these cases in one way or another. But why would they do that? Surely the time and energy required would be better spent on other projects, such as implementing well-defined methods for other objects.
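A small hedged illustration of that refusal to guess: Python simply raises an error rather than inventing a meaning for string subtraction.
try:
    "AB" - "B"
except TypeError as error:
    print(error) # unsupported operand type(s) for -: 'str' and 'str'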
Let's Design a Ball & Footballer UI
Okay so let's imagine we are implementing a ball object in a computer game. If it is a football game we probably do not want to name footballs but we probably do want player characters to be able to interact with the ball by kicking it. We also want the ball bounce off of the ground and other objects too.
After some thought, we might come up with a list of interactions we want to build into methods, we might also start thinking about what arguments these functions (methods) might require.
End of explanation
# Note the following code doesn't work, it is for demonstration purposes only!
class Football_Player():
def name(name):
This method gives the footballer a name.
# magic goes here...
return name
def get_position():
The get_position method returns the current position of the player.
# magic goes here...
return position
def get_speed():
The get_speed method returns the current speed of the player.
# magic goes here...
return speed
def move(direction, speed):
The move method makes the player run in X direction at Y speed.
# magic goes here...
return # new_speed of self
def kick_ball(ball, power, direction):
This method kicks ball Z with X power in Y direction.
# magic goes here...
return # new_speed of ball, direction of ball.
Explanation: The above code doesn't work, but it should give you a feel for what a "ball UI" could look like in Python. Now lets do something similar for a football player.
End of explanation
# Note the following code doesn't work, it is for demonstration purposes only!
Messi = Football_Player().name("Messi") # This line creates a footballer called 'Messi'
football = Ball() # create a ball object.
if football.get_position == Messi.get_position: # This line asks if Messi and the football are at the same position in space.
Messi.kick_ball(football, "100mph", direction="NorthWest") # Messi kicks the ball 100mph to the northwest.
else: # If Messi is NOT near the ball then...
target_position = football.get_position() # Messi wants to know where the football is.
Messi.move(target_position, "12mph") # Messi is now running at 12mph towards the ball.
Explanation: Let's suppose you are a developer on this project and a colleague of yours has implemented these methods already. The really cool thing is this UI adds a layer of abstraction that allows you to meaningfully interact with the ball/player without needing to actually understand exactly how this stuff works.
The reason why this is so damn cool is basically because we can now write some game logic, despite not knowing anything about the system the game is using to run physics, etc.
End of explanation
class Time(object):
# Do you remember in the variable names lecture I said that
# you shouldn't name a variable '__something' unless you know what you are doing?
# Well, thats because double underscore is usually reserved for "hidden" class properties/functions.
# In this particular case, __init__, __repr__, __add__, all have very special meanings in Python.
def __init__(self, hours, mins=0):
assert isinstance(hours, int) and 0<= hours <=24 # Hours should be between 0 and 24 for a 24-hour clock.
assert isinstance(mins, int) and 0<= mins <=60
self.hours = hours
self.mins = mins
def __repr__(self):
return format_time(self.hours, self.mins)
def __add__(self, other):
This function takes itself and another Time object and adds them together.
a = self.mins
b = other.mins
x = self.hours
y = other.hours
c = str( (a + b) % 60 ).zfill(2) # Integer division and Modulo Arithmetic. You have seen this before!
z = str((((a+b) // 60) + x + y) % 25).zfill(2)
return format_time(z, c)
def format_time(h, m):
This function takes hours (int) and mins(int) and returns them in a pretty format. 2, 59 => 02:59
return "{}:{}".format(str(h).zfill(2), str(m).zfill(2))
Explanation: So what does the above code do? Well, even though we haven't covered indentation and if/else statements (yet) my well-chosen variables names and comments were probably sufficient for you to figure out what is going on.
Basically, this code creates two objects (a ball and a football player called "Messi"), we then check if Messi is close to the ball. If he is, he kicks it up the pitch. If he isn't near the ball Messi starts running towards it.
In short, objects and object methods in Python allow us to write code at a 'high-level', we can leave all the 'low-level' stuff for another developer (or Python itself) to handle. And that leaves us all the time in the world to do the fun stuff!
Building A Time Object..
I am going to finish today's lecture with an actual example of Object Orientated Programming in practice; I'm going to quickly build a 'Time class'. It is going to be a class that allows us to add times together and print them. For example: 6:00 + 30 minutes should equal 6:30 and 10:59 + 1:01 should equal 12:00.
Please be aware that I DO NOT expect you to understand the code below nor am I going to explain in detail how it works either. I just wanted to end this lecture on a real example, simple as that.
End of explanation
a = Time(4,50)
b = Time(0, 1)
c = Time(24, 59)
print(a, b, c, sep="\n")
Explanation: Okay, so what does this code do? Well, it creates an object which I call 'Time'. A Time object takes two integers as input (hours and minutes). Let's create a Time object and see what happens when I print it.
End of explanation
print( Time(10,40) + Time(0,20) ) # 10:40 + 00:20 => 11:00
print( Time(0,0) + Time(7,30) ) # 00:00 + 07:30 => 7:30
print( Time(20,20) + 10 ) # Error, can't add integer to a Time object
Explanation: When we print time objects they are represented as a string that looks just like an alarm clock; Time(12, 30) returns "12:30"
Now, the code above also defines a method "add". This method adds Time Y to Time X which is effectively asking:
“If the time now is X what time is in Y hours and Z minutes from now?”
Let's try adding some times together now:
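As a hedged walk-through of the arithmetic __add__ performs, here it is spelled out for 10:59 + 1:01:
a, b, x, y = 59, 1, 10, 1 # minutes and hours of the two times
print(str((a + b) % 60).zfill(2)) # '00' -> the minutes part
print(str(((a + b) // 60 + x + y) % 25).zfill(2)) # '12' -> the hours part, i.e. 12:00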
End of explanation
# Notice that, just like my "Time" object, integers, strings, floats, etc are all implemented in Python as classes.
print(type(Time(10,0)),type("hello"),type(10), type(5.67), sep="\n")
Explanation: As a minor implementation detail my code adds Time only to other Time objects; if we try to add an integer (e.g. 4) to the time 10:30 we get an error. The reason for this is that if we get handed an integer it is not entirely clear what should happen; should we assume the integer is hours? But couldn't that integer just as easily be minutes, or total elapsed time (e.g. 65 means 1:05)? Since it's not clear, we don't guess. Instead we just raise an error.
“In the face of ambiguity, refuse the temptation to guess.” ~ Zen of Python
I could expand my Time object by adding more methods; maybe I could define subtraction, or a method for converting time-zones. But I wouldn’t add a ‘kick’ or a ‘bounce’ method because we can’t make sense of kicking or bouncing “12:30”.
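For instance, subtraction might look something like the sketch below; this is my own hedged illustration (it assumes the first time is not earlier than the second) and is not part of the class above:
def __sub__(self, other):
    # would live inside the Time class, alongside __add__
    total = (self.hours * 60 + self.mins) - (other.hours * 60 + other.mins)
    return format_time((total // 60) % 25, total % 60)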
Conclusion
It is important to remember that more or less everything in Python is an object. Strings are an object, Integers are an object and so on. Objects have methods (which are usually, although not always, functions) that can be called on them. Moreover, as we have seen above we can use the "class" keyword in Python to build our own objects with a corresponding set of methods.
Hopefully I have also demonstrated to you the power of OOP; by using objects we can think about code at a 'higher-level' of abstraction. And that's powerful stuff.
In the rest of this lecture series I do not mention classes, and that's because in my opinion classes are a better topic for an intermediate guide; classes can be tricky to get working, but they are an undeniably powerful tool you should endeavour to learn more about.
End of explanation |
12,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dual CRISPR Screen Analysis
Step 6
Step1: Automated Set-Up
Step2: Scoring-Ready File Preparation | Python Code:
g_dataset_name = "Notebook6Test"
g_library_fp = '~/dual_crispr/library_definitions/test_library_2.txt'
g_count_fps_or_dirs = '/home/ec2-user/dual_crispr/test_data/test_set_6a,/home/ec2-user/dual_crispr/test_data/test_set_6b'
g_time_prefixes = "T,D"
g_prepped_counts_run_prefix = ""
g_prepped_counts_dir = '~/dual_crispr/test_outputs/test_set_6'
Explanation: Dual CRISPR Screen Analysis
Step 6: Scoring Preparation
Amanda Birmingham, CCBB, UCSD ([email protected])
Instructions
To run this notebook reproducibly, follow these steps:
1. Click Kernel > Restart & Clear Output
2. When prompted, click the red Restart & clear all outputs button
3. Fill in the values for your analysis for each of the variables in the Input Parameters section
4. Click Cell > Run All
Input Parameters
End of explanation
import inspect
import ccbb_pyutils.analysis_run_prefixes as ns_runs
import ccbb_pyutils.files_and_paths as ns_files
import ccbb_pyutils.notebook_logging as ns_logs
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
ns_logs.set_stdout_info_logger()
import dual_crispr.count_combination as ns_combine
print(inspect.getsource(ns_combine.get_combined_counts_file_suffix))
import ccbb_pyutils.string_utils as ns_string
print(inspect.getsource(ns_string.split_delimited_string_to_list))
import os
def get_count_file_fps(comma_sep_fps_or_dirs_str):
result = []
fps_or_dirs = comma_sep_fps_or_dirs_str.split(",")
for curr_fp_or_dir in fps_or_dirs:
trimmed_curr = curr_fp_or_dir.strip()
trimmed_curr = ns_files.expand_path(trimmed_curr)
if os.path.isdir(trimmed_curr):
combined_counts_fps = ns_files.get_filepaths_from_wildcard(trimmed_curr,
ns_combine.get_combined_counts_file_suffix())
result.extend(combined_counts_fps)
else:
result.append(trimmed_curr)
return result
g_library_fp = ns_files.expand_path(g_library_fp)
g_count_file_fps = get_count_file_fps(g_count_fps_or_dirs)
g_prepped_counts_run_prefix = ns_runs.check_or_set(g_prepped_counts_run_prefix,
ns_runs.generate_run_prefix(g_dataset_name))
g_time_prefixes_list = ns_string.split_delimited_string_to_list(g_time_prefixes)
g_prepped_counts_dir = ns_files.expand_path(g_prepped_counts_dir)
print(describe_var_list(['g_library_fp', 'g_count_file_fps', 'g_prepped_counts_run_prefix', 'g_time_prefixes_list']))
ns_files.verify_or_make_dir(g_prepped_counts_dir)
Explanation: Automated Set-Up
End of explanation
import dual_crispr.scoring_prep as ns_prep
print(inspect.getsource(ns_prep))
def merge_and_write_timepoint_counts(count_file_fps, constructs_fp, run_prefix, dataset_name, time_prefixes_list,
output_dir, disregard_order=True):
joined_df = ns_prep.merge_and_annotate_counts(count_file_fps, constructs_fp, dataset_name,
time_prefixes_list, disregard_order=True)
prepped_file_suffix = ns_prep.get_prepped_file_suffix()
output_fp = ns_files.build_multipart_fp(output_dir, [run_prefix, prepped_file_suffix])
joined_df.to_csv(output_fp, index=False, sep='\t')
merge_and_write_timepoint_counts(g_count_file_fps, g_library_fp, g_prepped_counts_run_prefix, g_dataset_name,
g_time_prefixes_list, g_prepped_counts_dir, True)
print(ns_files.check_file_presence(g_prepped_counts_dir, g_prepped_counts_run_prefix,
ns_prep.get_prepped_file_suffix(),
check_failure_msg="Scoring preparation failed to produce an output file."))
Explanation: Scoring-Ready File Preparation
End of explanation |
12,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian filtering in the frequency domain
This notebook contains an exploration of the characteristic function as a way to represent a probability distribution, yielding an efficient implementation of a Bayes filter.
Copyright 2015 Allen Downey
MIT License
Step1: Suppose you are tracking a rotating part and want to estimate its angle, in degrees, as a function of time, given noisy measurements of its position and velocity.
I'll represent possible positions using a vector of 360 values in degrees.
Step2: I'll represent distributions using a Numpy array of probabilities. The following function takes a Numpy array and normalizes it so the probabilities add to 1.
Step3: The following function creates a discrete approximation of a Gaussian distribution with the given parameters, evaluated at the given positions, xs
Step4: Suppose that initially we believe that the position of the part is 180 degrees, with uncertainty represented by a Gaussian distribution with $\sigma=4$.
Here's what that looks like
Step5: And suppose that we believe the part is rotating at an angular velocity of 15 degrees per time unit, with uncertainty represented by a Gaussian distribution with $\sigma=3$
Step6: The predict step
What should we believe about the position of the part after one time unit has elapsed?
A simple way to estimate the answer is to draw samples from the distributions of position and velocity, and add them together.
The following function draws a sample from a distribution (again, represented by a Numpy array of probabilities). I'm using a Pandas series because it provides a function that computes weighted samples.
Step7: As a quick check, the sample from the position distribution has the mean and standard deviation we expect.
Step8: And so does the sample from the distribution of velocities
Step9: When we add them together, we get a sample from the distribution of positions after one time unit.
The mean is the sum of the means, and the standard deviation is the hypoteneuse of a triangle with the other two standard deviations. In this case, it's a 3-4-5 triangle
Step10: Based on the samples, we can estimate the distribution of the sum.
To compute the distribution of the sum exactly, we can iterate through all possible values from both distributions, computing the sum of each pair and the product of their probabilities
Step11: This algorithm is slow (taking time proportional to $N^2$, where $N$ is the length of xs), but it works
Step12: Here's what the result looks like
Step13: And we can check the mean and standard deviation
Step14: The mean of the sum is the sum of the means
Step15: And the standard deviation of the sum is the hypoteneuse of the standard deviations
Step16: Which should be 5
Step17: What we just computed is the convolution of the two distributions.
Now we get to the fun part. The characteristic function is an alternative way to represent a distribution. It is the Fourier transform of the probability density function, or for discrete distributions, the DFT of the probability mass function.
I'll use Numpy's implementation of FFT. The result is an array of complex, so I'll just plot the magnitude and ignore the phase.
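As a hedged aside on what the FFT is computing here: for a discrete PMF $p_k$ on $N$ points, numpy.fft.fft returns (up to NumPy's sign convention) the characteristic function sampled on a regular frequency grid,
$$\phi_n = \sum_{k=0}^{N-1} p_k \, e^{-2\pi i k n / N}$$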
Step18: The Fourier transform of a Gaussian is also a Gaussian, which we can see more clearly if we rotate the characteristic function before plotting it.
Step19: We can also compute the characteristic function of the velocity distribution
Step20: You might notice that the narrower (more certain) the distribution is in the space domain, the wider (less certain) it is in the frequency domain. As it turns out, the product of the two standard deviations is constant.
We can see that more clearly by plotting the distribution side-by-side in the space and frequency domains
Step21: The following function plots Gaussian distributions with given parameters.
Step22: Here's a simple example
Step23: Now we can make sliders to control mu and sigma.
Step24: As you increase sigma, the distribution gets wider in the space domain and narrower in the frequency domain.
As you vary mu, the location changes in the space domain, and the phase changes in the frequency domain, but the magnitudes are unchanged.
But enough of that; I still haven't explained why characteristic functions are useful. Here it is
Step25: If we compute the inverse FFT of the characteristic function, we get the PMF of the new position
Step26: We can check the mean and standard deviation of the result
Step27: Yup, that's what we expected (forgiving some floating-point errors)
Step28: We can encapsulate this process in a function that computes the convolution of two distributions
Step29: Since FFT is $N \log N$, and elementwise multiplication is linear, the whole function is $N \log N$, which is better than the $N^2$ algorithm we started with.
The results from the function are the same
Step30: The update step
Now suppose that after the move we measure the position of the rotating part with a noisy instrument. If the measurement is 197 and the standard deviation of measurement error is 4, the following distribution shows the likelihood of the observed measurement for each possible, actual, position of the part
Step31: Now we can take our belief about the position of the part and update it using the observed measurement. By Bayes's theorem, we compute the product of the prior distribution and the likelihood, then renormalize to get the posterior distribution
Step32: The prior mean was 195 and the measurement was 197, so the posterior mean is in between, at 196.2 (closer to the measurement because the measurement error is 4 and the standard deviation of the prior is 5).
The posterior standard deviation is 3.1, so the measurement decreased our uncertainty about the location.
Step33: We can encapsulate the prediction step in a function
Step34: And likewise the update function
Step35: The following function takes a prior distribution, velocity, and a measurement, and performs one predict-update step.
(The uncertainty of the velocity and measurement are hard-coded in this function, but could be parameters.)
Step36: In the figure below, pos1 is the initial belief about the position, pos2 is the belief after the predict step, and pos3 is the posterior belief after the measurement.
The taller the distribution, the narrower it is, indicating more certainty about position. In general, the predict step makes us less certain, and the update makes us more certain.
Step37: So far I've been using Gaussian distributions for everything, but in that case we could skip all the computation and get the results analytically.
The implementation I showed generalizes to arbitrary distribitions. For example, suppose our initial beliefs are multimodal, for example, if we can't tell whether the part has been rotated 90 degrees.
Step38: Now we can do the same predict-update step
Step39: After the predict step, our belief is still multimodal.
Then I chose a measurement, 151, that's halfway between two modes. The result is bimodal (with the other two modes practically eliminated).
If we perform one more step | Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="white", palette="muted", color_codes=True)
%matplotlib inline
Explanation: Bayesian filtering in the frequency domain
This notebook contains an exploration of the characteristic function as a way to represent a probability distribution, yielding an efficient implementation of a Bayes filter.
Copyright 2015 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
n = 360
xs = np.arange(n)
Explanation: Suppose you are tracking a rotating part and want to estimate its angle, in degrees, as a function of time, given noisy measurements of its position and velocity.
I'll represent possible positions using a vector of 360 values in degrees.
End of explanation
def normalize(dist):
dist /= sum(dist)
Explanation: I'll represent distributions using a Numpy array of probabilities. The following function takes a Numpy array and normalizes it so the probabilities add to 1.
End of explanation
def gaussian(xs, mu, sigma):
dist = stats.norm.pdf(xs, loc=180, scale=sigma)
dist = np.roll(dist, mu-180)
normalize(dist)
return dist
Explanation: The following function creates a discrete approximation of a Gaussian distribution with the given parameters, evaluated at the given positions, xs:
End of explanation
pos = gaussian(xs, mu=180, sigma=4)
plt.plot(xs, pos);
Explanation: Suppose that initially we believe that the position of the part is 180 degrees, with uncertainty represented by a Gaussian distribution with $\sigma=4$.
Here's what that looks like:
End of explanation
move = gaussian(xs, mu=15, sigma=3)
plt.plot(xs, move);
Explanation: And suppose that we believe the part is rotating at an angular velocity of 15 degrees per time unit, with uncertainty represented by a Gaussian distribution with $\sigma=3$:
End of explanation
def sample_dist(xs, dist, n=1000):
series = pd.Series(xs)
return series.sample(n=n, weights=dist, replace=True).values
Explanation: The predict step
What should we believe about the position of the part after one time unit has elapsed?
A simple way to estimate the answer is to draw samples from the distributions of position and velocity, and add them together.
The following function draws a sample from a distribution (again, represented by a Numpy array of probabilities). I'm using a Pandas series because it provides a function that computes weighted samples.
End of explanation
pos_sample = sample_dist(xs, pos)
pos_sample.mean(), pos_sample.std()
Explanation: As a quick check, the sample from the position distribution has the mean and standard deviation we expect.
End of explanation
move_sample = sample_dist(xs, move)
move_sample.mean(), move_sample.std()
Explanation: And so does the sample from the distribution of velocities:
End of explanation
sample = pos_sample + move_sample
sample.mean(), sample.std()
Explanation: When we add them together, we get a sample from the distribution of positions after one time unit.
The mean is the sum of the means, and the standard deviation is the hypoteneuse of a triangle with the other two standard deviations. In this case, it's a 3-4-5 triangle:
End of explanation
def add_dist(xs, dist1, dist2):
res = np.zeros_like(dist1)
for x1, p1 in zip(xs, dist1):
for x2, p2 in zip(xs, dist2):
x = (x1 + x2) % 360
res[x] = res[x] + p1 * p2
return res
Explanation: Based on the samples, we can estimate the distribution of the sum.
To compute the distribution of the sum exactly, we can iterate through all possible values from both distributions, computing the sum of each pair and the product of their probabilities:
End of explanation
new_pos = add_dist(xs, pos, move)
Explanation: This algorithm is slow (taking time proportional to $N^2$, where $N$ is the length of xs), but it works:
End of explanation
plt.plot(xs, new_pos);
Explanation: Here's what the result looks like:
End of explanation
def mean_dist(xs, dist):
return sum(xs * dist)
Explanation: And we can check the mean and standard deviation:
End of explanation
mu = mean_dist(xs, new_pos)
mu
Explanation: The mean of the sum is the sum of the means:
End of explanation
def std_dist(xs, dist, mu):
return np.sqrt(sum((xs - mu)**2 * dist))
Explanation: And the standard deviation of the sum is the hypoteneuse of the standard deviations:
End of explanation
sigma = std_dist(xs, new_pos, mu)
sigma
Explanation: Which should be 5:
End of explanation
from numpy.fft import fft
char_pos = fft(pos)
plt.plot(xs, np.abs(char_pos));
Explanation: What we just computed is the convolution of the two distributions.
Now we get to the fun part. The characteristic function is an alternative way to represent a distribution. It is the Fourier transform of the probability density function, or for discrete distributions, the DFT of the probability mass function.
I'll use Numpy's implementation of FFT. The result is an array of complex, so I'll just plot the magnitude and ignore the phase.
End of explanation
plt.plot(xs, np.roll(np.abs(char_pos), 180));
Explanation: The Fourier transform of a Gaussian is also a Gaussian, which we can see more clearly if we rotate the characteristic function before plotting it.
End of explanation
char_move = fft(move)
plt.plot(xs, abs(char_move));
Explanation: We can also compute the characteristic function of the velocity distribution:
End of explanation
def plot_dist_and_char(dist):
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(7, 4))
ax1.plot(xs, dist)
ax1.set_xlabel('space domain')
char = fft(dist)
ax2.plot(xs, np.roll(abs(char), 180))
ax2.set_xlabel('frequency domain')
Explanation: You might notice that the narrower (more certain) the distribution is in the space domain, the wider (less certain) it is in the frequency domain. As it turns out, the product of the two standard deviations is constant.
We can see that more clearly by plotting the distribution side-by-side in the space and frequency domains:
End of explanation
def plot_gaussian_dist_and_char(mu=180, sigma=3):
dist = gaussian(xs, mu, sigma)
plot_dist_and_char(dist)
Explanation: The following function plots Gaussian distributions with given parameters.
End of explanation
plot_gaussian_dist_and_char(mu=180, sigma=3)
Explanation: Here's a simple example:
End of explanation
from IPython.html.widgets import interact, fixed
from IPython.html import widgets
slider1 = widgets.IntSliderWidget(min=0, max=360, value=180)
slider2 = widgets.FloatSliderWidget(min=0, max=100, value=3)
interact(plot_gaussian_dist_and_char, mu=slider1, sigma=slider2);
Explanation: Now we can make sliders to control mu and sigma.
End of explanation
char_new_pos = char_pos * char_move
plt.plot(xs, abs(char_new_pos));
Explanation: As you increase sigma, the distribution gets wider in the space domain and narrower in the frequency domain.
As you vary mu, the location changes in the space domain, and the phase changes in the frequency domain, but the magnitudes are unchanged.
But enough of that; I still haven't explained why characteristic functions are useful. Here it is: If the characteristic function of X is $\phi_X$ and the characteristic function of Y is $\phi_Y$, the characteristic function of the sum X+Y is the elementwise product of $\phi_X$ and $\phi_Y$.
So the characteristic function of the new position (after one time step) is the product of the two characteristic functions we just computed:
End of explanation
from numpy.fft import ifft
new_pos = ifft(char_new_pos).real
plt.plot(xs, new_pos);
Explanation: If we compute the inverse FFT of the characteristic function, we get the PMF of the new position:
End of explanation
def mean_std_dist(xs, dist):
xbar = mean_dist(xs, dist)
s = std_dist(xs, dist, xbar)
return xbar, s
Explanation: We can check the mean and standard deviation of the result:
End of explanation
mean_std_dist(xs, new_pos)
Explanation: Yup, that's what we expected (forgiving some floating-point errors):
End of explanation
def fft_convolve(dist1, dist2):
prod = fft(dist1) * fft(dist2)
dist = ifft(prod).real
return dist
Explanation: We can encapsulate this process in a function that computes the convolution of two distributions:
End of explanation
new_pos = fft_convolve(pos, move)
mean_std_dist(xs, new_pos)
Explanation: Since FFT is $N \log N$, and elementwise multiplication is linear, the whole function is $N \log N$, which is better than the $N^2$ algorithm we started with.
The results from the function are the same:
End of explanation
likelihood = gaussian(xs, mu=197, sigma=4)
plt.plot(xs, new_pos);
Explanation: The update step
Now suppose that after the move we measure the position of the rotating part with a noisy instrument. If the measurement is 197 and the standard deviation of measurement error is 4, the following distribution shows the likelihood of the observed measurement for each possible, actual, position of the part:
End of explanation
new_pos = new_pos * likelihood
normalize(new_pos)
plt.plot(xs, new_pos);
Explanation: Now we can take our belief about the position of the part and update it using the observed measurement. By Bayes's theorem, we compute the product of the prior distribution and the likelihood, then renormalize to get the posterior distribution:
End of explanation
mean_std_dist(xs, new_pos)
Explanation: The prior mean was 195 and the measurement was 197, so the posterior mean is in between, at 196.2 (closer to the measurement because the measurement error is 4 and the standard deviation of the prior is 5).
The posterior standard deviation is 3.1, so the measurement decreased our uncertainty about the location.
End of explanation
def predict(xs, pos, move):
new_pos = fft_convolve(pos, move)
return new_pos
Explanation: We can encapsulate the prediction step in a function:
End of explanation
def update(xs, pos, likelihood):
new_pos = pos * likelihood
normalize(new_pos)
return new_pos
Explanation: And likewise the update function:
End of explanation
def predict_update(xs, pos1, velocity, measure):
# predict
move = gaussian(xs, velocity, 3)
pos2 = predict(xs, pos1, move)
#update
likelihood = gaussian(xs, measure, 4)
pos3 = update(xs, pos2, likelihood)
#plot
plt.plot(xs, pos1, label='pos1')
plt.plot(xs, pos2, label='pos2')
plt.plot(xs, pos3, label='pos3')
plt.legend()
return pos3
Explanation: The following function takes a prior distribution, velocity, and a measurement, and performs one predict-update step.
(The uncertainty of the velocity and measurement are hard-coded in this function, but could be parameters.)
End of explanation
pos1 = gaussian(xs, 180, 4)
pos3 = predict_update(xs, pos1, velocity=15, measure=197)
Explanation: In the figure below, pos1 is the initial belief about the position, pos2 is the belief after the predict step, and pos3 is the posterior belief after the measurement.
The taller the distribution, the narrower it is, indicating more certainty about position. In general, the predict step makes us less certain, and the update makes us more certain.
End of explanation
pos1 = (gaussian(xs, 0, 4) + gaussian(xs, 90, 4) +
gaussian(xs, 180, 4) + gaussian(xs, 270, 4))
normalize(pos1)
plt.plot(xs, pos1);
Explanation: So far I've been using Gaussian distributions for everything, but in that case we could skip all the computation and get the results analytically.
The implementation I showed generalizes to arbitrary distribitions. For example, suppose our initial beliefs are multimodal, for example, if we can't tell whether the part has been rotated 90 degrees.
End of explanation
pos3 = predict_update(xs, pos1, velocity=15, measure=151)
Explanation: Now we can do the same predict-update step:
End of explanation
pos5 = predict_update(xs, pos3, velocity=15, measure=185)
Explanation: After the predict step, our belief is still multimodal.
Then I chose a measurement, 151, that's halfway between two modes. The result is bimodal (with the other two modes practically eliminated).
If we perform one more step:
End of explanation |
12,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This streams data directly into dataframes. See also
Step1: Add your custom code to read_csv_lines for processing your datafile
Step2: Code to connect to BigInsights on Cloud via WebHDFS - don't change this
Step3: Add your code here to work with the imported dataframe, df | Python Code:
# Cluster number, e.g. 10000
cluster = ''
# Cluster username
username = ''
# Cluster password
password = ''
# file path in HDFS
filepath = 'yourpath/yourfile.csv'
Explanation: This streams data directly into dataframes. See also: https://github.com/snowch/biginsight-examples/blob/master/misc/WebHDFS_Example_local_storage.ipynb
Credentials - keep this secret!
End of explanation
import pandas as pd
def read_csv_lines(lines, is_first_chunk = False):
''' returns a pandas dataframe '''
if is_first_chunk:
# you will want to set the header here if your datafile has a header record
return pd.read_csv(lines, sep='|', header=None)
else:
return pd.read_csv(lines, sep='|', header=None)
host = 'ehaasp-{0}-mastermanager.bi.services.bluemix.net'.format(cluster)
Explanation: Add your custom code to read_csv_lines for processing your datafile
End of explanation
import requests
import numpy as np
import sys
import datetime
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
print('SCRIPT START: {0}'.format(datetime.datetime.now()))
chunk_size = 10000000 # Read in 100 Mb chunks
url = "https://{0}:8443/gateway/default/webhdfs/v1/{1}?op=OPEN".format(host, filepath)
# note SSL verification is been disabled
r = requests.get(url,
auth=(username, password),
verify=False,
allow_redirects=True,
stream=True)
df = None
chunk_num = 1
remainder = ''
for chunk in r.iter_content(chunk_size):
if chunk: # filter out keep-alive new chunks
# Show progress by printing a dot - useful when chunk size is quite small
# sys.stdout.write('.')
# sys.stdout.flush()
txt = remainder + chunk
if '\n' in txt:
[lines, remainder] = txt.rsplit('\n', 1)
else:
lines = txt
if chunk_num == 1:
pdf = read_csv_lines(StringIO(lines), True)
df = sqlContext.createDataFrame(pdf)
else:
pdf = read_csv_lines(StringIO(lines), False)
df2 = sqlContext.createDataFrame(pdf)
df = df.sql_ctx.createDataFrame(
df._sc.union([df.rdd, df2.rdd]), df.schema
)
print('Imported chunk: {0} record count: {1} df count: {2}'.format(chunk_num, len(pdf), df.count()))
chunk_num = chunk_num + 1
print '\nTotal record import count: {0}'.format(df.count())
print('SCRIPT END: {0}'.format(datetime.datetime.now()))
df.cache()
Explanation: Code to connect to BigInsights on Cloud via WebHDFS - don't change this
End of explanation
df.show()
Explanation: Add your code here to work with the imported dataframe, df
End of explanation |
12,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create BigQuery stored procedures
This notebook is the second of two notebooks that guide you through completing the prerequisites for running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to create the following stored procedures that are needed by the solution
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Create the stored procedure dependencies
Step5: Create the stored procedures
Run the scripts that create the BigQuery stored procedures.
Step6: List the stored procedures | Python Code:
!pip install -q -U google-cloud-bigquery pyarrow
Explanation: Create BigQuery stored procedures
This notebook is the second of two notebooks that guide you through completing the prerequisites for running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to create the following stored procedures that are needed by the solution:
sp_ComputePMI - Computes pointwise mutual information (PMI) from item co-occurence data. This data is used by a matrix factorization model to learn item embeddings.
sp_TrainItemMatchingModel - Creates the item_embedding_model matrix factorization model. This model learns item embeddings based on the PMI data computed by sp_ComputePMI.
sp_ExractEmbeddings - Extracts the item embedding values from the item_embedding_model model, aggregates these values to produce a single embedding vector for each item, and stores these vectors in the item_embeddings table. The vector data is later exported to Cloud Storage to be used for item embedding lookup.
Before starting this notebook, you must run the 00_prep_bq_and_datastore notebook to complete the first part of the prerequisites.
After completing this notebook, you can run the solution either step-by-step or with a TFX pipeline:
To start running the solution step-by-step, run the 01_train_bqml_mf_pmi notebook to create item embeddings.
To run the solution by using a TFX pipeline, run the tfx01_interactive notebook to create the pipeline.
Setup
Install the required Python packages, configure the environment variables, and authenticate your GCP account.
End of explanation
import os
from google.cloud import bigquery
Explanation: Import libraries
End of explanation
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
SQL_SCRIPTS_DIR = "sql_scripts"
BQ_DATASET_NAME = "recommendations"
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE TABLE IF NOT EXISTS recommendations.item_cooc
AS SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS cooc, 0 AS pmi;
%%bigquery --project $PROJECT_ID
CREATE MODEL IF NOT EXISTS recommendations.item_matching_model
OPTIONS(
MODEL_TYPE='matrix_factorization',
USER_COL='item1_Id',
ITEM_COL='item2_Id',
RATING_COL='score'
)
AS
SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS score;
Explanation: Create the stored procedure dependencies
End of explanation
client = bigquery.Client(project=PROJECT_ID)
sql_scripts = dict()
for script_file in [file for file in os.listdir(SQL_SCRIPTS_DIR) if ".sql" in file]:
script_file_path = os.path.join(SQL_SCRIPTS_DIR, script_file)
sql_script = open(script_file_path, "r").read()
sql_script = sql_script.replace("@DATASET_NAME", BQ_DATASET_NAME)
sql_scripts[script_file] = sql_script
for script_file in sql_scripts:
print(f"Executing {script_file} script...")
query = sql_scripts[script_file]
query_job = client.query(query)
result = query_job.result()
print("Done.")
Explanation: Create the stored procedures
Run the scripts that create the BigQuery stored procedures.
End of explanation
query = f"SELECT * FROM {BQ_DATASET_NAME}.INFORMATION_SCHEMA.ROUTINES;"
query_job = client.query(query)
query_job.result().to_dataframe()
Explanation: List the stored procedures
End of explanation |
12,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Understanding TensorFlow Distributions Shapes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Basics
There are three important concepts associated with TensorFlow Distributions shapes
Step3: In this section we'll explore scalar distributions
Step4: The Poisson distribution is a scalar distribution, so its event shape is always []. If we specify more rates, these show up in the batch shape. The final pair of examples is interesting
Step5: The interesting example above is the Broadcasting Scale distribution. The loc parameter has shape [4], and the scale parameter has shape [2, 1]. Using Numpy broadcasting rules, the batch shape is [2, 4]. An equivalent (but less elegant and not-recommended) way to define the "Broadcasting Scale" distribution would be
Step6: We can see why the broadcasting notation is useful, although it's also a source of headaches and bugs.
Sampling Scalar Distributions
There are two main things we can do with distributions
Step7: That's about all there is to say about sample
Step8: Note how in the first example, the input and output have shape [2, 3] and in the second example they have shape [1, 1, 2, 3].
That would be all there was to say, if it weren't for broadcasting. Here are the rules once we take broadcasting into account. We describe it in full generality and note simplifications for scalar distributions
Step9: The tensor [10.] (with shape [1]) is broadcast across the batch_shape of 3, so we evaluate all three Poissons' log probability at the value 10.
Step10: In the above example, the input tensor has shape [2, 2, 1], while the distribution object has a batch shape of 3. So for each of the [2, 2] sample dimensions, the single value provided gets broadcast to each of the three Poissons.
A possibly useful way to think of it
Step11: The above examples involved broadcasting over the batch, but the sample shape was empty. Suppose we have a collection of values, and we want to get the log probability of each value at each point in the batch. We could do it manually
Step12: Or we could let broadcasting handle the last batch dimension
Step13: We can also (perhaps somewhat less naturally) let broadcasting handle just the first batch dimension
Step14: Or we could let broadcasting handle both batch dimensions
Step15: The above worked fine when we had only two values we wanted, but suppose we had a long list of values we wanted to evaluate at every batch point. For that, the following notation, which adds extra dimensions of size 1 to the right side of the shape, is extremely useful
Step16: This is an instance of strided slice notation, which is worth knowing.
Going back to three_poissons for completeness, the same example looks like
Step17: Multivariate distributions
We now turn to multivariate distributions, which have non-empty event shape. Let's look at multinomial distributions.
Step18: Note how in the last three examples, the batch_shape is always [2], but we can use broadcasting to either have a shared total_count or a shared probs (or neither), because under the hood they are broadcast to have the same shape.
Sampling is straightforward, given what we know already
Step19: Computing log probabilities is equally straightforward. Let's work an example with diagonal Multivariate Normal distributions. (Multinomials are not very broadcast friendly, since the constraints on the counts and probabilities mean broadcasting will often produce inadmissible values.) We'll use a batch of 2 3-dimensional distributions with the same mean but different scales (standard deviations)
Step20: (Note that although we used distributions where the scales were multiples of the identity, this is not a restriction; we could pass scale instead of scale_identity_multiplier.)
Now let's evaluate the log probability of each batch point at its mean and at a shifted mean
Step21: Exactly equivalently, we can use https
Step22: On the other hand, if we don't insert the extra dimension, we pass [1., 2., 3.] to the first batch point and [3., 4., 5.] to the second
Step23: Shape Manipulation Techniques
The Reshape Bijector
The Reshape bijector can be used to reshape the event_shape of a distribution. Let's see an example
Step24: We created a multinomial with an event shape of [6]. The Reshape Bijector allows us to treat this as a distribution with an event shape of [2, 3].
A Bijector represents a differentiable, one-to-one function on an open subset of ${\mathbb R}^n$. Bijectors are used in conjunction with TransformedDistribution, which models a distribution $p(y)$ in terms of a base distribution $p(x)$ and a Bijector that represents $Y = g(X)$.
Let's see it in action
Step25: This is the only thing the Reshape bijector can do
Step26: We can think of this as a two-by-five array of coins with the associated probabilities of heads. Let's evaluate the probability of a particular, arbitrary set of ones-and-zeros
Step27: We can use Independent to turn this into two different "sets of five Bernoulli's", which is useful if we want to consider a "row" of coin flips coming up in a given pattern as a single outcome
Step28: Mathematically, we're computing the log probability of each "set" of five by summing the log probabilities of the five "independent" coin flips in the set, which is where the distribution gets its name
Step29: We can go even further and use Independent to create a distribution where individual events are a set of two-by-five Bernoulli's
Step30: It's worth noting that from the perspective of sample, using Independent changes nothing | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import collections
import tensorflow as tf
tf.compat.v2.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
Explanation: Understanding TensorFlow Distributions Shapes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Understanding_TensorFlow_Distributions_Shapes"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
End of explanation
def describe_distributions(distributions):
print('\n'.join([str(d) for d in distributions]))
Explanation: Basics
There are three important concepts associated with TensorFlow Distributions shapes:
- Event shape describes the shape of a single draw from the distribution; it may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5].
- Batch shape describes independent, not identically distributed draws, aka a "batch" of distributions.
- Sample shape describes independent, identically distributed draws of batches from the distribution family.
The event shape and the batch shape are properties of a Distribution object, whereas the sample shape is associated with a specific call to sample or log_prob.
This notebook's purpose is to illustrate these concepts through examples, so if this isn't immediately obvious, don't worry!
For another conceptual overview of these concepts, see this blog post.
A note on TensorFlow Eager.
This entire notebook is written using TensorFlow Eager. None of the concepts presented rely on Eager, although with Eager, distribution batch and event shapes are evaluated (and therefore known) when the Distribution object is created in Python, whereas in graph (non-Eager mode), it is possible to define distributions whose event and batch shapes are undetermined until the graph is run.
Scalar Distributions
As we noted above, a Distribution object has defined event and batch shapes. We'll start with a utility to describe distributions:
End of explanation
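As a quick illustrative check of these definitions (an aside added here, not part of the original notebook flow), the event and batch shapes can be read directly off a distribution object:
# A 5-dimensional multivariate normal: one draw is a 5-vector, so event_shape is [5];
# there is a single distribution here, so batch_shape is [].
mvn = tfd.MultivariateNormalDiag(loc=tf.zeros(5))
print(mvn.event_shape)      # expected: [5]
print(mvn.batch_shape)      # expected: []
print(mvn.sample(7).shape)  # expected: (7, 5) -- sample_shape + batch_shape + event_shape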
poisson_distributions = [
tfd.Poisson(rate=1., name='One Poisson Scalar Batch'),
tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons'),
tfd.Poisson(rate=[[1., 10., 100.,], [2., 20., 200.]],
name='Two-by-Three Poissons'),
tfd.Poisson(rate=[1.], name='One Poisson Vector Batch'),
tfd.Poisson(rate=[[1.]], name='One Poisson Expanded Batch')
]
describe_distributions(poisson_distributions)
Explanation: In this section we'll explore scalar distributions: distributions with an event shape of []. A typical example is the Poisson distribution, specified by a rate:
End of explanation
normal_distributions = [
tfd.Normal(loc=0., scale=1., name='Standard'),
tfd.Normal(loc=[0.], scale=1., name='Standard Vector Batch'),
tfd.Normal(loc=[0., 1., 2., 3.], scale=1., name='Different Locs'),
tfd.Normal(loc=[0., 1., 2., 3.], scale=[[1.], [5.]],
name='Broadcasting Scale')
]
describe_distributions(normal_distributions)
Explanation: The Poisson distribution is a scalar distribution, so its event shape is always []. If we specify more rates, these show up in the batch shape. The final pair of examples is interesting: there's only a single rate, but because that rate is embedded in a numpy array with non-empty shape, that shape becomes the batch shape.
The standard Normal distribution is also a scalar. Its event shape is [], just like for the Poisson, but we'll play with it to see our first example of broadcasting. The Normal is specified using loc and scale parameters:
End of explanation
describe_distributions(
[tfd.Normal(loc=[[0., 1., 2., 3], [0., 1., 2., 3.]],
scale=[[1., 1., 1., 1.], [5., 5., 5., 5.]])])
Explanation: The interesting example above is the Broadcasting Scale distribution. The loc parameter has shape [4], and the scale parameter has shape [2, 1]. Using Numpy broadcasting rules, the batch shape is [2, 4]. An equivalent (but less elegant and not-recommended) way to define the "Broadcasting Scale" distribution would be:
End of explanation
def describe_sample_tensor_shape(sample_shape, distribution):
print('Sample shape:', sample_shape)
print('Returned sample tensor shape:',
distribution.sample(sample_shape).shape)
def describe_sample_tensor_shapes(distributions, sample_shapes):
started = False
for distribution in distributions:
print(distribution)
for sample_shape in sample_shapes:
describe_sample_tensor_shape(sample_shape, distribution)
print()
sample_shapes = [1, 2, [1, 5], [3, 4, 5]]
describe_sample_tensor_shapes(poisson_distributions, sample_shapes)
describe_sample_tensor_shapes(normal_distributions, sample_shapes)
Explanation: We can see why the broadcasting notation is useful, although it's also a source of headaches and bugs.
Sampling Scalar Distributions
There are two main things we can do with distributions: we can sample from them and we can compute log_probs. Let's explore sampling first. The basic rule is that when we sample from a distribution, the resulting Tensor has shape [sample_shape, batch_shape, event_shape], where batch_shape and event_shape are provided by the Distribution object, and sample_shape is provided by the call to sample. For scalar distributions, event_shape = [], so the Tensor returned from sample will have shape [sample_shape, batch_shape]. Let's try it:
End of explanation
three_poissons = tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons')
three_poissons
three_poissons.log_prob([[1., 10., 100.], [100., 10., 1]]) # sample_shape is [2].
three_poissons.log_prob([[[[1., 10., 100.], [100., 10., 1.]]]]) # sample_shape is [1, 1, 2].
Explanation: That's about all there is to say about sample: returned sample tensors have shape [sample_shape, batch_shape, event_shape].
Computing log_prob For Scalar Distributions
Now let's take a look at log_prob, which is somewhat trickier. log_prob takes as input a (non-empty) tensor representing the location(s) at which to compute the log_prob for the distribution. In the most straightforward case, this tensor will have a shape of the form [sample_shape, batch_shape, event_shape], where batch_shape and event_shape match the batch and event shapes of the distribution. Recall once more that for scalar distributions, event_shape = [], so the input tensor has shape [sample_shape, batch_shape]. In this case, we get back a tensor of shape [sample_shape, batch_shape]:
End of explanation
three_poissons.log_prob([10.])
Explanation: Note how in the first example, the input and output have shape [2, 3] and in the second example they have shape [1, 1, 2, 3].
That would be all there was to say, if it weren't for broadcasting. Here are the rules once we take broadcasting into account. We describe it in full generality and note simplifications for scalar distributions:
1. Define n = len(batch_shape) + len(event_shape). (For scalar distributions, len(event_shape)=0.)
2. If the input tensor t has fewer than n dimensions, pad its shape by adding dimensions of size 1 on the left until it has exactly n dimensions. Call the resulting tensor t'.
3. Broadcast the n rightmost dimensions of t' against the [batch_shape, event_shape] of the distribution you're computing a log_prob for. In more detail: for the dimensions where t' already matches the distribution, do nothing, and for the dimensions where t' has a singleton, replicate that singleton the appropriate number of times. Any other situation is an error. (For scalar distributions, we only broadcast against batch_shape, since event_shape = [].)
4. Now we're finally able to compute the log_prob. The resulting tensor will have shape [sample_shape, batch_shape], where sample_shape is defined to be any dimensions of t or t' to the left of the n-rightmost dimensions: sample_shape = shape(t)[:-n].
This might be a mess if you don't know what it means, so let's work some examples:
End of explanation
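As a quick, concrete walk-through of these rules (an illustrative check added here), consider three_poissons, which has batch_shape = [3] and event_shape = [], so n = 1:
# The input [10.] has shape [1]: it already has n = 1 dimensions, so no padding is needed;
# its rightmost dimension (1) broadcasts against batch_shape [3];
# and sample_shape = shape(t)[:-1] = [], so log_prob returns a tensor of shape [3].
print(three_poissons.log_prob([10.]).shape)  # expected: (3,)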
three_poissons.log_prob([[[1.], [10.]], [[100.], [1000.]]])
Explanation: The tensor [10.] (with shape [1]) is broadcast across the batch_shape of 3, so we evaluate all three Poissons' log probability at the value 10.
End of explanation
poisson_2_by_3 = tfd.Poisson(
rate=[[1., 10., 100.,], [2., 20., 200.]],
name='Two-by-Three Poissons')
poisson_2_by_3.log_prob(1.)
poisson_2_by_3.log_prob([1.]) # Exactly equivalent to above, demonstrating the scalar special case.
poisson_2_by_3.log_prob([[1., 1., 1.], [1., 1., 1.]]) # Another way to write the same thing. No broadcasting.
poisson_2_by_3.log_prob([[1., 10., 100.]]) # Input is [1, 3] broadcast to [2, 3].
poisson_2_by_3.log_prob([[1., 10., 100.], [1., 10., 100.]]) # Equivalent to above. No broadcasting.
poisson_2_by_3.log_prob([[1., 1., 1.], [2., 2., 2.]]) # No broadcasting.
poisson_2_by_3.log_prob([[1.], [2.]]) # Equivalent to above. Input shape [2, 1] broadcast to [2, 3].
Explanation: In the above example, the input tensor has shape [2, 2, 1], while the distribution object has a batch shape of 3. So for each of the [2, 2] sample dimensions, the single value provided gets broadcast to each of the three Poissons.
A possibly useful way to think of it: because three_poissons has batch_shape = [3], a call to log_prob must take a Tensor whose last dimension is either 1 or 3; anything else is an error. (The numpy broadcasting rules treat the special case of a scalar as being totally equivalent to a Tensor of shape [1].)
Let's test our chops by playing with the more complex Poisson distribution with batch_shape = [2, 3]:
End of explanation
poisson_2_by_3.log_prob([[[1., 1., 1.], [1., 1., 1.]], [[2., 2., 2.], [2., 2., 2.]]]) # Input shape [2, 2, 3].
Explanation: The above examples involved broadcasting over the batch, but the sample shape was empty. Suppose we have a collection of values, and we want to get the log probability of each value at each point in the batch. We could do it manually:
End of explanation
poisson_2_by_3.log_prob([[[1.], [1.]], [[2.], [2.]]]) # Input shape [2, 2, 1].
Explanation: Or we could let broadcasting handle the last batch dimension:
End of explanation
poisson_2_by_3.log_prob([[[1., 1., 1.]], [[2., 2., 2.]]]) # Input shape [2, 1, 3].
Explanation: We can also (perhaps somewhat less naturally) let broadcasting handle just the first batch dimension:
End of explanation
poisson_2_by_3.log_prob([[[1.]], [[2.]]]) # Input shape [2, 1, 1].
Explanation: Or we could let broadcasting handle both batch dimensions:
End of explanation
poisson_2_by_3.log_prob(tf.constant([1., 2.])[..., tf.newaxis, tf.newaxis])
Explanation: The above worked fine when we had only two values we wanted, but suppose we had a long list of values we wanted to evaluate at every batch point. For that, the following notation, which adds extra dimensions of size 1 to the right side of the shape, is extremely useful:
End of explanation
three_poissons.log_prob([[1.], [10.], [50.], [100.]])
three_poissons.log_prob(tf.constant([1., 10., 50., 100.])[..., tf.newaxis]) # Equivalent to above.
Explanation: This is an instance of strided slice notation, which is worth knowing.
Going back to three_poissons for completeness, the same example looks like:
End of explanation
multinomial_distributions = [
# Multinomial is a vector-valued distribution: if we have k classes,
# an individual sample from the distribution has k values in it, so the
# event_shape is `[k]`.
tfd.Multinomial(total_count=100., probs=[.5, .4, .1],
name='One Multinomial'),
tfd.Multinomial(total_count=[100., 1000.], probs=[.5, .4, .1],
name='Two Multinomials Same Probs'),
tfd.Multinomial(total_count=100., probs=[[.5, .4, .1], [.1, .2, .7]],
name='Two Multinomials Same Counts'),
tfd.Multinomial(total_count=[100., 1000.],
probs=[[.5, .4, .1], [.1, .2, .7]],
name='Two Multinomials Different Everything')
]
describe_distributions(multinomial_distributions)
Explanation: Multivariate distributions
We now turn to multivariate distributions, which have non-empty event shape. Let's look at multinomial distributions.
End of explanation
describe_sample_tensor_shapes(multinomial_distributions, sample_shapes)
Explanation: Note how in the last three examples, the batch_shape is always [2], but we can use broadcasting to either have a shared total_count or a shared probs (or neither), because under the hood they are broadcast to have the same shape.
Sampling is straightforward, given what we know already:
End of explanation
two_multivariate_normals = tfd.MultivariateNormalDiag(loc=[1., 2., 3.], scale_identity_multiplier=[1., 2.])
two_multivariate_normals
Explanation: Computing log probabilities is equally straightforward. Let's work an example with diagonal Multivariate Normal distributions. (Multinomials are not very broadcast friendly, since the constraints on the counts and probabilities mean broadcasting will often produce inadmissible values.) We'll use a batch of 2 3-dimensional distributions with the same mean but different scales (standard deviations):
End of explanation
two_multivariate_normals.log_prob([[[1., 2., 3.]], [[3., 4., 5.]]]) # Input has shape [2,1,3].
Explanation: (Note that although we used distributions where the scales were multiples of the identity, this is not a restriction; we could pass scale instead of scale_identity_multiplier.)
Now let's evaluate the log probability of each batch point at its mean and at a shifted mean:
End of explanation
two_multivariate_normals.log_prob(
tf.constant([[1., 2., 3.], [3., 4., 5.]])[:, tf.newaxis, :]) # Equivalent to above.
Explanation: Exactly equivalently, we can use https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice to insert an extra shape=1 dimension in the middle of a constant:
End of explanation
two_multivariate_normals.log_prob(tf.constant([[1., 2., 3.], [3., 4., 5.]]))
Explanation: On the other hand, if we don't insert the extra dimension, we pass [1., 2., 3.] to the first batch point and [3., 4., 5.] to the second:
End of explanation
six_way_multinomial = tfd.Multinomial(total_count=1000., probs=[.3, .25, .2, .15, .08, .02])
six_way_multinomial
Explanation: Shape Manipulation Techniques
The Reshape Bijector
The Reshape bijector can be used to reshape the event_shape of a distribution. Let's see an example:
End of explanation
transformed_multinomial = tfd.TransformedDistribution(
distribution=six_way_multinomial,
bijector=tfb.Reshape(event_shape_out=[2, 3]))
transformed_multinomial
six_way_multinomial.log_prob([500., 100., 100., 150., 100., 50.])
transformed_multinomial.log_prob([[500., 100., 100.], [150., 100., 50.]])
Explanation: We created a multinomial with an event shape of [6]. The Reshape Bijector allows us to treat this as a distribution with an event shape of [2, 3].
A Bijector represents a differentiable, one-to-one function on an open subset of ${\mathbb R}^n$. Bijectors are used in conjunction with TransformedDistribution, which models a distribution $p(y)$ in terms of a base distribution $p(x)$ and a Bijector that represents $Y = g(X)$.
Let's see it in action:
End of explanation
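For comparison, the same TransformedDistribution pattern works with any bijector; the sketch below (an illustrative aside) exponentiates a standard Normal, i.e. Y = exp(X), giving a log-normal-shaped distribution without changing the event or batch shape:
exp_normal = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Exp())
print(exp_normal.event_shape, exp_normal.batch_shape)  # both [] -- Exp reshapes nothing
print(exp_normal.sample(3))  # three positive draws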
two_by_five_bernoulli = tfd.Bernoulli(
probs=[[.05, .1, .15, .2, .25], [.3, .35, .4, .45, .5]],
name="Two By Five Bernoulli")
two_by_five_bernoulli
Explanation: This is the only thing the Reshape bijector can do: it cannot turn event dimensions into batch dimensions or vice-versa.
The Independent Distribution
The Independent distribution is used to treat a collection of independent, not-necessarily-identical (aka a batch of) distributions as a single distribution. More concisely, Independent allows us to convert dimensions in batch_shape to dimensions in event_shape. We'll illustrate by example:
End of explanation
pattern = [[1., 0., 0., 1., 0.], [0., 0., 1., 1., 1.]]
two_by_five_bernoulli.log_prob(pattern)
Explanation: We can think of this as a two-by-five array of coins with the associated probabilities of heads. Let's evaluate the probability of a particular, arbitrary set of ones-and-zeros:
End of explanation
two_sets_of_five = tfd.Independent(
distribution=two_by_five_bernoulli,
reinterpreted_batch_ndims=1,
name="Two Sets Of Five")
two_sets_of_five
Explanation: We can use Independent to turn this into two different "sets of five Bernoulli's", which is useful if we want to consider a "row" of coin flips coming up in a given pattern as a single outcome:
End of explanation
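A quick numerical check of that claim (illustrative): summing the base distribution's per-flip log probabilities over the last axis reproduces the Independent distribution's log_prob.
manual_sum = tf.reduce_sum(two_by_five_bernoulli.log_prob(pattern), axis=-1)
print(manual_sum)                          # shape [2]
print(two_sets_of_five.log_prob(pattern))  # same two values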
two_sets_of_five.log_prob(pattern)
Explanation: Mathematically, we're computing the log probability of each "set" of five by summing the log probabilities of the five "independent" coin flips in the set, which is where the distribution gets its name:
End of explanation
one_set_of_two_by_five = tfd.Independent(
distribution=two_by_five_bernoulli, reinterpreted_batch_ndims=2,
name="One Set Of Two By Five")
one_set_of_two_by_five.log_prob(pattern)
Explanation: We can go even further and use Independent to create a distribution where individual events are a set of two-by-five Bernoulli's:
End of explanation
describe_sample_tensor_shapes(
[two_by_five_bernoulli,
two_sets_of_five,
one_set_of_two_by_five],
[[3, 5]])
Explanation: It's worth noting that from the perspective of sample, using Independent changes nothing:
End of explanation |
12,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Written by Bharath Ramsundar and Evan Feinberg
Copyright 2016, Stanford University
Computationally predicting molecular solubility is useful for drug-discovery. In this tutorial, we will use the deepchem library to fit a simple statistical model that predicts the solubility of drug-like compounds. The process of fitting this model involves four steps
Step3: To gain a visual understanding of compounds in our dataset, let's draw them using rdkit. We define a couple of helper functions to get started.
Step4: Now, we display some compounds from the dataset
Step5: Analyzing the distribution of solubilities shows us a nice spread of data.
Step6: With our preliminary analysis completed, we return to the original goal of constructing a predictive statistical model of molecular solubility using deepchem. The first step in creating such a molecule is translating each compound into a vectorial format that can be understood by statistical learning techniques. This process is commonly called featurization. deepchem packages a number of commonly used featurization for user convenience. In this tutorial, we will use ECPF4 fingeprints [3].
deepchem offers an object-oriented API for featurization. To get started with featurization, we first construct a Featurizer object. deepchem provides the CircularFingerprint class (a subclass of Featurizer that performs ECFP4 featurization).
Step7: Now, let's perform the actual featurization. deepchem provides the DataFeaturizer class for this purpose. The featurize() method for this class loads data from disk and uses provided Featurizerinstances to transform the provided data into feature vectors. The method constructs an instance of class FeaturizedSamples that has useful methods, such as an iterator, over the featurized data.
Step8: When constructing statistical models, it's necessary to separate the provided data into train/test subsets. The train subset is used to learn the statistical model, while the test subset is used to evaluate the learned model. In practice, it's often useful to elaborate this split further and perform a train/validation/test split. The validation set is used to perform model selection. Proposed models are evaluated on the validation-set, and the best performed model is at the end tested on the test-set.
Choosing the proper method of performing a train/validation/test split can be challenging. Standard machine learning practice is to perform a random split of the data into train/validation/test, but random splits are not well suited for the purposes of chemical informatics. For our predictive models to be useful, we require them to have predictive power in portions of chemical space beyond the set of molecules in the training data. Consequently, our models should use splits of the data that separate compounds in the training set from those in the validation and test-sets. We use Bemis-Murcko scaffolds [5] to perform this separation (all compounds that share an underlying molecular scaffold will be placed into the same split in the train/test/validation split).
Step9: Let's visually inspect some of the molecules in the separate splits to verify that they appear structurally dissimilar. The FeaturizedSamples class provides an itersamples method that lets us obtain the underlying compounds in each split.
Step10: Notice the visual distinction between the train/validation splits. The most-common scaffolds are reserved for the train split, with the rarer scaffolds allotted to validation/test.
To perform machine learning upon these datasets, we need to convert the samples into datasets suitable for machine-learning (that is, into data matrix $X \in \mathbb{R}^{n\times d}$ where $n$ is the number of samples and $d$ the dimensionality of the feature vector, and into label vector $y \in \mathbb{R}^n$). deepchem provides the Dataset class to facilitate this transformation. We simply need to instantiate separate instances of the Dataset() class, one corresponding to each split of the data. This style lends itself easily to validation-set hyperparameter searches, which we illustrate below.
Step11: The performance of common machine-learning algorithms can be very sensitive to preprocessing of the data. One common transformation applied to data is to normalize it to have zero-mean and unit-standard-deviation. We will apply this transformation to the log-solubility (as seen above, the log-solubility ranges from -12 to 2).
Step12: The next step after processing the data is to start fitting simple learning models to our data. deepchem provides a number of machine-learning model classes.
In particular, deepchem provides a convenience class, SklearnModel that wraps any machine-learning model available in scikit-learn [6]. Consequently, we will start by building a simple random-forest regressor that attempts to predict the log-solubility from our computed ECFP4 features. To train the model, we instantiate the SklearnModel object, then call the fit() method on the train_dataset we constructed above. We then save the model to disk.
Step13: We next evaluate the model on the validation set to see its predictive power. deepchem provides the Evaluator class to facilitate this process. To evaluate the constructed model object, create a new Evaluator instance and call the compute_model_performance() method.
Step14: The performance of this basic random-forest model isn't very strong. To construct stronger models, let's attempt to optimize the hyperparameters (choices made in the model-specification) to achieve better performance. For random forests, we can tweak n_estimators which controls the number of trees in the forest, and max_features which controls the number of features to consider when performing a split. We now build a series of SklearnModels with different choices for n_estimators and max_features and evaluate performance on the validation set.
Step15: The best model achieves significantly higher $R^2$ on the validation set than the first model we constructed. Now, let's perform the same sort of hyperparameter search, but with a simple deep-network instead.
Step16: Now that we have a reasonable choice of hyperparameters, let's evaluate the performance of our best models on the test-set.
Step17: Now, let's plot the predicted $R^2$ scores versus the true $R^2$ scores for the constructed model. | Python Code:
%autoreload 2
%pdb off
from deepchem.utils.save import load_from_disk
dataset_file= "../datasets/delaney-processed.csv"
dataset = load_from_disk(dataset_file)
print("Columns of dataset: %s" % str(dataset.columns.values))
print("Number of examples in dataset: %s" % str(dataset.shape[0]))
Explanation: Written by Bharath Ramsundar and Evan Feinberg
Copyright 2016, Stanford University
Computationally predicting molecular solubility is useful for drug-discovery. In this tutorial, we will use the deepchem library to fit a simple statistical model that predicts the solubility of drug-like compounds. The process of fitting this model involves four steps:
Loading a chemical dataset, consisting of a series of compounds along with aqueous solubility measurements.
Transforming each compound into a feature vector $v \in \mathbb{R}^n$ comprehensible to statistical learning methods.
Fitting a simple model that maps feature vectors to estimates of aqueous solubility.
Visualizing the results.
We need to load a dataset of estimated aqueous solubility measurements [1] into deepchem. The data is in CSV format and contains SMILES strings, predicted aqueous solubilities, and a number of extraneous (for our purposes) molecular properties. Here is an example line from the dataset:
|Compound ID|ESOL predicted log solubility in mols per litre|Minimum Degree|Molecular Weight|Number of H-Bond Donors|Number of Rings|Number of Rotatable Bonds|Polar Surface Area|measured log solubility in mols per litre|smiles|
|-----------|-----------------------------------------------|--------------|----------------|-----------------------|---------------|-------------------------|------------------|-----------------------------------------|------|
|benzothiazole|-2.733|2|135.191|0|2|0|12.89|-1.5|c2ccc1scnc1c2|
Most of these fields are not useful for our purposes. The two fields that we will need are the "smiles" field and the "measured log solubility in mols per litre". The "smiles" field holds a SMILES string [2] that specifies the compound in question. Before we load this data into deepchem, we will load the data into python and do some simple preliminary analysis to gain some intuition for the dataset.
End of explanation
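As a quick sanity check on rows like the example above, rdkit can recompute some of these molecular properties straight from the SMILES string; the following is an illustrative sketch, not part of the original workflow:
from rdkit import Chem
from rdkit.Chem import Descriptors
benzothiazole = Chem.MolFromSmiles("c2ccc1scnc1c2")
print(Descriptors.MolWt(benzothiazole))  # ~135.19, matching the "Molecular Weight" column
print(Descriptors.TPSA(benzothiazole))   # topological polar surface area, cf. "Polar Surface Area"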
import tempfile
from rdkit import Chem
from rdkit.Chem import Draw
from itertools import islice
from IPython.display import Image, HTML, display
def display_images(filenames):
    """Helper to pretty-print images."""
imagesList=''.join(
["<img style='width: 140px; margin: 0px; float: left; border: 1px solid black;' src='%s' />"
% str(s) for s in sorted(filenames)])
display(HTML(imagesList))
def mols_to_pngs(mols, basename="test"):
    """Helper to write RDKit mols to png files."""
filenames = []
for i, mol in enumerate(mols):
filename = "%s%d.png" % (basename, i)
Draw.MolToFile(mol, filename)
filenames.append(filename)
return filenames
Explanation: To gain a visual understanding of compounds in our dataset, let's draw them using rdkit. We define a couple of helper functions to get started.
End of explanation
num_to_display = 14
molecules = []
for _, data in islice(dataset.iterrows(), num_to_display):
molecules.append(Chem.MolFromSmiles(data["smiles"]))
display_images(mols_to_pngs(molecules))
Explanation: Now, we display some compounds from the dataset:
End of explanation
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
solubilities = np.array(dataset["measured log solubility in mols per litre"])
n, bins, patches = plt.hist(solubilities, 50, facecolor='green', alpha=0.75)
plt.xlabel('Measured log-solubility in mols/liter')
plt.ylabel('Number of compounds')
plt.title(r'Histogram of solubilities')
plt.grid(True)
plt.show()
Explanation: Analyzing the distribution of solubilities shows us a nice spread of data.
End of explanation
from deepchem.featurizers.fingerprints import CircularFingerprint
featurizers = [CircularFingerprint(size=1024)]
Explanation: With our preliminary analysis completed, we return to the original goal of constructing a predictive statistical model of molecular solubility using deepchem. The first step in creating such a model is translating each compound into a vectorial format that can be understood by statistical learning techniques. This process is commonly called featurization. deepchem packages a number of commonly used featurizations for user convenience. In this tutorial, we will use ECFP4 fingerprints [3].
deepchem offers an object-oriented API for featurization. To get started with featurization, we first construct a Featurizer object. deepchem provides the CircularFingerprint class (a subclass of Featurizer that performs ECFP4 featurization).
End of explanation
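For reference, ECFP4 corresponds to a Morgan fingerprint of radius 2; the sketch below shows the same kind of bit vector computed with rdkit directly, independent of deepchem's wrapper (illustrative only):
from rdkit import Chem
from rdkit.Chem import AllChem
example_mol = Chem.MolFromSmiles("c2ccc1scnc1c2")  # benzothiazole, from the dataset above
bit_vector = AllChem.GetMorganFingerprintAsBitVect(example_mol, 2, nBits=1024)
print(bit_vector.GetNumOnBits())  # number of set bits out of 1024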
import tempfile, shutil
from deepchem.featurizers.featurize import DataFeaturizer
#Make directories to store the raw and featurized datasets.
feature_dir = tempfile.mkdtemp()
samples_dir = tempfile.mkdtemp()
featurizer = DataFeaturizer(tasks=["measured log solubility in mols per litre"],
smiles_field="smiles",
compound_featurizers=featurizers)
featurized_samples = featurizer.featurize(dataset_file, feature_dir, samples_dir)
Explanation: Now, let's perform the actual featurization. deepchem provides the DataFeaturizer class for this purpose. The featurize() method for this class loads data from disk and uses the provided Featurizer instances to transform the provided data into feature vectors. The method constructs an instance of class FeaturizedSamples that has useful methods, such as an iterator, over the featurized data.
End of explanation
splittype = "scaffold"
train_dir = tempfile.mkdtemp()
valid_dir = tempfile.mkdtemp()
test_dir = tempfile.mkdtemp()
train_samples, valid_samples, test_samples = featurized_samples.train_valid_test_split(
splittype, train_dir, valid_dir, test_dir)
Explanation: When constructing statistical models, it's necessary to separate the provided data into train/test subsets. The train subset is used to learn the statistical model, while the test subset is used to evaluate the learned model. In practice, it's often useful to elaborate this split further and perform a train/validation/test split. The validation set is used to perform model selection. Proposed models are evaluated on the validation-set, and the best-performing model is then tested on the test-set.
Choosing the proper method of performing a train/validation/test split can be challenging. Standard machine learning practice is to perform a random split of the data into train/validation/test, but random splits are not well suited for the purposes of chemical informatics. For our predictive models to be useful, we require them to have predictive power in portions of chemical space beyond the set of molecules in the training data. Consequently, our models should use splits of the data that separate compounds in the training set from those in the validation and test-sets. We use Bemis-Murcko scaffolds [5] to perform this separation (all compounds that share an underlying molecular scaffold will be placed into the same split in the train/test/validation split).
End of explanation
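For intuition about what "scaffold" means here, rdkit can compute a Bemis-Murcko scaffold for any molecule; a small illustrative sketch, separate from the splitting machinery above:
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold
aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
scaffold = MurckoScaffold.GetScaffoldForMol(aspirin)
print(Chem.MolToSmiles(scaffold))  # the ring framework ("c1ccccc1") shared by related compounds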
train_mols = [Chem.MolFromSmiles(str(compound["smiles"]))
for compound in islice(train_samples.itersamples(), num_to_display)]
display_images(mols_to_pngs(train_mols, basename="train"))
valid_mols = [Chem.MolFromSmiles(str(compound["smiles"]))
for compound in islice(valid_samples.itersamples(), num_to_display)]
display_images(mols_to_pngs(valid_mols, basename="valid"))
Explanation: Let's visually inspect some of the molecules in the separate splits to verify that they appear structurally dissimilar. The FeaturizedSamples class provides an itersamples method that lets us obtain the underlying compounds in each split.
End of explanation
from deepchem.utils.dataset import Dataset
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=featurizers, tasks=["measured log solubility in mols per litre"])
valid_dataset = Dataset(data_dir=valid_dir, samples=valid_samples,
featurizers=featurizers, tasks=["measured log solubility in mols per litre"])
test_dataset = Dataset(data_dir=test_dir, samples=test_samples,
featurizers=featurizers, tasks=["measured log solubility in mols per litre"])
Explanation: Notice the visual distinction between the train/validation splits. The most-common scaffolds are reserved for the train split, with the rarer scaffolds allotted to validation/test.
To perform machine learning upon these datasets, we need to convert the samples into datasets suitable for machine-learning (that is, into data matrix $X \in \mathbb{R}^{n\times d}$ where $n$ is the number of samples and $d$ the dimensionality of the feature vector, and into label vector $y \in \mathbb{R}^n$). deepchem provides the Dataset class to facilitate this transformation. We simply need to instantiate separate instances of the Dataset() class, one corresponding to each split of the data. This style lends itself easily to validation-set hyperparameter searches, which we illustrate below.
End of explanation
from deepchem.transformers import NormalizationTransformer
input_transformers = []
output_transformers = [NormalizationTransformer(transform_y=True, dataset=train_dataset)]
transformers = input_transformers + output_transformers
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(valid_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
Explanation: The performance of common machine-learning algorithms can be very sensitive to preprocessing of the data. One common transformation applied to data is to normalize it to have zero-mean and unit-standard-deviation. We will apply this transformation to the log-solubility (as seen above, the log-solubility ranges from -12 to 2).
End of explanation
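For reference, the underlying transform is plain standardization of the label column; a minimal numpy sketch of the same idea follows (the deepchem transformer additionally knows how to undo the transform on predictions):
y = np.array(dataset["measured log solubility in mols per litre"])
y_standardized = (y - y.mean()) / y.std()
print(y_standardized.mean(), y_standardized.std())  # approximately 0 and 1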
from sklearn.ensemble import RandomForestRegressor
from deepchem.models.standard import SklearnModel
model_dir = tempfile.mkdtemp()
task_types = {"measured log solubility in mols per litre": "regression"}
model_params = {"data_shape": train_dataset.get_data_shape()}
model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor())
model.fit(train_dataset)
model.save(model_dir)
shutil.rmtree(model_dir)
Explanation: The next step after processing the data is to start fitting simple learning models to our data. deepchem provides a number of machine-learning model classes.
In particular, deepchem provides a convenience class, SklearnModel that wraps any machine-learning model available in scikit-learn [6]. Consequently, we will start by building a simple random-forest regressor that attempts to predict the log-solubility from our computed ECFP4 features. To train the model, we instantiate the SklearnModel object, then call the fit() method on the train_dataset we constructed above. We then save the model to disk.
End of explanation
from deepchem.utils.evaluate import Evaluator
valid_csv_out = tempfile.NamedTemporaryFile()
valid_stats_out = tempfile.NamedTemporaryFile()
evaluator = Evaluator(model, valid_dataset, output_transformers)
df, r2score = evaluator.compute_model_performance(
valid_csv_out, valid_stats_out)
print(r2score)
Explanation: We next evaluate the model on the validation set to see its predictive power. deepchem provides the Evaluator class to facilitate this process. To evaluate the constructed model object, create a new Evaluator instance and call the compute_model_performance() method.
End of explanation
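The r2_score reported here is the ordinary coefficient of determination; assuming the returned dataframe uses the same column layout as the test dataframes at the end of this tutorial, it can be recomputed directly as an illustrative cross-check:
from sklearn.metrics import r2_score
task = "measured log solubility in mols per litre"
print(r2_score(df[task], df[task + "_pred"]))  # roughly the same quantity as the r2_score above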
import itertools
n_estimators_list = [100]
max_features_list = ["auto", "sqrt", "log2", None]
hyperparameters = [n_estimators_list, max_features_list]
best_validation_score = -np.inf
best_hyperparams = None
best_model, best_model_dir = None, None
for hyperparameter_tuple in itertools.product(*hyperparameters):
n_estimators, max_features = hyperparameter_tuple
model_dir = tempfile.mkdtemp()
model = SklearnModel(
task_types, model_params,
model_instance=RandomForestRegressor(n_estimators=n_estimators,
max_features=max_features))
model.fit(train_dataset)
model.save(model_dir)
evaluator = Evaluator(model, valid_dataset, output_transformers)
df, r2score = evaluator.compute_model_performance(
valid_csv_out, valid_stats_out)
valid_r2_score = r2score.iloc[0]["r2_score"]
print("n_estimators %d, max_features %s => Validation set R^2 %f" %
(n_estimators, str(max_features), valid_r2_score))
if valid_r2_score > best_validation_score:
best_validation_score = valid_r2_score
best_hyperparams = hyperparameter_tuple
if best_model_dir is not None:
shutil.rmtree(best_model_dir)
best_model_dir = model_dir
best_model = model
else:
shutil.rmtree(model_dir)
print("Best hyperparameters: %s" % str(best_hyperparams))
best_rf_hyperparams = best_hyperparams
best_rf = best_model
Explanation: The performance of this basic random-forest model isn't very strong. To construct stronger models, let's attempt to optimize the hyperparameters (choices made in the model-specification) to achieve better performance. For random forests, we can tweak n_estimators which controls the number of trees in the forest, and max_features which controls the number of features to consider when performing a split. We now build a series of SklearnModels with different choices for n_estimators and max_features and evaluate performance on the validation set.
End of explanation
from deepchem.models.deep import SingleTaskDNN
import numpy.random
model_params = {"activation": "relu",
"dropout": 0.5,
"momentum": .9, "nesterov": True,
"decay": 1e-4, "batch_size": 5,
"nb_epoch": 10,
"init": "glorot_uniform",
"data_shape": train_dataset.get_data_shape()}
lr_list = np.power(10., np.random.uniform(-5, -1, size=1))
nb_hidden_list = [100]
nb_epoch_list = [10]
nesterov_list = [False]
dropout_list = [.25]
nb_layers_list = [1]
batchnorm_list = [False]
hyperparameters = [lr_list, nb_layers_list, nb_hidden_list, nb_epoch_list, nesterov_list, dropout_list, batchnorm_list]
best_validation_score = -np.inf
best_hyperparams = None
best_model, best_model_dir = None, None
for hyperparameter_tuple in itertools.product(*hyperparameters):
print("Testing %s" % str(hyperparameter_tuple))
lr, nb_layers, nb_hidden, nb_epoch, nesterov, dropout, batchnorm = hyperparameter_tuple
model_params["nb_hidden"] = nb_hidden
model_params["nb_layers"] = nb_layers
model_params["learning_rate"] = lr
model_params["nb_epoch"] = nb_epoch
model_params["nesterov"] = nesterov
model_params["dropout"] = dropout
model_params["batchnorm"] = batchnorm
model_dir = tempfile.mkdtemp()
model = SingleTaskDNN(task_types, model_params)
model.fit(train_dataset)
model.save(model_dir)
evaluator = Evaluator(model, valid_dataset, output_transformers)
df, r2score = evaluator.compute_model_performance(
valid_csv_out, valid_stats_out)
valid_r2_score = r2score.iloc[0]["r2_score"]
print("learning_rate %f, nb_hidden %d, nb_epoch %d, nesterov %s, dropout %f => Validation set R^2 %f" %
(lr, nb_hidden, nb_epoch, str(nesterov), dropout, valid_r2_score))
if valid_r2_score > best_validation_score:
best_validation_score = valid_r2_score
best_hyperparams = hyperparameter_tuple
if best_model_dir is not None:
shutil.rmtree(best_model_dir)
best_model_dir = model_dir
best_model = model
else:
shutil.rmtree(model_dir)
print("Best hyperparameters: %s" % str(best_hyperparams))
print("best_validation_score: %f" % best_validation_score)
best_dnn = best_model
Explanation: The best model achieves significantly higher $R^2$ on the validation set than the first model we constructed. Now, let's perform the same sort of hyperparameter search, but with a simple deep-network instead.
End of explanation
rf_test_csv_out = tempfile.NamedTemporaryFile()
rf_test_stats_out = tempfile.NamedTemporaryFile()
rf_test_evaluator = Evaluator(best_rf, test_dataset, output_transformers)
rf_test_df, rf_test_r2score = rf_test_evaluator.compute_model_performance(
rf_test_csv_out, rf_test_stats_out)
rf_test_r2_score = rf_test_r2score.iloc[0]["r2_score"]
print("RF Test set R^2 %f" % (rf_test_r2_score))
dnn_test_csv_out = tempfile.NamedTemporaryFile()
dnn_test_stats_out = tempfile.NamedTemporaryFile()
dnn_test_evaluator = Evaluator(best_dnn, test_dataset, output_transformers)
dnn_test_df, dnn_test_r2score = dnn_test_evaluator.compute_model_performance(
dnn_test_csv_out, dnn_test_stats_out)
dnn_test_r2_score = dnn_test_r2score.iloc[0]["r2_score"]
print("DNN Test set R^2 %f" % (dnn_test_r2_score))
Explanation: Now that we have a reasonable choice of hyperparameters, let's evaluate the performance of our best models on the test-set.
End of explanation
task = "measured log solubility in mols per litre"
rf_predicted_test = np.array(rf_test_df[task + "_pred"])
rf_true_test = np.array(rf_test_df[task])
plt.scatter(rf_predicted_test, rf_true_test)
plt.xlabel('Predicted log-solubility in mols/liter')
plt.ylabel('True log-solubility in mols/liter')
plt.title(r'RF- predicted vs. true log-solubilities')
plt.show()
task = "measured log solubility in mols per litre"
predicted_test = np.array(dnn_test_df[task + "_pred"])
true_test = np.array(dnn_test_df[task])
plt.scatter(predicted_test, true_test)
plt.xlabel('Predicted log-solubility in mols/liter')
plt.ylabel('True log-solubility in mols/liter')
plt.title(r'DNN predicted vs. true log-solubilities')
plt.show()
Explanation: Now, let's plot the predicted log-solubilities versus the true log-solubilities for the constructed models.
End of explanation |