13,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quickstart
This notebook was made with the following version of emcee
Step1: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by
Step2: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob
Step3: It is important that the first argument of the probability function is
the position of a single "walker" (an N-dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our
Step4: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component
Step5: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
Step6: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as
Step7: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0
Step8: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called pos. You can check out what will be
contained in the other output variables by looking at the documentation for
the
Step9: The samples can be accessed using the
Step10: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
Step11: and the integrated autocorrelation time (see the | Python Code:
import emcee
emcee.__version__
Explanation: Quickstart
This notebook was made with the following version of emcee:
End of explanation
import numpy as np
Explanation: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by:
$$
p(\vec{x}) \propto \exp \left [ - \frac{1}{2} (\vec{x} -
\vec{\mu})^\mathrm{T} \, \Sigma ^{-1} \, (\vec{x} - \vec{\mu})
\right ]
$$
where $\vec{\mu}$ is an $N$-dimensional vector position of the mean of the density and $\Sigma$ is the square N-by-N covariance matrix.
The first thing that we need to do is import the necessary modules:
End of explanation
def log_prob(x, mu, cov):
diff = x - mu
return -0.5*np.dot(diff, np.linalg.solve(cov,diff))
Explanation: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:
End of explanation
ndim = 5
np.random.seed(42)
means = np.random.rand(ndim)
cov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov,cov)
Explanation: It is important that the first argument of the probability function is
the position of a single "walker" (an N-dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our :class:EnsembleSampler that we'll see soon.
Now, we'll set up the specific values of those "hyperparameters" in 5
dimensions:
End of explanation
nwalkers = 32
p0 = np.random.rand(nwalkers, ndim)
Explanation: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component:
End of explanation
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])
Explanation: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
:class:EnsembleSampler object so let's get ourselves one of those:
End of explanation
log_prob(p0[0], means, cov)
Explanation: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as:
End of explanation
pos, prob, state = sampler.run_mcmc(p0, 100)
sampler.reset()
Explanation: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0:
End of explanation
sampler.run_mcmc(pos, 10000);
Explanation: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called pos. You can check out what will be
contained in the other output variables by looking at the documentation for
the :func:EnsembleSampler.run_mcmc function. The call to the
:func:EnsembleSampler.reset method clears all of the important bookkeeping
parameters in the sampler so that we get a fresh start. It also clears the
current positions of the walkers so it's a good thing that we saved them
first.
Now, we can do our production run of 10000 steps:
End of explanation
import matplotlib.pyplot as plt
samples = sampler.get_chain(flat=True)
plt.hist(samples[:, 0], 100, color="k", histtype="step")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$p(\theta_1)$")
plt.gca().set_yticks([]);
Explanation: The samples can be accessed using the :func:EnsembleSampler.get_chain method.
This will return an array
with the shape (10000, 32, 5) giving the parameter values for each walker
at each step in the chain.
Take note of that shape and make sure that you know where each of those numbers comes from.
You can make histograms of these samples to get an estimate of the density that you were sampling:
End of explanation
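For example, to convince yourself where each of those numbers comes from, compare the unflattened and flattened chains (a small sketch; it assumes the sampler from the cells above is still in scope):
chain = sampler.get_chain()                   # shape (10000, 32, 5): (steps, walkers, parameters)
flat_samples = sampler.get_chain(flat=True)   # shape (320000, 5): all walkers stacked together
print(chain.shape, flat_samples.shape)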
print("Mean acceptance fraction: {0:.3f}"
.format(np.mean(sampler.acceptance_fraction)))
Explanation: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
:func:EnsembleSampler.acceptance_fraction property:
End of explanation
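Because acceptance_fraction is reported per walker, the spread across the ensemble can be as informative as the mean; a small sketch using the same sampler:
af = sampler.acceptance_fraction  # one value per walker
print("Acceptance fraction ranges from {0:.3f} to {1:.3f}".format(af.min(), af.max()))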
print("Mean autocorrelation time: {0:.3f} steps"
.format(np.mean(sampler.get_autocorr_time())))
Explanation: and the integrated autocorrelation time (see the :ref:autocorr tutorial for more details)
End of explanation |
13,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip
Step1: 2. What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4. What's the difference between the high and low temperatures for today?
Step4: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6.Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000? | Python Code:
#https://api.forecast.io/forecast/APIKEY/LATITUDE,LONGITUDE,TIME
import requests
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/12.971599,77.594563')
data = response.json()
#print(data)
#print(data.keys())
print("Bangalore is in", data['timezone'], "timezone")
timezone_find = data.keys()
#find representation
Explanation: 1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
print("The current windspeed at New York is", data['currently']['windSpeed'])
#print(data['currently']) - find how much warmer
Explanation: 2. What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
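The second half of the question (how much warmer it feels than it actually is) can be answered from the same response; a sketch, assuming the Forecast.io/Dark Sky payload includes the usual temperature and apparentTemperature fields in the currently block:
currently = data['currently']
feels_warmer_by = currently['apparentTemperature'] - currently['temperature']
print("It feels about", round(feels_warmer_by, 1), "degrees warmer than it actually is")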
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
#print(data['daily']['data'])
now_moon = data['daily']['data']
for i in now_moon:
print("The visibility of moon today in New York is", i['moonPhase'])
Explanation: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
TemMax = data['daily']['data']
for i in TemMax:
tem_diff = i['temperatureMax'] - i['temperatureMin']
print("The temparature difference for today approximately is", round(tem_diff))
Explanation: 4. What's the difference between the high and low temperatures for today?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941')
data = response.json()
temp = data['daily']['data']
#print(temp)
count = 0
for i in temp:
count = count+1
print("The high temperature for the day", count, "is", i['temperatureMax'], "and the low temperature is", i['temperatureMin'])
if float(i['temperatureMin']) < 40:
print("it's a cold weather")
elif (float(i['temperatureMin']) > 40) & (float(i['temperatureMin']) < 60):
print("It's a warm day!")
else:
print("It's very hot weather")
Explanation: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/25.761680,-80.191790, 2016-06-09T12:01:00-0400')
data = response.json()
#print(data['hourly']['data'])
Tem = data['hourly']['data']
count = 0
for i in Tem:
count = count +1
print("The temperature in Miami, Florida on 9th June in the", count, "hour is", i['temperature'])
if float(i['cloudCover']) > 0.5:
print("and is cloudy")
Explanation: 6.Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1980-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1980 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1990-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1990 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 2000-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 2000 was", Temp)
Explanation: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
End of explanation |
13,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 2
Imports
Step1: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 2
Imports
End of explanation
!head -n 30 open_exoplanet_catalogue.txt
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',')
assert data.shape==(1993,24)
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
plt.hist(data[:,2], bins=24, range=(0,12))
plt.xlabel('M_JUP')
plt.ylabel('Number of Planets')
assert True # leave for grading
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
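A hedged sketch of the kind of customization the exercise asks for, reusing the column and bin choices from the cell above and stripping non-data ink with plain Matplotlib calls:
fig, ax = plt.subplots()
ax.hist(data[:, 2], bins=24, range=(0, 12), histtype='step', color='k')
ax.set_xlabel('Planet mass ($M_{Jup}$)')
ax.set_ylabel('Number of planets')
ax.spines['top'].set_visible(False)    # remove box lines that carry no data
ax.spines['right'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')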
plt.scatter(data[:,5],data[:,6])
plt.xscale('symlog', subsx=[1,2,3,4,5])
plt.xlim(0,1)
plt.xlabel('Semi-major axis (AU)')
plt.ylim(0,1)
plt.ylabel('Eccentricity')
assert True # leave for grading
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
13,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
===================================================================
Support Vector Regression (SVR) using linear and non-linear kernels
===================================================================
Toy example of 1D regression using linear, polynomial and RBF kernels.
Step1: Generate sample data
Step2: Add noise to targets
Step3: Fit regression model
Step4: look at the results | Python Code:
print(__doc__)
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
Explanation: ===================================================================
Support Vector Regression (SVR) using linear and non-linear kernels
===================================================================
Toy example of 1D regression using linear, polynomial and RBF kernels.
End of explanation
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
Explanation: Generate sample data
End of explanation
y[::5] += 3 * (0.5 - np.random.rand(8))
Explanation: Add noise to targets
End of explanation
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
Explanation: Fit regression model
End of explanation
lw = 2
plt.scatter(X, y, color='darkorange', label='data')
# plt.hold('on')  # deprecated and removed in Matplotlib 3.x; overlaying plots is the default behaviour
plt.plot(X, y_rbf, color='navy', lw=lw, label='RBF model')
plt.plot(X, y_lin, color='c', lw=lw, label='Linear model')
plt.plot(X, y_poly, color='cornflowerblue', lw=lw, label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
Explanation: look at the results
End of explanation |
13,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stock Prices
Step1: Ridge as Linear Regressor | Python Code:
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
import os
data = np.loadtxt(fname = 'data.txt', delimiter = ',')
X, y = data[:,:5], data[:,5]
print("Features sample: {}".format(X[1]))
print("Result: {}".format(y[1]))
m = X.shape[0] #number of samples
#training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print("Train shape: {}".format(X_train.shape))
print("Test shape: {}".format(X_test.shape))
Explanation: Stock Prices
End of explanation
clf = RidgeCV(alphas = [0.1, 1.0, 10.0], normalize=True)
clf.fit(X_train, y_train)
#predict
prediction = clf.predict(X_test);
print("Expected is: {}".format(y_test[0]))
print("Prediction is: {}".format(prediction[0]))
print("Score: {}".format(clf.score(X_test, y_test)))
print("Alpha: {}".format(clf.alpha_))
#plotting all data
plt.figure(1)
real, = plt.plot(np.arange(m), y, 'b-', label='real')
predicted, = plt.plot(np.arange(m), clf.predict(X), 'r-', label='predicted')
plt.ylabel('Stock')
plt.xlabel('Time')
plt.legend([real, predicted], ['Real', 'Predicted'])
plt.show()
#plotting only test
mtest = X_test.shape[0]
real, = plt.plot(np.arange(mtest), y_test, 'b-', label='real')
test, = plt.plot(np.arange(mtest), clf.predict(X_test), 'g-', label='test')
plt.ylabel('Stock')
plt.xlabel('Time')
plt.legend([real, test], ['Real', 'Test'])
plt.show()
Explanation: Ridge as Linear Regressor
End of explanation |
13,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GLM
Step1: Generating data
Create some toy data to play around with and scatter-plot it.
Essentially we are creating a regression line defined by intercept and slope and add data points by sampling from a Normal with the mean set to the regression line.
Step2: Estimating the model
Lets fit a Bayesian linear regression model to this data. As you can see, model specifications in PyMC3 are wrapped in a with statement.
Here we use the awesome new NUTS sampler (our Inference Button) to draw 2000 posterior samples.
Step3: This should be fairly readable for people who know probabilistic programming. However, would my non-statistican friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link-functions would also make this much more complex and error prone.
The new glm() function instead takes a Patsy linear model specifier from which it creates a design matrix. glm() then adds random variables for each of the coefficients and an appopriate likelihood to the model.
Step4: Much shorter, but this code does the exact same thing as the above model specification (you can change priors and everything else too if we wanted). glm() parses the Patsy model string, adds random variables for each regressor (Intercept and slope x in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (sigma). Finally, glm() then initializes the parameters to a good starting point by estimating a frequentist linear model using statsmodels.
If you are not familiar with R's syntax, 'y ~ x' specifies that we have an output variable y that we want to estimate as a linear function of x.
Analyzing the model
Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Lets plot the posterior distribution of our parameters and the individual samples we drew.
Step5: The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is.
There are a couple of things to see here. The first is that our sampling chains for the individual parameters (left side) seem well converged and stationary (there are no large drifts or other odd patterns).
Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (x is the regression coefficient and sigma is the standard deviation of our normal).
In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the plot_posterior_predictive_glm() convenience function for this. | Python Code:
%matplotlib inline
from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
Explanation: GLM: Linear regression
Author: Thomas Wiecki
This tutorial is adapted from a blog post by Thomas Wiecki called "The Inference Button: Bayesian GLMs made easy with PyMC3".
This tutorial appeared as a post in a small series on Bayesian GLMs on my blog:
The Inference Button: Bayesian GLMs made easy with PyMC3
This world is far from Normal(ly distributed): Robust Regression in PyMC3
The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3
In this blog post I will talk about:
How the Bayesian Revolution in many scientific disciplines is hindered by poor usability of current Probabilistic Programming languages.
A gentle introduction to Bayesian linear regression and how it differs from the frequentist approach.
A preview of PyMC3 (currently in alpha) and its new GLM submodule I wrote to allow creation and estimation of Bayesian GLMs as easy as frequentist GLMs in R.
Ready? Lets get started!
There is a huge paradigm shift underway in many scientific disciplines: The Bayesian Revolution.
While the theoretical benefits of Bayesian over Frequentist stats have been discussed at length elsewhere (see Further Reading below), there is a major obstacle that hinders wider adoption -- usability (this is one of the reasons DARPA wrote out a huge grant to improve Probabilistic Programming).
This is mildly ironic because the beauty of Bayesian statistics is their generality. Frequentist stats have a bazillion different tests for every different scenario. In Bayesian land you define your model exactly as you think is appropriate and hit the Inference Button(TM) (i.e. running the magical MCMC sampling algorithm).
Yet when I ask my colleagues why they use frequentist stats (even though they would like to use Bayesian stats) the answer is that software packages like SPSS or R make it very easy to run all those individual tests with a single command (and more often than not, they don't know the exact model and inference method being used).
While there are great Bayesian software packages like JAGS, BUGS, Stan and PyMC, they are written for Bayesian statisticians who know very well what model they want to build.
Unfortunately, "the vast majority of statistical analysis is not performed by statisticians" -- so what we really need are tools for scientists and not for statisticians.
In the interest of putting my code where my mouth is I wrote a submodule for the upcoming PyMC3 that makes construction of Bayesian Generalized Linear Models (GLMs) as easy as Frequentist ones in R.
Linear Regression
While future blog posts will explore more complex models, I will start here with the simplest GLM -- linear regression.
In general, frequentists think about Linear Regression as follows:
$$ Y = X\beta + \epsilon $$
where $Y$ is the output we want to predict (or dependent variable), $X$ is our predictor (or independent variable), and $\beta$ are the coefficients (or parameters) of the model we want to estimate. $\epsilon$ is an error term which is assumed to be normally distributed.
We can then use Ordinary Least Squares or Maximum Likelihood to find the best fitting $\beta$.
Probabilistic Reformulation
Bayesians take a probabilistic view of the world and express this model in terms of probability distributions. Our above linear regression can be rewritten to yield:
$$ Y \sim \mathcal{N}(X \beta, \sigma^2) $$
In words, we view $Y$ as a random variable (or random vector) of which each element (data point) is distributed according to a Normal distribution. The mean of this normal distribution is provided by our linear predictor with variance $\sigma^2$.
While this is essentially the same model, there are two critical advantages of Bayesian estimation:
Priors: We can quantify any prior knowledge we might have by placing priors on the parameters. For example, if we think that $\sigma$ is likely to be small we would choose a prior with more probability mass on low values.
Quantifying uncertainty: We do not get a single estimate of $\beta$ as above but instead a complete posterior distribution about how likely different values of $\beta$ are. For example, with few data points our uncertainty in $\beta$ will be very high and we'd be getting very wide posteriors.
Bayesian GLMs in PyMC3
With the new GLM module in PyMC3 it is very easy to build this and much more complex models.
First, lets import the required modules.
End of explanation
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
data = dict(x=x, y=y)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
ax.plot(x, y, 'x', label='sampled data')
ax.plot(x, true_regression_line, label='true regression line', lw=2.)
plt.legend(loc=0);
Explanation: Generating data
Create some toy data to play around with and scatter-plot it.
Essentially we are creating a regression line defined by intercept and slope and add data points by sampling from a Normal with the mean set to the regression line.
End of explanation
with Model() as model: # model specifications in PyMC3 are wrapped in a with-statement
# Define priors
sigma = HalfCauchy('sigma', beta=10, testval=1.)
intercept = Normal('Intercept', 0, sd=20)
x_coeff = Normal('x', 0, sd=20)
# Define likelihood
likelihood = Normal('y', mu=intercept + x_coeff * x,
sd=sigma, observed=y)
# Inference!
trace = sample(3000, njobs=2) # draw 3000 posterior samples using NUTS sampling
Explanation: Estimating the model
Lets fit a Bayesian linear regression model to this data. As you can see, model specifications in PyMC3 are wrapped in a with statement.
Here we use the awesome new NUTS sampler (our Inference Button) to draw 2000 posterior samples.
End of explanation
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# and all its parameters are automatically added to our model.
glm.GLM.from_formula('y ~ x', data)
trace = sample(3000, njobs=2) # draw 3000 posterior samples using NUTS sampling
Explanation: This should be fairly readable for people who know probabilistic programming. However, would my non-statistician friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link-functions would also make this much more complex and error prone.
The new glm() function instead takes a Patsy linear model specifier from which it creates a design matrix. glm() then adds random variables for each of the coefficients and an appropriate likelihood to the model.
End of explanation
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
Explanation: Much shorter, but this code does the exact same thing as the above model specification (you can change priors and everything else too if we wanted). glm() parses the Patsy model string, adds random variables for each regressor (Intercept and slope x in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (sigma). Finally, glm() then initializes the parameters to a good starting point by estimating a frequentist linear model using statsmodels.
If you are not familiar with R's syntax, 'y ~ x' specifies that we have an output variable y that we want to estimate as a linear function of x.
Analyzing the model
Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Let's plot the posterior distribution of our parameters and the individual samples we drew.
End of explanation
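Beyond the trace plot, it can be useful to summarize the marginal posteriors numerically; a minimal sketch, assuming the variable names that glm() assigns as described above ('Intercept' and 'x'):
for name in ['Intercept', 'x']:
    low, mid, high = np.percentile(trace[name], [2.5, 50, 97.5])
    print("{0}: {1:.3f} (95% interval {2:.3f} to {3:.3f})".format(name, mid, low, high))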
plt.figure(figsize=(7, 7))
plt.plot(x, y, 'x', label='data')
plot_posterior_predictive_glm(trace, samples=100,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y')
plt.title('Posterior predictive regression lines')
plt.legend(loc=0)
plt.xlabel('x')
plt.ylabel('y');
Explanation: The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is.
There are a couple of things to see here. The first is that our sampling chains for the individual parameters (left side) seem well converged and stationary (there are no large drifts or other odd patterns).
Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (x is the regression coefficient and sigma is the standard deviation of our normal).
In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the plot_posterior_predictive_glm() convenience function for this.
End of explanation |
13,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Network Exploration with pynucastro
This notebook shows off the interactive RateCollection network plot.
You must have widgets enabled.
Jupyter notebook
Step1: This collection of rates has the main CNO rates plus a breakout rate into the hot CNO cycle
Step2: To evaluate the rates, we need a composition. This is defined using a list of Nuceli objects.
Step3: Interactive exploration is enabled through the Explorer class, which takes a RateCollection and a Composition | Python Code:
%matplotlib inline
import pynucastro as pyrl
Explanation: Interactive Network Exploration with pynucastro
This notebook shows off the interactive RateCollection network plot.
You must have widgets enabled.
Jupyter notebook:
jupyter nbextension enable --py --user widgetsnbextension
for a user install or
jupyter nbextension enable --py --sys-prefix widgetsnbextension
for a system-wide installation
Jupyter lab:
jupyter labextension install @jupyter-widgets/jupyterlab-manager
End of explanation
files = ["c12-pg-n13-ls09",
"c13-pg-n14-nacr",
"n13--c13-wc12",
"n13-pg-o14-lg06",
"n14-pg-o15-im05",
"n15-pa-c12-nacr",
"o14--n14-wc12",
"o15--n15-wc12",
"o14-ap-f17-Ha96c",
"f17-pg-ne18-cb09",
"ne18--f18-wc12",
"f18-pa-o15-il10"]
rc = pyrl.RateCollection(files)
Explanation: This collection of rates has the main CNO rates plus a breakout rate into the hot CNO cycle
End of explanation
comp = pyrl.Composition(rc.get_nuclei())
comp.set_solar_like()
Explanation: To evaluate the rates, we need a composition. This is defined using a list of Nuclei objects.
End of explanation
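If widgets are not available, it may also be possible to get a static picture of the network first; a hedged sketch, assuming this version of pynucastro exposes a RateCollection.plot() method (the interactive Explorer below remains the main interface):
rc.plot()  # assumed static network plot of the RateCollection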
re = pyrl.Explorer(rc, comp)
re.explore()
Explanation: Interactive exploration is enabled through the Explorer class, which takes a RateCollection and a Composition
End of explanation |
13,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate terminated results Exiobase v.3.3.11b1 exc. iLUC, electricity markets and social extensions
Investments are not integrated in the MR_HIOT table. They are accounted for in the Final Demand activities.
This tutorial is divided into 4 sections.
1. Extract numbers from Excel files
2. Replace 0s with 1s in norm0 file
3. Read all the csv files as matrices
4. Run operations
1. Extract numbers from Excel files
Give the name and location of the excel file containing the HIOT and the FD tables.
Step1: From MR_HIOT_2011_v3.3.11.xlsx (MR_HIOT.csv & FD.csv), we create
Step2: Give the name of the excel file from which the extensions should be extracted
The file from Stefano cannot be used as it is.
Water extensions should be corrected, biogenic carbon relocated, biogenic methane recalculated and land occupation flows summed up.
Step3: We create Bn_tonorm.csv including the extensions for the 7872 producing activities
Step4: We create FD_ext.csv including the extensions for the 288 Final Demand activities.
Step5: 2. Replace 0s with 1s in norm0 file
Replace 0s with 1s in norm.csv (matrices can't be divided by 0)
Step6: 3. Read CSV files as matrices
To make operations with the numpy package, read the following files extracted previously
Step7: 4. Run operations
To obtain Zn and Bn, Zn_tonorm and Bn_tonorm need to be divided by the norm vector.
Step8: We create the identity matrix | Python Code:
HIOT_FD = "/Users/marie/Desktop/MR_HIOT_2011_v3.3.11.xlsx"
import pandas as pd
import csv
### MR-HIOT.csv is created because the excel is too heavy
data_xls = pd.read_excel(HIOT_FD, 'HIOT', index_col=None)
data_xls.to_csv('MR_HIOT.csv', encoding='utf-8')
### FD.csv is created because the excel is too heavy
data_xls = pd.read_excel(HIOT_FD, 'FD', index_col=None, header = None)
data_xls.to_csv('FD.csv', encoding='utf-8')
Explanation: Calculate terminated results Exiobase v.3.3.11b1 exc. iLUC, electricity markets and social extensions
Investments are not integrated in the MR_HIOT table. They are accounted for in the Final Demand activities.
This tutorial is divided in 4 sections.
1. Extract numbers from Excel files
2. Replace 0s with 1s in norm0 file
3. Read all the csv files as matrices
4. Run operations
1. Extract numbers from Excel files
Give the name and location of the excel file containing the HIOT and the FD tables.
End of explanation
outfile1 ="/Users/marie/Desktop/Zn_tonorm.csv"
source = pd.read_csv('MR_HIOT.csv', index_col = None, header = None, low_memory = False)
Zn_tonorm = source.iloc[7:7879, 5:7877]
Zn_tonorm.to_csv(outfile1, header = None, index = None)
outfile2 ="/Users/marie/Desktop/norm.csv"
norm = source.iloc[1:2, 5:7877]
norm.to_csv(outfile2, header = None, index = None)
outfile3 ="/Users/marie/Desktop/FD.csv"
source1 = pd.read_csv('FD.csv', index_col = None, header = None, low_memory = False)
FD = source1.iloc[8:7880, 6:294]
FD.to_csv(outfile3, header = None, index = None)
Explanation: From MR_HIOT_2011_v3.3.11.xlsx (MR_HIOT.csv & FD.csv), we create:
- Zn_tonorm.csv (7872 columns & rows)
- norm.csv (7872 columns)
- FD.csv (288 columns & 7872 rows)
End of explanation
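Before moving on, a quick sanity check of the extracted slices against the dimensions listed above (a sketch; the expected shapes follow directly from the iloc ranges used in this cell):
print(Zn_tonorm.shape)  # expected (7872, 7872)
print(norm.shape)       # expected (1, 7872)
print(FD.shape)         # expected (7872, 288)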
extensions = "/Users/marie/Desktop/MR_HIOT_2011_v3.3.11_extensions_MS.xlsx"
Explanation: Give the name of the excel file from which the extensions should be extracted
The file from Stefano cannot be used as it is.
Water extensions should be corrected, biogenic carbon relocated, biogenic methane recalculated and land occupation flows summed up.
End of explanation
import pandas as pd
data_xls = pd.read_excel(extensions, 'resource_act', index_col=None, header = None, encoding='utf-8')
##outfile4 ="/Users/marie/Desktop/Bn_tonorm_resource.csv"
Bn_tonorm_resource = data_xls.iloc[7:37, 5:7877]
##Bn_tonorm_resource.to_csv(outfile4, header = None, index = None)
data_xls = pd.read_excel(extensions, 'Land_act', index_col=None, header = None, encoding='utf-8')
##outfile5 ="/Users/marie/Desktop/Bn_tonorm_land.csv"
Bn_tonorm_land = data_xls.iloc[247:251, 5:7877]
##Bn_tonorm_land.to_csv(outfile5, header = None, Index = None)
data_xls = pd.read_excel(extensions, 'Emiss_act', index_col=None, header = None, encoding='utf-8')
##outfile6 ="/Users/marie/Desktop/Bn_tonorm_emiss.csv"
Bn_tonorm_emiss = data_xls.iloc[7:70, 5:7877]
##Bn_tonorm_emiss.to_csv(outfile6, header = None, index = None)
outfile ="/Users/marie/Desktop/Bn_tonorm.csv"
frame = [Bn_tonorm_resource, Bn_tonorm_land, Bn_tonorm_emiss]
Bn_tonorm = pd.concat(frame)
Bn_tonorm.to_csv(outfile, header = None, index = None)
Explanation: We create Bn_tonorm.csv including the extensions for the 7872 producing activities:
- 30 resource flows (green water was excluded)
- 240 land occupation flows
- 62 direct emissions to Air, Water and Soil
End of explanation
data_xls = pd.read_excel(extensions, 'resource_FD', index_col=None, header = None, encoding='utf-8')
##outfile8 ="/Users/marie/Desktop/FD_resource.csv"
FD_resource = data_xls.iloc[7:37, 5:293]
##FD_resource.to_csv(outfile8, header = None, index = None)
data_xls = pd.read_excel(extensions, 'Land_FD', index_col=None, header = None, encoding='utf-8')
##outfile9 ="/Users/marie/Desktop/FD_land.csv"
FD_land = data_xls.iloc[247:251, 5:293]
##FD_land.to_csv(outfile9, header = None, index = None)
data_xls = pd.read_excel(extensions, 'Emiss_FD', index_col=None, header = None,encoding='utf-8')
##outfile10 ="/Users/marie/Desktop/FD_emiss.csv"
FD_emiss = data_xls.iloc[7:70, 5:293]
##FD_emiss.to_csv(outfile10, header = None, Index = None)
outfile ="/Users/marie/Desktop/FD_ext.csv"
frame = [FD_resource, FD_land, FD_emiss]
FD_ext = pd.concat(frame)
FD_ext.to_csv(outfile, header = None, index = None)
Explanation: We create FD_ext.csv including the extensions for the 288 Final Demand activities.
End of explanation
def replace_0with1(source, result):
with open(source,"r") as source:
rdr = csv.reader(source)
with open (result, "w") as result:
wtr = csv.writer(result)
for row in rdr:
row = [x.replace('0', '1') if x == '0' else x for x in row]
wtr.writerow(row)
replace_0with1("norm0.csv", "norm1.csv")
Explanation: 2. Replace 0s with 1s in norm0 file
Replace 0s with 1s in norm.csv (matrices can't be divided by 0)
End of explanation
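As an alternative to rewriting the CSV, the same substitution can be done in memory once the vector is loaded; a sketch, assuming `nor` is the NumPy array read in section 3 below:
nor = np.where(nor == 0, 1.0, nor)  # replace zeros so the element-wise division in section 4 is safe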
import csv
import numpy as np
with open('norm1.csv','r') as dest_f:
data_iter = csv.reader(dest_f,
delimiter = ',',
quotechar = '"')
data = [data for data in data_iter]
nor = np.asarray(data, dtype='float')
with open("Zn_tonorm.csv",'r') as dest_f:
data_iter = csv.reader(dest_f,
delimiter = ',',
quotechar = '"')
data = [data for data in data_iter]
Zn_tonorm = np.array(list(data)).astype('float')
with open("Bn_tonorm.csv",'r') as dest_f:
data_iter = csv.reader(dest_f,
delimiter = ',',
quotechar = '"')
data = [data for data in data_iter]
Bn_tonorm = np.array(list(data)).astype('float')
with open("FD.csv",'r') as dest_f:
data_iter = csv.reader(dest_f,
delimiter = ',',
quotechar = '"')
data = [data for data in data_iter]
f_cons = np.array(list(data)).astype('float')
with open("FD_ext.csv",'r') as dest_f:
data_iter = csv.reader(dest_f,
delimiter = ',',
quotechar = '"')
data = [data for data in data_iter]
f_em = np.array(list(data)).astype('float')
Explanation: 3. Read CSV files as matrices
To make operations with the numpy package, read the following files extracted previously:
- Zn_tonorm.csv as a matrice
- norm1.csv as a vector
- Bn_tonorm.csv as a matrice
- FD.csv as a matrice
- FD_ext.csv as a matrice
End of explanation
Zn = Zn_tonorm/nor
Bn = Bn_tonorm/nor
Explanation: 4. Run operations
To obtain Zn and Bn, Zn_tonorm and Bn_tonorm need to be divided by the norm vector.
End of explanation
# Identity matrix and the Leontief inverse S = (I - Zn)^-1
identity = np.matrix(np.identity(7872), copy=False)
An = identity-Zn
S = np.linalg.inv(An)
# Cumulated extensions per unit of output delivered to final demand
BLCI = Bn*S
from io import StringIO
import numpy as np
s=StringIO()
np.savetxt('BLCI.csv', BLCI, fmt='%.10f', delimiter=',', newline="\n")
# Scale by final consumption and add the direct extensions of the final demand activities
F = BLCI*f_cons
F2 = F+f_em
from io import StringIO
import numpy as np
s=StringIO()
np.savetxt('F2.csv', F2, fmt='%.10f', delimiter=',', newline="\n")
Explanation: We create the identity matrix, compute the Leontief inverse, and calculate the terminated results.
End of explanation |
13,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1. Generating synthetic data
2. Line fitting using Bayes' theorem
3. Quantifying the probability of a fixed model
4. Selecting between two models
    4.1 Different datasets will prefer different models
5. The larger the dataset, the more resolving power
Step1: Generating synthetic data
First, we will generate the data. We will pick evenly spaced x-values. The y-values will be picked according to the equation $y=-\frac{1}{2}x$ but we will add Gaussian noise to each point. Each y-coordinate will have an associated error. The size of the error bar will be selected randomly.
After we have picked the data, we will plot it to visualize it. It looks like a fairly straight line.
Step2: Line fitting using Bayes' theorem
Now that we have generated our data, we would like to find the line of best fit given our data. To do this, we will perform a Bayesian regression. Briefly, Bayes equation is,
$$
P(\alpha~|D, M_1) \propto P(D~|\alpha, M_1)P(\alpha~|M_1).
$$
In other words, the probability of the slope given Model 1 (a line with unknown slope) and the data is proportional to the probability of the data given the model and alpha, times the probability of alpha given the model.
Some necessary nomenclature at this point
Step3: Specificity is necessary for credibility. Let's show that by optimizing the posterior function, we can fit a line.
We optimize the line by using the function scipy.optimize.minimize. However, minimizing the logarithm of the posterior does not achieve anything! We are looking for the place at which the equation we derived above is maximal. That's OK. We will simply multiply the logarithm of the posterior by -1 and minimize that.
Step4: We can see that the model is very close to the model we drew the data from. It works!
However, the probability of this model is not very large. Why? Well, that's because the posterior probability is spread out over a large number of parameters. Bayesians like to think that a parameter is actually a number plus or minus some jitter. Therefore, the probability of the parameter being exactly one number is usually smaller the larger the jitter. In this case, the jitter is not terribly large, but the probability of this one parameter being exactly -0.5005 is quite low, even though it is the best guess for the slope given the data.
Quantifying the probability of a fixed model
Step5: We can see that the probability of this model is very similar to the probability of the alternative model we fit above. How can we pick which one to use?
Selecting between two models
An initial approach to selecting between these two models would be to take the probability of each model given the data and to find the quotient, like so
Step6: We performed the Odds Ratio calculation on logarithmic space, so negative values show that the simpler (fixed slope) model is preferred, whereas if the values are positive and large, the free-slope model is preferred.
As a guide, Bayesian statisticians usually suggest that 10^2 or above is a good ratio to abandon one model completely in favor of another.
Step7: Different datasets will prefer different models
Let's try this again. Maybe the answer will change sign this time.
Step9: Indeed, the answer changed sign. Odds Ratios, p-values and everything else should always be interpreted conservatively. I prefer odds ratios that are very large, larger than 1,000 before stating that one model is definitively preferred. Otherwise, I tend to prefer the simpler model.
The larger the dataset, the more resolving power
What distribution of answers would you get if you obtained five points? Ten? Fifteen? I've written a couple of short functions to help us find out.
In the functions below, I simulate two datasets. One dataset is being plucked from points that obey the model
$$
y = -\frac{1}{2}x,
$$
whereas the second dataset is being plucked from
$$
y = -0.46x.
$$
Clearly, the fixed model $y=-0.5x$ should only be preferred for the first dataset, and the free model is the correct one to use for the second model. Now let us find out if this is the case.
By the way, the function below trims odds ratios to keep them from becoming too large. If an odds ratio is bigger than 10, we set it equal to 10 for plotting purposes.
Step10: Here we can see that with five data points, the odds ratio will tend to prefer the simpler model. We do not have too much information---why request the extra information? Note that for the second dataset in some cases the deviations are great enough that the alternative model is strongly preferred (right panel, extra bump at 10). However, this is rare. | Python Code:
# important stuff:
import os
import pandas as pd
import numpy as np
import statsmodels.tools.numdiff as smnd
import scipy
# Graphics
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
# other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
Explanation: Table of Contents
1. Generating synthetic data
2. Line fitting using Bayes' theorem
3. Quantifying the probability of a fixed model
4. Selecting between two models
    4.1 Different datasets will prefer different models
5. The larger the dataset, the more resolving power
Welcome to our primer on Bayesian Model Selection.
As always, we begin by loading our required libraries.
End of explanation
n = 50 # number of data points
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
y = np.linspace(5, -5, n) + np.random.normal(0, yerr, n)
plt.scatter(x, y)
Explanation: Generating synthetic data
First, we will generate the data. We will pick evenly spaced x-values. The y-values will be picked according to the equation $y=-\frac{1}{2}x$ but we will add Gaussian noise to each point. Each y-coordinate will have an associated error. The size of the error bar will be selected randomly.
After we have picked the data, we will plot it to visualize it. It looks like a fairly straight line.
End of explanation
# bayes model fitting:
def log_prior(theta):
beta = theta
return -1.5 * np.log(1 + beta ** 2)
def log_likelihood(beta, x, y, yerr):
sigma = yerr
y_model = beta * x
return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)
def log_posterior(theta, x, y, yerr):
return log_prior(theta) + log_likelihood(theta, x, y, yerr)
def neg_log_prob_free(theta, x, y, yerr):
return -log_posterior(theta, x, y, yerr)
Explanation: Line fitting using Bayes' theorem
Now that we have generated our data, we would like to find the line of best fit given our data. To do this, we will perform a Bayesian regression. Briefly, Bayes equation is,
$$
P(\alpha~|D, M_1) \propto P(D~|\alpha, M_1)P(\alpha~|M_1).
$$
In other words, the probability of the slope given Model 1 (a line with unknown slope) and the data is proportional to the probability of the data given the model and alpha, times the probability of alpha given the model.
Some necessary nomenclature at this point:
* $P(D~|\alpha, M_1)\cdot P(\alpha|M_1)$ is called the posterior probability
* $P(\alpha~|M_1)$ is called the prior
* $P(D~|\alpha, M_1)$ is called the likelihood
I claim that a functional form that will allow me to fit a line through this data is:
$$
P(X|D) \propto \prod_{Data} \mathrm{exp}(-{\frac{(y_{Obs} - \alpha x)^2}{2\sigma_{Obs}^2}})\cdot (1 + \alpha^2)^{-3/2}
$$
The first term in the equation measures the deviation between the observed y-coordinates and the predicted y-coordinates from a theoretical linear model, where $\alpha$ remains to be determined. We weight the result by the observed error, $\sigma_{Obs}$. Then, we multiply by a prior that tells us what values of $\alpha$ should be considered. How to pick a good prior is somewhat difficult and a bit of an artform. One way is to pick a prior that is uninformative for a given parameter. In this case, we want to make sure that we sample slopes between [0,1] as densely as we sample [1,$\infty$]. For a more thorough derivation and explanation, please see this excellent blog post by Jake Vanderplas.
The likelihood is the first term, and the prior is the second. We code it up in the next functions, with a minor difference. It is often computationally much more tractable to compute the natural logarithm of the posterior, and we do so here.
We can now use this equation to find the model we are looking for. How? Well, the equation above basically tells us what model is most likely given that data and the prior information on the model. If we maximize the probability of the model, whatever parameter combination can satisfy that is a model that we are interested in!
End of explanation
# calculate probability of free model:
res = scipy.optimize.minimize(neg_log_prob_free, 0, args=(x, y, yerr), method='Powell')
plt.scatter(x, y)
plt.plot(x, x*res.x, '-', color='g')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior(res.x, x, y, yerr))))
print('The optimized probability is {0:.4g}x'.format(np.float64(res.x)))
Explanation: Specificity is necessary for credibility. Let's show that by optimizing the posterior function, we can fit a line.
We optimize the line by using the function scipy.optimize.minimize. However, minimizing the logarithm of the posterior does not achieve anything! We are looking for the place at which the equation we derived above is maximal. That's OK. We will simply multiply the logarithm of the posterior by -1 and minimize that.
End of explanation
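To see how much "jitter" surrounds that best-fit slope (a point that becomes important below), one can estimate the posterior width from the curvature of the log-posterior at its peak; a sketch that reuses the same scipy.misc.derivative call the model_selection function employs later in this notebook:
second_derivative = scipy.misc.derivative(log_posterior, res.x, dx=1.0, n=2, args=(x, y, yerr), order=3)
sigma_alpha = np.sqrt(-1 / second_derivative)  # Gaussian (Laplace) approximation to the posterior width
print('Slope = {0:.4f} +/- {1:.4f}'.format(np.float64(res.x), sigma_alpha))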
# bayes model fitting:
def log_likelihood_fixed(x, y, yerr):
sigma = yerr
y_model = -1/2*x
return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)
def log_posterior_fixed(x, y, yerr):
return log_likelihood_fixed(x, y, yerr)
plt.scatter(x, y)
plt.plot(x, -0.5*x, '-', color='purple')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior_fixed(x, y, yerr))))
Explanation: We can see that the model is very close to the model we drew the data from. It works!
However, the probability of this model is not very large. Why? Well, that's because the posterior probability is spread out over a large number of parameters. Bayesians like to think that a parameter is actually a number plus or minus some jitter. Therefore, the probability of the parameter being exactly one number is usually smaller the larger the jitter. In this case, the jitter is not terribly large, but the probability of this one parameter being exactly -0.5005 is quite low, even though it is the best guess for the slope given the data.
Quantifying the probability of a fixed model:
Suppose now that we had a powerful theoretical tool that allowed us to make a very, very good guess as to what line the points should fall on. Suppose this powerful theory now tells us that the line should be:
$$
y = -\frac{1}{2}x.
$$
Using Bayes' theorem, we could quantify the probability that the model is correct, given the data. Now, the prior is simply going to be 1 when the slope is -0.5, and 0 otherwise. This makes the equation:
$$
P(X|D) \propto \prod_{Data}\mathrm{exp}({-\frac{(y_{Obs} + 0.5x)^2}{2\sigma_{Obs}^2}})
$$
Notice that this equation cannot be minimized. It is a fixed statement, and its value depends only on the data.
End of explanation
def model_selection(X, Y, Yerr, **kwargs):
guess = kwargs.pop('guess', -0.5)
# calculate probability of free model:
res = scipy.optimize.minimize(neg_log_prob_free, guess, args=(X, Y, Yerr), method='Powell')
# Compute error bars
second_derivative = scipy.misc.derivative(log_posterior, res.x, dx=1.0, n=2, args=(X, Y, Yerr), order=3)
cov_free = -1/second_derivative
alpha_free = np.float64(res.x)
log_free = log_posterior(alpha_free, X, Y, Yerr)
# log goodness of fit for fixed models
log_MAP = log_posterior_fixed(X, Y, Yerr)
good_fit = log_free - log_MAP
# occam factor - only the free model has a penalty
log_occam_factor =(-np.log(2 * np.pi) + np.log(cov_free)) / 2 + log_prior(alpha_free)
# give more standing to simpler models. but just a little bit!
lg = log_free - log_MAP + log_occam_factor - 2
return lg
Explanation: We can see that the probability of this model is very similar to the probability of the alternative model we fit above. How can we pick which one to use?
Selecting between two models
An initial approach to selecting between these two models would be to take the probability of each model given the data and to find the quotient, like so:
$$
OR = \frac{P(M_1~|D)}{P(M_2~|D)} = \frac{P(D~|M_1)P(M_1)}{P(D~|M_2)P(M_2)}
$$
However, this is tricky to evaluate. First of all, the equations we derived above are not solely in terms of $M_1$ and $D$. They also include $\alpha$ for the undetermined slope model. We can get rid of this parameter via a technique known as marginalization (basically, integrating the equations over $\alpha$). Even more philosophically difficult are the terms $P(M_i)$. How is one to evaluate the probability of a model being true? The usual solution to this is to set $P(M_i) \sim 1$ and let those terms cancel out. However, in the case of models that have been tested before or where there is a powerful theoretical reason to believe one is more likely than the other, it may be entirely reasonable to specify that one model is several times more likely than the other. For now, we set the $P(M_i)$ to unity.
We can approximate the odds-ratio for our case as follows:
$$
OR = \frac{P(D|\alpha^*)}{P(D|M_2)} \cdot \frac{P(\alpha^*|M_1)\,(2\pi)^{1/2}\,\sigma_{\alpha^*}}{1},
$$
where $\alpha^*$ is the parameter value we found when we optimized the posterior earlier (by minimizing its negative logarithm). The second factor represents the complexity of each model. Its denominator is 1 because the fixed model cannot become any simpler. The free-slope model, on the other hand, is penalized by multiplying the prior probability of the observed slope by $\sqrt{2\pi}$ and then by the uncertainty in the parameter $\alpha$. This is akin to saying that the less likely we thought $\alpha$ should be *a priori*, or the more uncertain we are about its exact value, the more credit we should give to the simpler model.
End of explanation
model_selection(x, y, yerr)
Explanation: We performed the odds-ratio calculation in logarithmic space, so negative values show that the simpler (fixed-slope) model is preferred, whereas large positive values show that the free-slope model is preferred.
As a guide, Bayesian statisticians usually suggest that an odds ratio of 10^2 or above is large enough to abandon one model completely in favor of the other.
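Since model_selection returns the natural-log odds ratio, exponentiating recovers the odds ratio itself. As an illustrative check, reusing the x, y, yerr arrays above, note that the 10^2 guideline corresponds to a log odds ratio of roughly np.log(100), about 4.6:
lg = model_selection(x, y, yerr)
print('log odds ratio = {0:.2f}, odds ratio = {1:.2g}'.format(lg, np.exp(lg)))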
End of explanation
n = 50 # number of data points
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
y = x*-0.55 + np.random.normal(0, yerr, n)
plt.scatter(x, y)
model_selection(x, y, yerr)
Explanation: Different datasets will prefer different models
Let's try this again. Maybe the answer will change sign this time.
End of explanation
def simulate_many_odds_ratios(n):
Given a number `n` of data points per dataset, simulate 1,000 datasets from the null model and 1,000 from an alternative model,
and compute the log odds ratio for each.
iters = 1000
lg1 = np.zeros(iters)
lg2 = np.zeros(iters)
for i in range(iters):
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
# simulate two models: only one matches the fixed model
y1 = -0.5*x + np.random.normal(0, yerr, n)
y2 = -0.46*x + np.random.normal(0, yerr, n)
lg1[i] = model_selection(x, y1, yerr)
m2 = model_selection(x, y2, yerr)
# Truncate OR for ease of plotting
if m2 < 10:
lg2[i] = m2
else:
lg2[i] = 10
return lg1, lg2
def make_figures(n):
lg1, lg2 = simulate_many_odds_ratios(n)
lg1 = np.sort(lg1)
lg2 = np.sort(lg2)
fifty_point1 = lg1[int(np.floor(len(lg1)/2))]
fifty_point2 = lg2[int(np.floor(len(lg2)/2))]
fig, ax = plt.subplots(ncols=2, figsize=(15, 7), sharey=True)
fig.suptitle('Log Odds Ratio for n={0} data points'.format(n), fontsize=20)
sns.kdeplot(lg1, label='slope=-0.5', ax=ax[0], cumulative=False)
ax[0].axvline(x=fifty_point1, ls='--', color='k')
ax[0].set_title('Data drawn from null model')
ax[0].set_ylabel('Density')
sns.kdeplot(lg2, label='slope=-0.46', ax=ax[1], cumulative=False)
ax[1].axvline(x=fifty_point2, ls='--', color='k')
ax[1].set_title('Data drawn from alternative model')
fig.text(0.5, 0.04, 'Log Odds Ratio', ha='center', size=18)
return fig, ax
fig, ax = make_figures(n=5)
Explanation: Indeed, the answer changed sign. Odds ratios, p-values, and everything else should always be interpreted conservatively. I prefer odds ratios that are very large (larger than 1,000) before stating that one model is definitively preferred. Otherwise, I tend to prefer the simpler model.
The larger the dataset, the more resolving power
What distribution of answers would you get if you obtained five points? Ten? Fifteen? I've written a couple of short functions to help us find out.
In the functions below, I simulate two datasets. One dataset is drawn from points that obey the model
$$
y = -\frac{1}{2}x,
$$
whereas the second dataset is drawn from
$$
y = -0.46x.
$$
Clearly, the fixed model $y=-0.5x$ should only be preferred for the first dataset, and the free model is the correct one to use for the second dataset. Now let us find out whether this is the case.
By the way, the function below trims the log odds ratios to keep them from becoming too large: if a log odds ratio is bigger than 10, we set it equal to 10 for plotting purposes.
End of explanation
fig, ax = make_figures(n=50)
Explanation: Here we can see that with five data points, the odds ratio will tend to prefer the simpler model. With so little information, there is no reason to pay for the extra parameter. Note that for the second dataset the deviations are in some cases large enough that the alternative model is strongly preferred (right panel, extra bump at 10). However, this is rare.
End of explanation |
13,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step33: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step36: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step38: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step39: Hyperparameters
Tune the following parameters
Step41: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step43: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step46: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 4
sample_id = 107
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
# Min-max scaling: zi = (xi - min(RGB)) / (max(RGB) - min(RGB)) => zi = (xi - 0) / (255 - 0)
return x / float(255)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
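A quick sanity check of the sort I would run here (illustrative only, not required by the project): the output should keep the input's shape and land in the range [0, 1].
sample_batch = np.random.randint(0, 256, size=(4, 32, 32, 3))
normed = normalize(sample_batch)
print(normed.shape == sample_batch.shape, normed.min() >= 0.0, normed.max() <= 1.0)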
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
n_classes = 10
result = np.zeros([len(x), n_classes])
for i in range(0, len(x)):
one_hot = np.zeros(n_classes)
one_hot[x[i]] = 1
result[i] = one_hot
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value on every call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
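One way to take that hint literally (a sketch, assuming scikit-learn is installed in the project environment): sklearn's LabelBinarizer produces the same fixed 10-class encoding, as long as it is fit once on the full 0-9 range outside the function.
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(list(range(10)))          # fix the label-to-column mapping once, outside the function
print(lb.transform([0, 3, 9]))   # each row is a one-hot vector of length 10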
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None] + list(image_shape), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.int32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for a shape dimension in TensorFlow allows for a dynamic size.
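As a quick illustration of that note (not part of the graded cell), printing the shape of a fresh placeholder shows the batch dimension left dynamic:
print(neural_net_image_input((32, 32, 3)).get_shape())   # -> (?, 32, 32, 3)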
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weights = tf.Variable(
tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs],
stddev=0.1,
seed=1
)
)
biases = tf.Variable(
tf.truncated_normal([conv_num_outputs], stddev=0.1, seed=1)
)
conv = tf.nn.conv2d(
x_tensor,
weights,
[1, conv_strides[0], conv_strides[1], 1],
"SAME"
)
conv = tf.add(conv, biases)
conv = tf.nn.relu(conv)
pool = tf.nn.max_pool(
conv,
[1, pool_ksize[0], pool_ksize[1],1 ],
[1, pool_strides[0], pool_strides[1], 1],
"SAME"
)
return pool
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
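As an illustrative shape walk-through (an extra check, not part of the required cell): with 'SAME' padding, a stride-1 convolution keeps the 32x32 spatial size, and a 2x2 max pool with stride 2 then halves it.
tmp_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
print(conv2d_maxpool(tmp_x, 16, (3, 3), (1, 1), (2, 2), (2, 2)).get_shape())   # -> (?, 16, 16, 16)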
End of explanation
from functools import reduce
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
flat_size = reduce(lambda x, y: x * y, x_tensor.get_shape().as_list()[1:])
return tf.reshape(x_tensor, [-1, flat_size])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
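For comparison, the shortcut mentioned above would be a one-liner. This is only a sketch and assumes a TensorFlow 1.x build where tf.contrib.layers is available:
print(tf.contrib.layers.flatten(tf.placeholder(tf.float32, [None, 8, 8, 4])).get_shape())   # -> (?, 256)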
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(
tf.truncated_normal(
[x_tensor.get_shape().as_list()[-1], num_outputs],
stddev=0.1,
seed=1
)
)
biases = tf.Variable(
tf.truncated_normal([num_outputs], stddev=0.1, seed=1)
)
result = tf.matmul(x_tensor, weights)
result = tf.add(result, biases)
result = tf.nn.relu(result)
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
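The shortcut version would look like the sketch below, assuming a TensorFlow version that ships tf.layers.dense (tf.contrib.layers.fully_connected is the older analogue). The same call with activation=None covers the output layer in the next section.
demo_in = tf.placeholder(tf.float32, [None, 128])
print(tf.layers.dense(demo_in, 40, activation=tf.nn.relu).get_shape())   # -> (?, 40)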
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(
tf.truncated_normal(
[x_tensor.get_shape().as_list()[-1], num_outputs],
stddev=0.1,
seed=1
)
)
biases = tf.Variable(
tf.truncated_normal([num_outputs], stddev=0.1, seed=1)
)
result = tf.matmul(x_tensor, weights)
result = tf.add(result, biases)
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer_1 = conv2d_maxpool(
x, 128, (2, 2), (2, 2), (2, 2), (2, 2)
)
conv_layer_2 = conv2d_maxpool(
conv_layer_1, 1024, (2, 2), (2, 2), (2, 2), (2, 2)
)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv_layer_2)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
conn_layer_1 = fully_conn(flat, 512)
conn_layer_1 = tf.nn.dropout(conn_layer_1, keep_prob)
conn_layer_2 = fully_conn(conn_layer_1, 32)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
result = output(conn_layer_2, 10)
# TODO: return output
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
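For reference, here is what the same idea looks like written with the TF Layers shortcut mentioned earlier. This is only a sketch: the layer sizes are my own assumptions rather than the solution above, and it assumes a TensorFlow 1.x build where tf.layers and tf.contrib.layers are available.
def conv_net_layers_sketch(x, keep_prob):
    # Conv + max pool, flatten, one dropout-regularized fully connected layer, then raw logits.
    h = tf.layers.conv2d(x, 64, (3, 3), padding='same', activation=tf.nn.relu)
    h = tf.layers.max_pooling2d(h, (2, 2), (2, 2))
    h = tf.contrib.layers.flatten(h)
    h = tf.nn.dropout(tf.layers.dense(h, 512, activation=tf.nn.relu), keep_prob)
    return tf.layers.dense(h, 10)   # no activation here: these are the logits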
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(
optimizer,
feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
loss = session.run(
cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}
)
valid_acc = session.run(
accuracy, feed_dict={x: valid_features, y: valid_labels,keep_prob: 1.0}
)
print("Loss: {:3.5f}, Validation Accuracy: {:0.5f}".format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 25
batch_size = 512
keep_probability = 0.75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common power-of-two size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
13,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imaging Cortical Layers
Step1: Extract images from the imaging site of our proposed cortical layers | Python Code:
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import urllib2
import scipy.stats as stats
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,
x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
syn_normalized = syn_unmasked
Explanation: Imaging Cortical Layers
End of explanation
# Looking at images across y, and of the layers in the y-direction
#########################################################################################
from image_builder import get_image
xs = np.unique(data_thresholded[:,0])
ys = np.unique(data_thresholded[:,1])
# Layer across y
get_image((0,1),(0,len(ys)-1),xs,ys, "across_y")
# Each y-layer defined by bounds of local minima in total syn density at each y
y_bounds = [(1564,1837), (1837,2071), (2071,2305), (2305,2539), (2539,3124)]
for _, bounds in enumerate(y_bounds):
y_lower = np.where(ys==bounds[0])[0][0]
y_upper = np.where(ys==bounds[1])[0][0]
print y_lower,y_upper
i = get_image((0,1),(y_lower,y_upper),xs,ys,str(bounds[0])+"_"+str(bounds[1]))
Explanation: Extract images from the imaging site of our proposed cortical layers
End of explanation |
13,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Revised version
Step1: Step 4a
Step2: Step 4b
Step3: Step 5
Step4: Apply to data | Python Code:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import itertools
import urllib2
import scipy.stats as stats
%matplotlib inline
np.set_printoptions(precision=3, threshold=1000000, suppress=True)
np.random.seed(1)
alpha = .025
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
num_samples = 150 # how many different sized samples to draw
N = np.sum(csv[:, -1]) # total data set size
L = np.unique(csv[:, 2]) # list of labels
print L.shape
# sample sizes to iterate over
sample_sizes = np.logspace(1.0, 5.0, num=num_samples, base=10.0)
print sample_sizes
Explanation: Revised version: instead of considering the synapse values alone, we now consider synapses/unmasked, as per Greg's explanation of the unmasked variable.
Define Model
Call a row vector from our data set $X_i=(x_i, y_i, z_i, u_i, s_i)$, where the positions correspond to the columns of the original data set. Each row vector represents a 'bin' of pixels; let the number of rows in the data be N. We will look at the number of synapses per bin, and whether that number follows a uniform distribution, conditioned on unmasked.
Assumptions
The number of synapses per bin follows a multinomial distribution, where the probability of synapses is conditioned on the unmasked value for that bin.
Statistical test
We will test whether or not the number of synapses is distributed uniformly across the bins. In other words, does X follow a multinomial distribution in which each cell has equal probability?
$H_0: \textrm{ all cells have equal probability }$
$H_A: \textrm{ cells do not have equal probability }$
Test statistic
We'll use Pearson's chi-squared test to determine whether to reject the null. First, define $\bar \pi$ to be the average synaptic density (synapses/unmasked) across all bins. Let $E_i$, the expected number of synapses at bin $i$, be $E_i=\bar \pi u_i$, where $u_i$ is the unmasked value at that bin. Let $X_i$ be the observed number of synapses.
Our test statistic is as follows
$$
T = \sum_{i = 1}^{N} \frac{(X_i - E_i)^2}{E_i}
$$
and it approximately follows a chi-squared distribution with N-1 degrees of freedom. Therefore, given a significance level, $\alpha$, we can use the inverse CDF of the chi-squared distribution to determine a critical value. When T is greater than the critical value, we can reject the null.
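For instance (illustrative only), the critical value comes straight from the chi-squared inverse CDF; with, say, 10 bins and the alpha defined above:
critical_value = stats.chi2.ppf(1 - alpha, 10 - 1)   # inverse CDF at 1 - alpha, with N-1 = 9 degrees of freedom
print(critical_value)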
End of explanation
# simulated sampling under the null
repeats = 100 # how many repitions per sample size
pi_null = np.array([1.0/float(len(L))]*len(L)) # pi vector under the null (uniform probs)
power_null = []
for s in sample_sizes:
power = 0
E_i = pi_null*s # expected per label
for r in xrange(repeats):
null_data = np.random.multinomial(s, pi_null)
chi_sq = stats.chisquare(null_data, E_i)
p_value = chi_sq[1]
# can we reject the null hypothesis
if p_value < alpha:
power = power + 1
power_null.append(float(power)/float(repeats))
Explanation: Step 4a: Sample data from null
End of explanation
# simulated sampling under alternate
repeats = 100 # how many repitions per sample size
power_alt = []
pi_alt = np.random.rand(len(L)) # create a pi vector (random probabilities)
pi_alt = pi_alt/np.sum(pi_alt) # normalize
for s in sample_sizes:
power = 0
E_i = pi_null*s # all labels have equal expectancy
for r in xrange(repeats):
alt_data = np.random.multinomial(s, pi_alt) # use pi vector to gen data
chi_sq = stats.chisquare(alt_data, E_i)
p_value = chi_sq[1]
# can we reject the null hypothesis
if p_value < alpha:
power = power + 1
power_alt.append(float(power)/float(repeats))
Explanation: Step 4b: Sample data from alternate
End of explanation
plt.scatter(sample_sizes, power_null, hold=True, label='null', s=4)
plt.scatter(sample_sizes, power_alt, color='green', hold=True, label='alt', s=4)
plt.xlabel('sample size')
plt.xscale('log')
plt.ylabel('power')
plt.axhline(alpha, color='red', linestyle='--', label='alpha')
plt.legend(loc=5)
plt.show()
Explanation: Step 5: Plot power vs n
End of explanation
from __future__ import division
csv = csv[np.where(csv[:, -2] != 0)]
X = csv[:, -1]
density = csv[:, -1]/csv[:,-2]
# get average density (probability)
avg = np.average(density)
# expected values are everage probability multipled by unmasked per bin
E = csv[:, -2]*avg
print X[:50]
print E[:50]
print stats.chisquare(X, E)
Explanation: Apply to data
End of explanation |
13,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing MusicBrainz data with Python/JS, an introduction
This introductory notebook will explain how I get database from MusicBrainz and how I transform it to Python format for display in tables or plots.
A static HTML version of this notebook and the next ones should be available on github.io.
Prerequisites
Step1: Accessing the database from Python
Once the local database is set I can access it using e.g. Python with the psycopg2 library to perform SQL queries. Let's try a simple query.
With musicbrainz-docker, my database is on a virtual machine. I can access it from my main machine by setting the following parameters
Step3: Of course your parameters (especially IP) might be different from mine.
In order to simplify this procedure I developed a new branch in the musicbrainz-docker project that creates a Jupyter VM. If you use this branch, you don't need to set the parameters above, they are set when you start your notebook.
We need to define a SQL query as a Python string that psycopg2 will send to our database
Step4: Let's apply our query
Step5: We got one result! So that means the correct Ludwig van Beethoven (1770-1828) exists in the MusicBrainz database. I also extracted his MBID (unique identifier) so that you can check Beethoven's page is available on the main musicbrainz server.
If you only want to manipulate basic data as Python strings and numbers, that's all you need, and you can start writing other queries.
But in my case I want to do more complex stuff on the data, so I want to use another Python library that will help me to manipulate and plot the data. I'm going to use PanDas for that.
Using PanDas to manipulate data
The PanDas library allows manipulations of complex data in Python as Series or DataFrames. It also integrates some of the matplotlib plotting library capabilities directly on the DataFrames object. Let's do the same query as earlier using pandas | Python Code:
%load_ext watermark
%watermark --python -r
%watermark --date --updated
Explanation: Visualizing MusicBrainz data with Python/JS, an introduction
This introductory notebook will explain how I get the database from MusicBrainz and how I transform it into Python objects for display in tables or plots.
A static HTML version of this notebook and the next ones should be available on github.io.
Prerequisites: having PostgreSQL to store the database (or being able to create virtual machines that will run PostgreSQL). I will use Python to manipulate the data, but you can probably do the same in other languages. I will not go into details on how I build the SQL queries to fetch the data; you will need to look into the MusicBrainz schema if you try something too different from my examples.
Getting the MusicBrainz data
The first step is to get a local copy of the MusicBrainz database in order to make direct queries to it without going through the website or the webservice (neither of which allows writing complex queries).
The raw data itself is available for download and the files are updated twice a week. As of early 2017 the zipped database files are close to 2.5 GB.
Several possibilities exist to build the database locally, using the raw data above. I'm only explaining the basics here:
if you already have or can have PostgreSQL installed (MusicBrainz uses version 9.5 for the moment) on your machine, you can use the mbslave project that will recreate the database structure on your machine. You will also be able to synchronise your database and fetch the latest changes when you want.
another possibility is to use virtual machines to store the database and create a local copy of the website also (this is not required for what I intend to show here). I'm using the musicbrainz-docker project that uses Docker to create several machines for the different MusicBrainz components (database, website, search)
In both cases you should expect to download several GB of data and need several GB of RAM to have the PostgreSQL database running smoothly.
Customize the database
Note: this step is again absolutely not required. It also greatly increases the space you need to run the database (the extra dump to download is about 4 GB).
In my case, I want to explore metadata about the data modifications, i.e. the edits performed by MusicBrainz contributors. In order to do so I also had to download the mbdump-edit.tar.bz2 and mbdump-editor.tar.bz2 files and add them to the local database build process (I did that by patching the createdb.sh script in musicbrainz-docker).
Python toolbox
For data analysis I will use Python3 libraries:
- PanDas for manipulating data as tables
- psycopg2 and sqlalchemy to access the SQL database
- plotly for plots
End of explanation
import os
import psycopg2
# define global variables to store our DB credentials
PGHOST = 'localhost'
PGDATABASE = os.environ.get('PGDATABASE', 'musicbrainz')
PGUSER = os.environ.get('PGUSER', 'musicbrainz')
PGPASSWORD = os.environ.get('PGPASSWORD', 'musicbrainz')
Explanation: Accessing the database from Python
Once the local database is set I can access it using e.g. Python with the psycopg2 library to perform SQL queries. Let's try a simple query.
With musicbrainz-docker, my database is on a virtual machine. I can access it from my main machine by setting the following parameters:
End of explanation
sql_beethoven =
SELECT gid, name, begin_date_year, end_date_year
FROM artist
WHERE name='Ludwig van Beethoven'
Explanation: Of course your parameters (especially IP) might be different from mine.
In order to simplify this procedure I developed a new branch in the musicbrainz-docker project that creates a Jupyter VM. If you use this branch, you don't need to set the parameters above, they are set when you start your notebook.
We need to define a SQL query as a Python string that psycopg2 will send to our database
End of explanation
with psycopg2.connect(host=PGHOST, database=PGDATABASE,
user=PGUSER, password=PGPASSWORD) as cnx:
crs = cnx.cursor()
crs.execute(sql_beethoven)
for result in crs:
print(result)
Explanation: Let's apply our query
End of explanation
# pandas SQL query require an sqlalchemy engine object
# rather than the direct psycopg2 connection
import sqlalchemy
import pandas
engine = sqlalchemy.create_engine(
'postgresql+psycopg2://{PGUSER}:{PGPASSWORD}@{PGHOST}/{PGDATABASE}'.format(**locals()),
isolation_level='READ UNCOMMITTED'
)
pandas.read_sql(sql_beethoven, engine)
Explanation: We got one result! So that means the correct Ludwig van Beethoven (1770-1828) exists in the MusicBrainz database. I also extracted his MBID (unique identifier) so that you can check Beethoven's page is available on the main musicbrainz server.
If you only want to manipulate basic data as Python strings and numbers, that's all you need, and you can start writing other queries.
But in my case I want to do more complex stuff on the data, so I want to use another Python library that will help me to manipulate and plot the data. I'm going to use PanDas for that.
Using PanDas to manipulate data
The PanDas library allows manipulations of complex data in Python as Series or DataFrames. It also integrates some of the matplotlib plotting library capabilities directly on the DataFrames object. Let's do the same query as earlier using pandas:
End of explanation |
13,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 01
Big Data Ingesting
Step1: The next step will be to copy the data file that we will be using for this tutorial into the same folder as these notes. We will be looking at a couple of different types of data sets. We'll start with a simple data set that appears to be a functional set of data where one output column depends on the input columns of the data. In this case, we're looking at a set of patient data where there are a handful of input variables that may feed into the likelyhood that the patient will develop type 2 diabetes. The output column is a quantitative measure of disease progression one year after baseline measurements. (http
Step2: Now that we've loaded the data in, the first thing to do is to take a look at the raw data. We can look at the first 5 rows (the head of the data set) by doing the following.
Step3: Before we move forward, note that there is a strange value in the first row under 'GLU'
Step4: So we see the first row is gone. That's what we wanted. However, this doesn't really tell us much by itself. It is better to start investigating how the output variable ('Target' in this case) depends on the inputs. We'll visualize the data one at a time to look at this. We'll make a scatter plot where we look at the Target as a function of the Age column. The first entry provides the 'x' values where the second provides the 'y' values. The final input tells the plotting software to plot the data points as dots, not connected lines. We'll almost always use this feature.
Step5: This doesn't tell us much. It looks like there isn't a large dependence on age - othewise we would have seen something more specific than a large blob of data. Let's try other inputs. We'll plot a bunch of them in a row.
Jupyter Hint
Step6: It looks like there are some of these, like BMI, that as the BMI goes up, so does the Target.
Import Classification Data
There is another type of data set where we have any number of input variables, but the output is no longer a continuous number, but rather it is a class. By that we mean that it is one of a finite number of possibilities. For example, in this next data set, we are looking at the characteristics of three different iris flowers. The measurements apply to one of the three types
Step7: As you can see, the 'target' column is no longer numerical, but a text entry that is one of the three possible iris varieties. We also see that the default column headings are a bit long and will get tiring to type out when we want to reference them. Let's rename the columns first.
Step8: Now we want to visualize the data. We don't know what to expect, so let's just pick a couple of variables and see what the data look like.
Step9: So we see that there are entries at a number of different points, but it would be really nice to be able to identify which point correpsonds to which variety. We will use another python library to do this. We'll also set the default style to 'white' which looks better.
Step10: The seaborn library provides a number of different plotting options. One of them is lmplot. It is designed to provide a linear model fit (which we don't want right now), so we'll set the fig_reg option to False so that it doesn't try to fit them.
Note that we need two additional parameters here
Step11: Now we can see that the cluster off to the left all belongs to the Setosa variety. It would be really nice to try plotting the other variables as well. We could do that manually or use a nice shortcut in seaborn called pairplot. This plots the hue column against all possible pairs of the other data columns.
Step12: We see that there are some of these plots that show there might be a way to distinuish the three different varieties. We'll look at how to do that later on, but this gives us a start.
Import Image Data
The last type of data we are going to look at are image data. This type of data provides information about each pixel (or element) in an image. We'll start by working with gray-scale images where each pixel could be a value anywhere between 0 (black) and 255 (white). We'll read in the data then look at how to create the image. This data set are handwritten digits from 0 to 9 that have been digitized. We will eventually try to teach the computer to read the handwritten digits.
Step13: This data set has 65 columns. The first 64 correspond to the grayscale value for each of the pixels in an 8 by 8 image. The last column (the 'target') indicates what digit the image is supposed to be. We'll pick one row to start with (row 41 in this case). We'll use some in-line commenting to explain each step here. | Python Code:
import pandas as pd
Explanation: Class 01
Big Data Ingesting: CSVs, Data frames, and Plots
Welcome to PHY178/CSC171. We will be using the Python language to import data, run machine learning, visualize the results, and communicate those results.
Much of the data that we will use this semester is stored in CSV files. This stands for Comma-Separated Values. The data files are stored in rows: one row per line, with the column values separated by commas. Take a quick look at the data in Class01_diabetes_data.csv by clicking on it in the "Files" tab. You can see that the entries all bunch up together since they are separated by the comma delimiter, not by spaces.
Where to get data
We will spend quite a bit of time looking for public data as we get going in this class. Here are a couple of places to look for data sets to work with:
* The UCI repository: https://archive.ics.uci.edu/ml/datasets.html
* Kaggle Public Datasets: https://www.kaggle.com/datasets
* Caesar's repository: https://github.com/caesar0301/awesome-public-datasets
Explore a few of these and try downloading one of the files. For example, the data in the UCI repository can be downloaded from the "Data Folder" links. You have to right-click the file, then save it to the local computer. Their files aren't labeled as "CSV" files (the file extension is .data), but they are CSV files.
How to put it on the cloud
Once you have a data file, you need to upload it to the cloud so that we can import it and plot it. The easiest way to do this is to click on the "Files" link in the toolbar. Click on the "Create" button and then drag the file into the upload box. Put the file in the same folder as the Class01 notebook and you'll be able to load it later on.
Import Regression Data
The first thing we want to do is to import data into our notebook so that we can examine it, evaluate it, and use machine learning to learn from it. We will be using a Python library that makes all of that much easier.
Jupyter Hint: Run the command in the next window to import the Pandas library. You evaluate cells in the notebook by highlighting them (by clicking on them), then pressing Shift-Enter to execute the cell.
End of explanation
diabetes = pd.read_csv('Class01_diabetes_data.csv')
Explanation: The next step will be to copy the data file that we will be using for this tutorial into the same folder as these notes. We will be looking at a couple of different types of data sets. We'll start with a simple, functional data set in which one output column depends on the input columns. In this case, we're looking at a set of patient data with a handful of input variables that may feed into the likelihood that the patient will develop type 2 diabetes. The output column is a quantitative measure of disease progression one year after baseline measurements. (http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html)
End of explanation
diabetes.head()
Explanation: Now that we've loaded the data in, the first thing to do is to take a look at the raw data. We can look at the first 5 rows (the head of the data set) by doing the following.
End of explanation
diabetes.dropna(inplace=True)
diabetes.head()
Explanation: Before we move forward, note that there is a strange value in the first row under 'GLU': NaN. This means 'not a number' and indicates there was a missing value or other problem with the data. We therefore want to drop any row that has missing values in it. There is a simple pandas command that will do that: dropna(inplace=True). The argument inplace=True tells the computer to drop the rows in our current dataset rather than make a new copy.
End of explanation
diabetes.plot(x='Age',y='Target',kind='scatter')
Explanation: So we see the first row is gone. That's what we wanted. However, this doesn't really tell us much by itself. It is better to start investigating how the output variable ('Target' in this case) depends on the inputs. We'll visualize the inputs one at a time to look at this. We'll make a scatter plot where we look at the Target as a function of the Age column. The first argument provides the 'x' values and the second provides the 'y' values. The final input tells the plotting software to plot the data points as dots, not connected lines. We'll almost always use this feature.
End of explanation
diabetes.plot(x='Sex',y='Target',kind='scatter')
diabetes.plot(x='BMI',y='Target',kind='scatter')
diabetes.plot(x='BP',y='Target',kind='scatter')
diabetes.plot(x='TC',y='Target',kind='scatter')
diabetes.plot(x='LDL',y='Target',kind='scatter')
diabetes.plot(x='HDL',y='Target',kind='scatter')
diabetes.plot(x='TCH',y='Target',kind='scatter')
diabetes.plot(x='LTG',y='Target',kind='scatter')
diabetes.plot(x='GLU',y='Target',kind='scatter')
Explanation: This doesn't tell us much. It looks like there isn't a large dependence on age - otherwise we would have seen something more specific than a large blob of data. Let's try other inputs. We'll plot a bunch of them in a row.
Jupyter Hint: Clicking in the white space next to the output cell will expand and contract the output contents. This is helpful when you have lots of output.
End of explanation
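A quick numeric complement to all of those scatter plots is the correlation of each input with the Target; a hedged sketch using the diabetes frame defined above:
# correlation of every column with the Target; values near +1 or -1 suggest a strong linear trend
diabetes.corr()['Target'].sort_values(ascending=False)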
irisDF = pd.read_csv('Class01_iris_data.csv')
irisDF.head()
Explanation: It looks like for some of these inputs, like BMI, the Target increases as the input increases.
Import Classification Data
There is another type of data set where we have any number of input variables, but the output is no longer a continuous number; rather, it is a class. By that we mean that it is one of a finite number of possibilities. For example, in this next data set, we are looking at the characteristics of three different iris flowers. The measurements apply to one of the three types:
* Setosa
* Versicolour
* Virginica
Let's take a look at this data set and see what it takes to visualize it. First load the data in and inspect the first few rows.
End of explanation
irisDF.columns=['sepalLen','sepalWid','petalLen','petalWid','target']
irisDF.head()
Explanation: As you can see, the 'target' column is no longer numerical, but a text entry that is one of the three possible iris varieties. We also see that the default column headings are a bit long and will get tiring to type out when we want to reference them. Let's rename the columns first.
End of explanation
irisDF.plot(x='sepalLen',y='sepalWid',kind='scatter')
Explanation: Now we want to visualize the data. We don't know what to expect, so let's just pick a couple of variables and see what the data look like.
End of explanation
import seaborn as sns
sns.set_style('white')
Explanation: So we see that there are entries at a number of different points, but it would be really nice to be able to identify which point corresponds to which variety. We will use another python library to do this. We'll also set the default style to 'white' which looks better.
End of explanation
sns.lmplot(x='sepalLen', y='sepalWid', data=irisDF, hue='target', fit_reg=False)
Explanation: The seaborn library provides a number of different plotting options. One of them is lmplot. It is designed to provide a linear model fit (which we don't want right now), so we'll set the fit_reg option to False so that it doesn't try to fit them.
Note that we need two additional parameters here: the first is to tell seaborn to use the irisDF data. That means it will look in that data set for the x and y columns we provide. The second is the hue option. This tells seaborn what column to use to determine the color (or hue) of the points. In this case, it will notice that there are three different options in that column and color them appropriately.
End of explanation
sns.pairplot(irisDF, hue="target")
Explanation: Now we can see that the cluster off to the left all belongs to the Setosa variety. It would be really nice to try plotting the other variables as well. We could do that manually or use a nice shortcut in seaborn called pairplot. This plots every pair of the other data columns against each other, coloring the points by the hue column.
End of explanation
digitDF = pd.read_csv('Class01_digits_data.csv')
digitDF.head()
Explanation: We see that some of these plots show there might be a way to distinguish the three different varieties. We'll look at how to do that later on, but this gives us a start.
Import Image Data
The last type of data we are going to look at is image data. This type of data provides information about each pixel (or element) in an image. We'll start by working with gray-scale images where each pixel can take a value anywhere between 0 (black) and 255 (white). We'll read in the data, then look at how to create the image. This data set contains handwritten digits from 0 to 9 that have been digitized. We will eventually try to teach the computer to read the handwritten digits.
End of explanation
testnum = 61
#
# First, get the first 64 columns which correspond to the image data
#
testimage = digitDF.loc[testnum][0:64].values  # .values gives a plain NumPy array that we can reshape
#
# Then reshape this from a 1 by 64 array into a matrix that is 8 by 8.
#
testimage = testimage.reshape((8,8))
#
# We'll print out what the image is supposed to be. Note the format of the print statement.
# The '{}' means 'insert the argument from the format here'.
# The .format means 'pass these values into the string.
#
print('Expected Digit: {}'.format(digitDF.loc[testnum][64]))
#
# Finally, we need one more library to plot the images.
#
import matplotlib.pyplot as plt
#
# We tell Python to plot a gray scale image, then to show our reshaped data as an image.
#
plt.gray()
plt.matshow(testimage)
Explanation: This data set has 65 columns. The first 64 correspond to the grayscale value for each of the pixels in an 8 by 8 image. The last column (the 'target') indicates what digit the image is supposed to be. We'll pick one row to start with (row 61 in this case). We'll use some in-line commenting to explain each step here.
End of explanation |
13,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to PyTorch
Introduction to Torch's tensor library
All of deep learning is computations on tensors, which are
generalizations of a matrix that can be indexed in more than 2
dimensions. We will see exactly what this means in-depth later. First,
let's look at what we can do with tensors.
Step1: Creating Tensors
~~~~~~~~~~~~~~~~
Tensors can be created from Python lists with the torch.Tensor()
function.
Step2: What is a 3D tensor anyway? Think about it like this. If you have a
vector, indexing into the vector gives you a scalar. If you have a
matrix, indexing into the matrix gives you a vector. If you have a 3D
tensor, then indexing into the tensor gives you a matrix!
A note on terminology
Step3: You can also create tensors of other datatypes. The default, as you can
see, is Float. To create a tensor of integer types, try
torch.LongTensor(). Check the documentation for more data types, but
Float and Long will be the most common.
You can create a tensor with random data and the supplied dimensionality
with torch.randn()
Step4: Operations with Tensors
~~~~~~~~~~~~~~~~~~~~~~~
You can operate on tensors in the ways you would expect.
Step5: See the documentation <http://pytorch.org/docs/torch.html> for a complete list of the operations available to you.
Step6: Reshaping Tensors
~~~~~~~~~~~~~~~~~
Use the .view() method to reshape a tensor. This method receives heavy
use, because many neural network components expect their inputs to have
a certain shape. Often you will need to reshape before passing your data
to the component.
Step7: Computation Graphs and Automatic Differentiation
The concept of a computation graph is essential to efficient deep
learning programming, because it allows you to not have to write the
back propagation gradients yourself. A computation graph is simply a
specification of how your data is combined to give you the output. Since
the graph totally specifies what parameters were involved with which
operations, it contains enough information to compute derivatives. This
probably sounds vague, so let's see what is going on using the
fundamental class of Pytorch: autograd.Variable.
Step8: So Variables know what created them. z knows that it wasn't read in from
a file, it wasn't the result of a multiplication or exponential or
whatever. And if you keep following z.grad_fn, you will find yourself at
x and y.
But how does that help us compute a gradient?
Step9: So now, what is the derivative of this sum with respect to the first
component of x? In math, we want
\begin{align}\frac{\partial s}{\partial x_0}\end{align}
Well, s knows that it was created as a sum of the tensor z. z knows
that it was the sum x + y. So
\begin{align}s = \overbrace{x_0 + y_0}^\text{$z_0$} + \overbrace{x_1 + y_1}^\text{$z_1$} + \overbrace{x_2 + y_2}^\text{$z_2$}\end{align}
And so s contains enough information to determine that the derivative
we want is 1!
Of course this glosses over the challenge of how to actually compute
that derivative. The point here is that s is carrying along enough
information that it is possible to compute it. In reality, the
developers of Pytorch program the sum() and + operations to know how to
compute their gradients, and run the back propagation algorithm. An
in-depth discussion of that algorithm is beyond the scope of this
tutorial.
Let's have Pytorch compute the gradient, and see that we were right
Step10: Understanding what is going on in the block below is crucial for being a
successful programmer in deep learning. | Python Code:
# Author: Robert Guthrie
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
Explanation: Introduction to PyTorch
Introduction to Torch's tensor library
All of deep learning is computations on tensors, which are
generalizations of a matrix that can be indexed in more than 2
dimensions. We will see exactly what this means in-depth later. First,
let's look at what we can do with tensors.
End of explanation
# Create a torch.Tensor object with the given data. It is a 1D vector
V_data = [1., 2., 3.]
V = torch.Tensor(V_data)
print(V)
# Creates a matrix
M_data = [[1., 2., 3.], [4., 5., 6]]
M = torch.Tensor(M_data)
print(M)
# Create a 3D tensor of size 2x2x2.
T_data = [[[1., 2.], [3., 4.]],
[[5., 6.], [7., 8.]]]
T = torch.Tensor(T_data)
print(T)
Explanation: Creating Tensors
~~~~~~~~~~~~~~~~
Tensors can be created from Python lists with the torch.Tensor()
function.
End of explanation
# Index into V and get a scalar
print(V[0])
# Index into M and get a vector
print(M[0])
# Index into T and get a matrix
print(T[0])
Explanation: What is a 3D tensor anyway? Think about it like this. If you have a
vector, indexing into the vector gives you a scalar. If you have a
matrix, indexing into the matrix gives you a vector. If you have a 3D
tensor, then indexing into the tensor gives you a matrix!
A note on terminology:
when I say "tensor" in this tutorial, it refers
to any torch.Tensor object. Matrices and vectors are special cases of
torch.Tensors, where their dimension is 1 and 2 respectively. When I am
talking about 3D tensors, I will explicitly use the term "3D tensor".
End of explanation
x = torch.randn((3, 4, 5))
print(x)
Explanation: You can also create tensors of other datatypes. The default, as you can
see, is Float. To create a tensor of integer types, try
torch.LongTensor(). Check the documentation for more data types, but
Float and Long will be the most common.
You can create a tensor with random data and the supplied dimensionality
with torch.randn()
End of explanation
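For instance, an integer tensor can be built the same way; a minimal sketch, assuming the same old-style PyTorch API used throughout this tutorial:
# Create an integer (Long) tensor from a Python list
L_data = [[1, 2, 3], [4, 5, 6]]
L = torch.LongTensor(L_data)
print(L)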
x = torch.Tensor([1., 2., 3.])
y = torch.Tensor([4., 5., 6.])
z = x + y
print(z)
Explanation: Operations with Tensors
~~~~~~~~~~~~~~~~~~~~~~~
You can operate on tensors in the ways you would expect.
End of explanation
# By default, it concatenates along the first axis (concatenates rows)
x_1 = torch.randn(2, 5)
y_1 = torch.randn(3, 5)
z_1 = torch.cat([x_1, y_1])
print(z_1)
# Concatenate columns:
x_2 = torch.randn(2, 3)
y_2 = torch.randn(2, 5)
# second arg specifies which axis to concat along
z_2 = torch.cat([x_2, y_2], 1)
print(z_2)
# If your tensors are not compatible, torch will complain. Uncomment to see the error
# torch.cat([x_1, x_2])
Explanation: See the documentation <http://pytorch.org/docs/torch.html>__ for a
complete list of the massive number of operations available to you. They
expand beyond just mathematical operations.
One helpful operation that we will make use of later is concatenation.
End of explanation
x = torch.randn(2, 3, 4)
print(x)
print(x.view(2, 12)) # Reshape to 2 rows, 12 columns
# Same as above. If one of the dimensions is -1, its size can be inferred
print(x.view(2, -1))
Explanation: Reshaping Tensors
~~~~~~~~~~~~~~~~~
Use the .view() method to reshape a tensor. This method receives heavy
use, because many neural network components expect their inputs to have
a certain shape. Often you will need to reshape before passing your data
to the component.
End of explanation
# Variables wrap tensor objects
x = autograd.Variable(torch.Tensor([1., 2., 3]), requires_grad=True)
# You can access the data with the .data attribute
print(x.data)
# You can also do all the same operations you did with tensors with Variables.
y = autograd.Variable(torch.Tensor([4., 5., 6]), requires_grad=True)
z = x + y
print(z.data)
# BUT z knows something extra.
print(z.grad_fn)
Explanation: Computation Graphs and Automatic Differentiation
The concept of a computation graph is essential to efficient deep
learning programming, because it allows you to not have to write the
back propagation gradients yourself. A computation graph is simply a
specification of how your data is combined to give you the output. Since
the graph totally specifies what parameters were involved with which
operations, it contains enough information to compute derivatives. This
probably sounds vague, so let's see what is going on using the
fundamental class of Pytorch: autograd.Variable.
First, think from a programmers perspective. What is stored in the
torch.Tensor objects we were creating above? Obviously the data and the
shape, and maybe a few other things. But when we added two tensors
together, we got an output tensor. All this output tensor knows is its
data and shape. It has no idea that it was the sum of two other tensors
(it could have been read in from a file, it could be the result of some
other operation, etc.)
The Variable class keeps track of how it was created. Let's see it in
action.
End of explanation
# Lets sum up all the entries in z
s = z.sum()
print(s)
print(s.grad_fn)
Explanation: So Variables know what created them. z knows that it wasn't read in from
a file, it wasn't the result of a multiplication or exponential or
whatever. And if you keep following z.grad_fn, you will find yourself at
x and y.
But how does that help us compute a gradient?
End of explanation
# calling .backward() on any variable will run backprop, starting from it.
s.backward()
print(x.grad)
Explanation: So now, what is the derivative of this sum with respect to the first
component of x? In math, we want
\begin{align}\frac{\partial s}{\partial x_0}\end{align}
Well, s knows that it was created as a sum of the tensor z. z knows
that it was the sum x + y. So
\begin{align}s = \overbrace{x_0 + y_0}^\text{$z_0$} + \overbrace{x_1 + y_1}^\text{$z_1$} + \overbrace{x_2 + y_2}^\text{$z_2$}\end{align}
And so s contains enough information to determine that the derivative
we want is 1!
Of course this glosses over the challenge of how to actually compute
that derivative. The point here is that s is carrying along enough
information that it is possible to compute it. In reality, the
developers of Pytorch program the sum() and + operations to know how to
compute their gradients, and run the back propagation algorithm. An
in-depth discussion of that algorithm is beyond the scope of this
tutorial.
Let's have Pytorch compute the gradient, and see that we were right:
(note if you run this block multiple times, the gradient will increment.
That is because Pytorch accumulates the gradient into the .grad
property, since for many models this is very convenient.)
End of explanation
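Because of that accumulation, it is common to reset the stored gradient before running another backward pass; a minimal sketch in the same old-style API (zeroing only, no second backward shown):
# zero out the accumulated gradient so the next backward pass starts from scratch
if x.grad is not None:
    x.grad.data.zero_()
print(x.grad)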
x = torch.randn((2, 2))
y = torch.randn((2, 2))
z = x + y # These are Tensor types, and backprop would not be possible
var_x = autograd.Variable(x)
var_y = autograd.Variable(y)
# var_z contains enough information to compute gradients, as we saw above
var_z = var_x + var_y
print(var_z.grad_fn)
var_z_data = var_z.data # Get the wrapped Tensor object out of var_z...
# Re-wrap the tensor in a new variable
new_var_z = autograd.Variable(var_z_data)
# ... does new_var_z have information to backprop to x and y?
# NO!
print(new_var_z.grad_fn)
# And how could it? We yanked the tensor out of var_z (that is
# what var_z.data is). This tensor doesn't know anything about
# how it was computed. We pass it into new_var_z, and this is all the
# information new_var_z gets. If var_z_data doesn't know how it was
# computed, theres no way new_var_z will.
# In essence, we have broken the variable away from its past history
Explanation: Understanding what is going on in the block below is crucial for being a
successful programmer in deep learning.
End of explanation |
13,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power estimate by projecting the covariance with MNE
We can apply the MNE inverse operator to a covariance matrix to obtain
an estimate of source power. This is computationally more efficient than first
estimating the source timecourses and then computing their power.
Step1: Compute empty-room covariance
First we compute an empty-room covariance, which captures noise from the
sensors and environment.
Step2: Epoch the data
Step3: Compute and plot covariances
In addition to the empty-room covariance above, we compute two additional
covariances
Step4: We can also look at the covariances using topomaps, here we just show the
baseline and data covariances, followed by the data covariance whitened
by the baseline covariance
Step5: Apply inverse operator to covariance
Finally, we can construct an inverse using the empty-room noise covariance
Step6: Project our data and baseline covariance to source space
Step7: And visualize power relative to the baseline | Python Code:
# Author: Denis A. Engemann <[email protected]>
# Luke Bloy <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse_cov
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
Explanation: Compute source power estimate by projecting the covariance with MNE
We can apply the MNE inverse operator to a covariance matrix to obtain
an estimate of source power. This is computationally more efficient than first
estimating the source timecourses and then computing their power.
End of explanation
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_empty_room.crop(0, 60)
raw_empty_room.info['bads'] = ['MEG 2443']
raw_empty_room.info['projs'] = raw.info['projs']
noise_cov = mne.compute_raw_covariance(
raw_empty_room, method=['empirical', 'shrunk'])
del raw_empty_room
Explanation: Compute empty-room covariance
First we compute an empty-room covariance, which captures noise from the
sensors and environment.
End of explanation
raw.info['bads'] = ['MEG 2443', 'EEG 053']
raw.load_data().filter(4, 12)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
tmin, tmax = -0.2, 0.5
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw.copy().filter(4, 12), events, event_id, tmin, tmax,
proj=True, picks=('meg', 'eog'), baseline=None,
reject=reject, preload=True)
del raw
Explanation: Epoch the data
End of explanation
base_cov = mne.compute_covariance(
epochs, tmin=-0.2, tmax=0, method=['shrunk', 'empirical'], rank=None,
verbose=True)
data_cov = mne.compute_covariance(
epochs, tmin=0., tmax=0.2, method=['shrunk', 'empirical'], rank=None,
verbose=True)
fig_noise_cov = mne.viz.plot_cov(noise_cov, epochs.info, show_svd=False)
fig_base_cov = mne.viz.plot_cov(base_cov, epochs.info, show_svd=False)
fig_data_cov = mne.viz.plot_cov(data_cov, epochs.info, show_svd=False)
Explanation: Compute and plot covariances
In addition to the empty-room covariance above, we compute two additional
covariances:
Baseline covariance, which captures signals not of interest in our
analysis (e.g., sensor noise, environmental noise, physiological
artifacts, and also resting-state-like brain activity / "noise").
Data covariance, which captures our activation of interest (in addition
to noise sources).
End of explanation
evoked = epochs.average().pick('meg')
evoked.drop_channels(evoked.info['bads'])
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
evoked_noise_cov = mne.EvokedArray(data=np.diag(noise_cov['data'])[:, None],
info=evoked.info)
evoked_data_cov = mne.EvokedArray(data=np.diag(data_cov['data'])[:, None],
info=evoked.info)
evoked_data_cov_white = mne.whiten_evoked(evoked_data_cov, noise_cov)
def plot_cov_diag_topomap(evoked, ch_type='grad'):
evoked.plot_topomap(
ch_type=ch_type, times=[0],
vmin=np.min, vmax=np.max, cmap='viridis',
units=dict(mag='None', grad='None'),
scalings=dict(mag=1, grad=1),
cbar_fmt=None)
plot_cov_diag_topomap(evoked_noise_cov, 'grad')
plot_cov_diag_topomap(evoked_data_cov, 'grad')
plot_cov_diag_topomap(evoked_data_cov_white, 'grad')
Explanation: We can also look at the covariances using topomaps, here we just show the
baseline and data covariances, followed by the data covariance whitened
by the baseline covariance:
End of explanation
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
Explanation: Apply inverse operator to covariance
Finally, we can construct an inverse using the empty-room noise covariance:
End of explanation
stc_data = apply_inverse_cov(data_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
stc_base = apply_inverse_cov(base_cov, evoked.info, inverse_operator,
nave=len(epochs), method='dSPM', verbose=True)
Explanation: Project our data and baseline covariance to source space:
End of explanation
stc_data /= stc_base
brain = stc_data.plot(subject='sample', subjects_dir=subjects_dir,
clim=dict(kind='percent', lims=(50, 90, 98)))
Explanation: And visualize power relative to the baseline:
End of explanation |
13,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The objective of this notebook is to show how to read and plot data from a mooring (time series).
Step1: Data reading
The data file is located in the datafiles directory.
Step2: As the platform is fixed, we will work on time series.<br/>
We will read the time and the sea level variables, as well as their respective units.
Step3: Variable units and dimension
Step4: Let's have a look at the dimension of the array
Step5: The first number corresponds to the time and the second to the depth.
Basic plot
For a time series, we simply use the plot function of matplotlib.<br>
The 1st line changes the font size to 16 (see matplotlib.RcParams).
Step6: As we plotted all the values, regardless of the quality flags, the result is not meaningful.
Select data according to Quality Flag
We have to load the corresponding variables
Step7: and we keep only the sea level values with a flag equal to 1.<br>
To do so, we use the masked arrays module.
Step8: Let's check the plot again
Step9: Still bad. It seems the quality flags don't allow us to filter out the data.<br>
Let's have a closer look at it
Step10: The values are either 1 (good data) or 9 (missing values), never a value indicating suspect or bad data.
A possible solution is to keep only sea level measurements with an absolute value lower than, let's say, 3 meters.
Step11: The units set for the time is maybe not the easiest to read.<br/>
However the netCDF4 module offers easy solutions to properly convert the time.
Converting time units
NetCDF4 provides the function num2date to convert the time vector into dates.<br/>
http://unidata.github.io/netcdf4-python/#section7
Step12: Finally, to avoid the overlap of the date ticklabels, we use the autofmt_xdate function.<br/>
Everything is in place to create the improved plot. | Python Code:
%matplotlib inline
import netCDF4
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib import colors
from mpl_toolkits.basemap import Basemap
Explanation: The objective of this notebook is to show how to read and plot data from a mooring (time series).
End of explanation
datadir = './datafiles/'
datafile = 'NO_TS_MO_HoekVanHollandTG.nc'
Explanation: Data reading
The data file is located in the datafiles directory.
End of explanation
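If you are not sure what the file contains, a quick check of the available variable names (using the path defined above) looks like this:
# list the variables stored in the netCDF file before deciding what to read
with netCDF4.Dataset(datadir + datafile) as nc:
    print(nc.variables.keys())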
with netCDF4.Dataset(datadir + datafile) as nc:
time0 = nc.variables['TIME'][:]
time0_units = nc.variables['TIME'].units
sealevel = nc.variables['SLEV'][:]
sealevel_units = nc.variables['SLEV'].units
Explanation: As the platform is fixed, we will work on time series.<br/>
We will read the time and the sea level variables, as well as their respective units.
End of explanation
print('Sea level units = %s' %sealevel_units)
Explanation: Variable units and dimension
End of explanation
print(sealevel.shape)
Explanation: Let's have a look at the dimension of the array
End of explanation
rcParams.update({'font.size': 16})
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(time0, sealevel, 'k-')
plt.xlabel(time0_units)
plt.ylabel(sealevel_units)
plt.show()
Explanation: The first number corresponds to the time and the second to the depth.
Basic plot
For a time series, we simply use the plot function of matplotlib.<br>
The 1st line changes the font size to 16 (see matplotlib.RcParams).
End of explanation
with netCDF4.Dataset(datadir + datafile) as nc:
sealevel_QC = nc.variables['SLEV_QC'][:]
Explanation: As we plotted all the values, regardless of the quality flags, the result is not meaningful.
Select data according to Quality Flag
We have to load the corresponding variables:
End of explanation
sealevel = np.ma.masked_where(sealevel_QC!=1, sealevel)
Explanation: and we keep only the sea level values with a flag equal to 1.<br>
To do so, we use the masked arrays module.
End of explanation
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(time0, sealevel, 'k-')
plt.xlabel(time0_units)
plt.ylabel(sealevel_units)
plt.show()
Explanation: Let's check the plot again:
End of explanation
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(time0, sealevel_QC, 'ko')
plt.xlabel(time0_units)
plt.ylabel('Quality flags')
plt.show()
Explanation: Still bad. It seems the quality flags don't allow us to filter out the data.<br>
Let's have a closer look at it:
End of explanation
sealevel = np.ma.masked_outside(sealevel, -3., 3.)
fig = plt.figure(figsize=(15,8))
ax = plt.subplot(111)
plt.plot(time0, sealevel, 'ko-', lw=0.2, ms=1)
plt.xlabel(time0_units)
plt.ylabel(sealevel_units)
plt.show()
Explanation: The values are either 1 (good data) or 9 (missing values), never a value indicating suspect or bad data.
A possible solution is to keep only sea level measurements with an absolute value lower than, let's say, 3 meters.
End of explanation
from netCDF4 import num2date
dates = num2date(time0, units=time0_units)
print(dates[:5])
Explanation: The units set for the time is maybe not the easiest to read.<br/>
However the netCDF4 module offers easy solutions to properly convert the time.
Converting time units
NetCDF4 provides the function num2date to convert the time vector into dates.<br/>
http://unidata.github.io/netcdf4-python/#section7
End of explanation
fig = plt.figure(figsize=(15,8))
ax = plt.subplot(111)
plt.plot(dates, sealevel, 'ko-', lw=0.2, ms=1)
plt.ylabel(sealevel_units)
plt.title('Sea level at station HoekVanHolland')
fig.autofmt_xdate()
plt.grid()
plt.savefig('NO_TS_MO_HoekVanHollandTG.png', dpi=300)
plt.show()
Explanation: Finally, to avoid the overlap of the date ticklabels, we use the autofmt_xdate function.<br/>
Everything is in place to create the improved plot.
End of explanation |
13,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CCSDT theory for a closed-shell reference
This notebook extends the spinorbital-CCSD notebook to compute CCSDT
Step1: Read calculation information (integrals, number of orbitals)
We start by reading information about the reference state, integrals, and denominators from the file sr-h6-sto-3g.npy. The variable H is a dictionary that holds the blocks of the Hamiltonian normal-ordered with respect to the Hartree–Fock determinant. invD similarly is a dictionary that stores the denominators $(\epsilon_i + \epsilon_j + \ldots - \epsilon_a - \epsilon_b - \ldots)^{-1}$.
Step2: Define orbital spaces and the Hamiltonian and cluster operators
Step3: In the following lines, we apply Wick's theorem to simplify the similarity-transformed Hamiltonian $\bar{H}$ computing all contributions ranging from operator rank 0 to 6 (triple substitutions).
Then we convert all the terms into many-body equations accumulated into the residual R.
Step4: Here we generate the CCSDT equations.
Step5: CCSDT algorithm
Here we code a simple loop in which we evaluate the energy and residuals of the CCSDT equations and update the amplitudes | Python Code:
import time
import wicked as w
import numpy as np
from examples_helpers import *
Explanation: CCSDT theory for a closed-shell reference
This notebook extends the spinorbital-CCSD notebook to compute CCSDT
End of explanation
molecule = "sr-h6-sto-3g"
with open(f"{molecule}.npy", "rb") as f:
Eref = np.load(f)
nocc, nvir = np.load(f)
H = np.load(f, allow_pickle=True).item()
invD = compute_inverse_denominators(H, nocc, nvir, 3)
Explanation: Read calculation information (integrals, number of orbitals)
We start by reading information about the reference state, integrals, and denominators from the file sr-h6-sto-3g.npy. The variable H is a dictionary that holds the blocks of the Hamiltonian normal-ordered with respect to the Hartree–Fock determinant. invD similarly is a dictionary that stores the denominators $(\epsilon_i + \epsilon_j + \ldots - \epsilon_a - \epsilon_b - \ldots)^{-1}$.
End of explanation
w.reset_space()
w.add_space("o", "fermion", "occupied", ["i", "j", "k", "l", "m", "n"])
w.add_space("v", "fermion", "unoccupied", ["a", "b", "c", "d", "e", "f"])
Top = w.op("T", ["v+ o", "v+ v+ o o", "v+ v+ v+ o o o"])
Hop = w.utils.gen_op("H", 1, "ov", "ov") + w.utils.gen_op("H", 2, "ov", "ov")
# the similarity-transformed Hamiltonian truncated to the four-nested commutator term
Hbar = w.bch_series(Hop, Top, 4)
Explanation: Define orbital spaces and the Hamiltonian and cluster operators
End of explanation
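For reference, the truncation used in the cell above corresponds to the Baker-Campbell-Hausdorff expansion of the similarity-transformed Hamiltonian kept through the four-nested commutator term:
$$\bar{H} = e^{-T} H e^{T} \approx H + [H,T] + \frac{1}{2!}[[H,T],T] + \frac{1}{3!}[[[H,T],T],T] + \frac{1}{4!}[[[[H,T],T],T],T].$$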
wt = w.WickTheorem()
expr = wt.contract(w.rational(1), Hbar, 0, 6)
mbeq = expr.to_manybody_equation("R")
Explanation: In the following lines, we apply Wick's theorem to simplify the similarity-transformed Hamiltonian $\bar{H}$ computing all contributions ranging from operator rank 0 to 6 (triple substitutions).
Then we convert all the terms into many-body equations accumulated into the residual R.
End of explanation
energy_eq = generate_equation(mbeq, 0, 0)
t1_eq = generate_equation(mbeq, 1, 1)
t2_eq = generate_equation(mbeq, 2, 2)
t3_eq = generate_equation(mbeq, 3, 3)
exec(energy_eq)
exec(t1_eq)
exec(t2_eq)
exec(t3_eq)
# show what do these functions look like
print(energy_eq)
Explanation: Here we generate the CCSDT equations.
End of explanation
Ecorr_ref = -0.108354659115 # from forte sparse implementation
T = {
"ov": np.zeros((nocc, nvir)),
"oovv": np.zeros((nocc, nocc, nvir, nvir)),
"ooovvv": np.zeros((nocc, nocc, nocc, nvir, nvir, nvir)),
}
header = "Iter. Energy [Eh] Corr. energy [Eh] |R| "
print("-" * len(header))
print(header)
print("-" * len(header))
start = time.perf_counter()
maxiter = 50
for i in range(maxiter):
# 1. compute energy and residuals
R = {}
Ecorr_w = evaluate_residual_0_0(H, T)
Etot_w = Eref + Ecorr_w
R["ov"] = evaluate_residual_1_1(H, T)
Roovv = evaluate_residual_2_2(H, T)
R["oovv"] = antisymmetrize_residual_2_2(Roovv, nocc, nvir)
Rooovvv = evaluate_residual_3_3(H, T)
R["ooovvv"] = antisymmetrize_residual_3_3(Rooovvv, nocc, nvir)
# 2. amplitude update
update_cc_amplitudes(T, R, invD, 3)
# 3. check for convergence
norm_R = np.sqrt(np.linalg.norm(R["ov"]) ** 2 + np.linalg.norm(R["oovv"]) ** 2)
print(f"{i:3d} {Etot_w:+.12f} {Ecorr_w:+.12f} {norm_R:e}")
if norm_R < 1.0e-8:
break
end = time.perf_counter()
t = end - start
print("-" * len(header))
print(f"CCSDT total energy {Etot_w:+.12f} [Eh]")
print(f"CCSDT correlation energy {Ecorr_w:+.12f} [Eh]")
print(f"Reference CCSDT correlation energy {Ecorr_ref:+.12f} [Eh]")
print(f"Error {Ecorr_w - Ecorr_ref:+.12e} [Eh]")
print(f"Timing {t:+.12e} [s]")
assert np.isclose(Ecorr_w, Ecorr_ref)
Explanation: CCSDT algorithm
Here we code a simple loop in which we evaluate the energy and residuals of the CCSDT equations and update the amplitudes
End of explanation |
13,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, I will show how to train the TensorFlow version of Sketch-RNN on a new dataset, and convert the weights of the TF model to a JSON format that is usable by Sketch-RNN-JS so that interactive web demos can be built.
For the purpose of this tutorial, I will be training on the dataset file called kanji.rdp25.npz which is available inside the repo https://github.com/hardmaru/sketch-rnn-datasets/ under the kanji subdirectory.
Step1: define the path of the model you want to load, and also the path of the dataset
Step2: Let's see if our model kind of works by sampling from it | Python Code:
# import the required libraries
import numpy as np
import time
import random
import codecs
import collections
import os
import math
import json
import tensorflow as tf
from six.moves import xrange
# libraries required for visualisation:
from IPython.display import SVG, display
import svgwrite # conda install -c omnia svgwrite=1.1.6
import PIL
from PIL import Image
import matplotlib.pyplot as plt
# set numpy output to something sensible
np.set_printoptions(precision=8, edgeitems=6, linewidth=200, suppress=True)
tf.logging.info("TensorFlow Version: %s", tf.__version__)
# import our command line tools
'''
from magenta.models.sketch_rnn.sketch_rnn_train import *
from magenta.models.sketch_rnn.model import *
from magenta.models.sketch_rnn.utils import *
from magenta.models.sketch_rnn.rnn import *
'''
# If code is modified to remove magenta dependencies:
from sketch_rnn_train import *
from model import *
from utils import *
from rnn import *
# little function that displays vector images and saves them to .svg
def draw_strokes(data, factor=0.2, svg_filename = '/tmp/sketch_rnn/svg/sample.svg'):
tf.gfile.MakeDirs(os.path.dirname(svg_filename))
min_x, max_x, min_y, max_y = get_bounds(data, factor)
dims = (50 + max_x - min_x, 50 + max_y - min_y)
dwg = svgwrite.Drawing(svg_filename, size=dims)
dwg.add(dwg.rect(insert=(0, 0), size=dims,fill='white'))
lift_pen = 1
abs_x = 25 - min_x
abs_y = 25 - min_y
p = "M%s,%s " % (abs_x, abs_y)
command = "m"
for i in xrange(len(data)):
if (lift_pen == 1):
command = "m"
elif (command != "l"):
command = "l"
else:
command = ""
x = float(data[i,0])/factor
y = float(data[i,1])/factor
lift_pen = data[i, 2]
p += command+str(x)+","+str(y)+" "
the_color = "black"
stroke_width = 1
dwg.add(dwg.path(p).stroke(the_color,stroke_width).fill("none"))
dwg.save()
display(SVG(dwg.tostring()))
# generate a 2D grid of many vector drawings
def make_grid_svg(s_list, grid_space=10.0, grid_space_x=16.0):
def get_start_and_end(x):
x = np.array(x)
x = x[:, 0:2]
x_start = x[0]
x_end = x.sum(axis=0)
x = x.cumsum(axis=0)
x_max = x.max(axis=0)
x_min = x.min(axis=0)
center_loc = (x_max+x_min)*0.5
return x_start-center_loc, x_end
x_pos = 0.0
y_pos = 0.0
result = [[x_pos, y_pos, 1]]
for sample in s_list:
s = sample[0]
grid_loc = sample[1]
grid_y = grid_loc[0]*grid_space+grid_space*0.5
grid_x = grid_loc[1]*grid_space_x+grid_space_x*0.5
start_loc, delta_pos = get_start_and_end(s)
loc_x = start_loc[0]
loc_y = start_loc[1]
new_x_pos = grid_x+loc_x
new_y_pos = grid_y+loc_y
result.append([new_x_pos-x_pos, new_y_pos-y_pos, 0])
result += s.tolist()
result[-1][2] = 1
x_pos = new_x_pos+delta_pos[0]
y_pos = new_y_pos+delta_pos[1]
return np.array(result)
Explanation: In this notebook, I will show how to train the TensorFlow version of Sketch-RNN on a new dataset, and convert the weights of the TF model to a JSON format that is usable by Sketch-RNN-JS so that interactive web demos can be built.
For the purpose of this tutorial, I will be training on the dataset file called kanji.rdp25.npz which is available inside the repo https://github.com/hardmaru/sketch-rnn-datasets/ under the kanji subdirectory. If you have a custom dataset, you will need to convert it over to an .npz file using the stroke-3 format as done for these datasets. Please study the README.md in Sketch-RNN to understand the file format that Sketch-RNN works with, described in the section called "Creating Your Own Dataset".
After cloning the TensorFlow repo for the Sketch-RNN model, below is the command that I ran to train the TensorFlow model:
python sketch_rnn_train.py --data_dir=kanji --hparams=data_set=['kanji.rdp25.npz'],num_steps=200000,conditional=0,dec_rnn_size=1024
I store the kanji.rdp25.npz inside the subdirectory called kanji but you can use whatever you want. The important thing to note here is that I'm training a decoder-only model by setting conditional=0 and I'm training a 1-layer LSTM with a hidden size of 1024, which should be good enough for most datasets on the order of 10K samples. Using 200K steps should take around half a day on a single P100 GPU, so it should cost around 10 US dollars at current Google Cloud Platform prices to train this model.
After the model is trained, I run the remaining commands in this IPython notebook to generate a file called custom.gen.json, which can be used in the Sketch-RNN-JS repo for interactive work:
https://github.com/tensorflow/magenta-demos/tree/master/sketch-rnn-js
This json format created will also work for future TensorFlow.js and ML5.js versions of sketch-RNN.
End of explanation
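For a custom dataset, a hedged sketch of what that stroke-3 conversion might look like is below; the .npz key names ('train', 'valid', 'test') and the absolute-to-offset conversion follow my reading of the Sketch-RNN README, and my_drawings is a hypothetical container of your own drawings given as (x, y, pen_lift) point sequences.
# a minimal, assumption-laden sketch of packing drawings into the stroke-3 format
def to_stroke3(points):
    # points: list of (x, y, pen_lift) triples with absolute coordinates
    points = np.array(points, dtype=np.float32)
    strokes = np.zeros_like(points)
    strokes[1:, 0:2] = points[1:, 0:2] - points[:-1, 0:2]  # store offsets, not absolute positions
    strokes[:, 2] = points[:, 2]                            # 1 when the pen is lifted after this point
    return strokes[1:]
# my_drawings is assumed to already be split into train/valid/test lists of point sequences:
# np.savez_compressed('my_dataset.npz',
#                     train=[to_stroke3(d) for d in my_drawings['train']],
#                     valid=[to_stroke3(d) for d in my_drawings['valid']],
#                     test=[to_stroke3(d) for d in my_drawings['test']])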
# you may need to change these to link to where your data and checkpoints are actually stored!
# in the default config, model_dir is likely to be /tmp/sketch_rnn/models
data_dir = './kanji'
model_dir = './log'
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)
[hps_model, eval_hps_model, sample_hps_model] = load_model(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
z = None
if z_input is not None:
z = [z_input]
sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len, temperature=temperature, z=z)
strokes = to_normal_strokes(sample_strokes)
if draw_mode:
draw_strokes(strokes, factor)
return strokes
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# randomly unconditionally generate 10 examples
N = 10
reconstructions = []
for i in range(N):
reconstructions.append([decode(temperature=0.5, draw_mode=False), [0, i]])
Explanation: define the path of the model you want to load, and also the path of the dataset
End of explanation
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
def get_model_params():
# get trainable params.
model_names = []
model_params = []
model_shapes = []
with sess.as_default():
t_vars = tf.trainable_variables()
for var in t_vars:
param_name = var.name
p = sess.run(var)
model_names.append(param_name)
params = p
model_params.append(params)
model_shapes.append(p.shape)
return model_params, model_shapes, model_names
def quantize_params(params, max_weight=10.0, factor=32767):
result = []
max_weight = np.abs(max_weight)
for p in params:
r = np.array(p)
r /= max_weight
r[r>1.0] = 1.0
r[r<-1.0] = -1.0
result.append(np.round(r*factor).flatten().astype(np.int).tolist())
return result
model_params, model_shapes, model_names = get_model_params()
model_names
# scale factor converts "model-coordinates" to "pixel coordinates" for your JS canvas demo later on.
# the larger it is, the larger your drawings (in pixel space) will be.
# I recommend setting this to 100.0 and iterating the value in the json file later on when you build the JS part.
scale_factor = 200.0
metainfo = {"mode":2,"version":6,"max_seq_len":train_set.max_seq_length,"name":"custom","scale_factor":scale_factor}
model_params_quantized = quantize_params(model_params)
model_blob = [metainfo, model_shapes, model_params_quantized]
with open("custom.gen.full.json", 'w') as outfile:
json.dump(model_blob, outfile, separators=(',', ':'))
Explanation: Let's see if our model kind of works by sampling from it:
End of explanation |
13,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo for NitroML on Cloud using KubeFlow
Step 1
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6 | Python Code:
import sys
# install kfp (https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.html)
!{sys.executable} -m pip install --user --upgrade -q kfp==1.0.0
!{sys.executable} -m pip install --user --upgrade -q kfp-server-api==1.0.0
# Download skaffold and set it executable.
# !curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
Explanation: Demo for NitroML on Cloud using KubeFlow
Step 1: Get kfp and skaffold.
End of explanation
# !{sys.executable} -m pip install --user --upgrade -q tensorflow_datasets==3.1.0
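# The explanation for this cell mentions a pip install command for TFX itself; the unpinned
# package spec below is my assumption, so adjust it to the tested version before uncommenting.
# !{sys.executable} -m pip install --user --upgrade -q tfx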
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__)); import tensorflow_datasets as tfds; print('TFDS version: {}'.format(tfds.__version__))"
!python3 -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
Explanation: Step 2: Check and install tfx (if necessary)
If TFX is not installed, uncomment the pip install command below. We have tested this example with tfx==0.23.0.dev
End of explanation
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
# Docker image name for the pipeline image
# IMAGE_NAME = 'nitroml_benchmark4'
IMAGE_NAME = 'nitroml_tfx_0130.dev'
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/' + IMAGE_NAME
Explanation: Step 3: Get the GCP project ID and create Docker image name
End of explanation
import sys, os
PROJECT_DIR=os.path.join(sys.path[0], '..')
%cd {PROJECT_DIR}
# This refers to the KFP cluster endpoint
# To find your endpoint, go to: Google_Project_Console -> AI_PLATFORMS -> PIPELINES.
# Then for the cluster you want to run your pipeline on, click on the "Open Pipeline Dashboard". Copy the url "*.googleusercontent.com". This is your ENDPOINT var.
from examples import config
from absl import logging
ENDPOINT = config.ENDPOINT
logging.info(f'Using {ENDPOINT} as the ENDPOINT')
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
else:
logging.info(f'Using {ENDPOINT} as the ENDPOINT')
PIPELINE_NAME=config.PIPELINE_NAME
PIPELINE_NAME
!pwd
Explanation: Step 4: Set KFP Cluster End point
End of explanation
_OPENML_API_KEY = 'OPENML_API_KEY'
os.environ[_OPENML_API_KEY] = 'b1514bb2761ecc4709ab26db50673a41'
os.getenv(_OPENML_API_KEY, '')
example = 'metalearning'
example = 'openml_cc18'
example = 'titanic'
if example == 'titanic':
pipeline_path = 'examples/titanic_benchmark.py'
pipeline_name = f'{PIPELINE_NAME}_titanic'
elif example == 'openml_cc18':
pipeline_path = 'examples/openml_cc18_benchmark.py'
pipeline_name = f'{PIPELINE_NAME}_openML_demo'
elif example == 'metalearning':
algorithm = 'nearest_neighbor'
# algorithm = 'majority_voting'
pipeline_path = 'examples/metalearning_benchmark.py'
pipeline_name = f'metalearning_{algorithm}'
TFX_IMAGE=config.TFX_IMAGE
os.environ['NITROML_RUNS_PER_BENCHMARK'] = '1'
!tfx pipeline create \
--pipeline-path={pipeline_path} \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE} \
--build-base-image={TFX_IMAGE} \
--engine='kubeflow'
Explanation: Step 5: Create the tfx pipeline
End of explanation
# If we update the pipeline
!tfx pipeline update \
--pipeline-path={pipeline_path} \
--endpoint={ENDPOINT} \
--engine='kubeflow'
print (pipeline_name)
!tfx run create --pipeline-name={pipeline_name} --endpoint={ENDPOINT} --engine='kubeflow'
# !kfp --endpoint {ENDPOINT} --namespace kubeflow diagnose_me
Explanation: Step 6: Run the created tfx pipeline
Step 7 (Optional): If the pipeline src is updated, we will have to update the pipeline at endpoint. The following block updates the pipeline and runs it.
End of explanation |
13,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Polara for custom evaluation scenarios
Polara is designed to automate the process of model prototyping and evaluation as much as possible. As a part of it,
<div class="alert alert-block alert-info">Polara follows a certain data management workflow, aimed at maintaining a consistent and predictable internal state.</div>
By default, it implements several conventional evaluation scenarios fully controlled by a set of configurational parameters. A user does not have to worry about anything beyond just setting the appropriate values of these parameters (a complete list of them can be obtained by calling the get_configuration method of a RecommenderData instance). As the result an input preferences data will be automatically pre-processed and converted into a convenient representation with an independent access to the training and evaluation parts.
This default behaviour, however, can be flexibly manipulated to run custom scenarios with externally provided evaluation data. This flexibility is achieved with the help of the special set_test_data method implemented in the RecommenderData class. This guide demonstrates how to use the configuration parameters in conjunction with this method to cover various customizations.
Prepare data
We will use Movielens-1M data for experimentation. The data will be divided into several parts
Step1: Downloading the data (alternatively you can provide a path to the local copy of the data as an argument to the function)
Step2: Sampling 5% of the preferences data to form the holdout dataset
Step3: Make 20% of all users unseen during the training phase
Step4: Scenario 0
Step5: We will use prepare_training_only method instead of the general prepare
Step6: This sets all the required configuration parameters and transforms the data accordingly.
Let's check that test data is empty,
Step7: and the whole input was used as a training part
Step8: Internally, the data was transformed to have a certain numeric representation, which Polara relies on
Step9: <div class="alert alert-block alert-info">The mapping between external and internal data representations is stored in the `data_model.index` attribute.</div>
The transformation can be disabled by setting the build_index attribute to False before data processing (not recommended).
You can easily build a recommendation model now
Step10: However, the recommendations cannot be generated, as there is no testing data. The following function call will raise an error
Step11: Mind the warm_start=False argument, which tells Polara to work only with known users. If some users from holdout are not a part of the training data, they will be filtered out and the corresponding notification message will be displayed (you can turn it off by setting data_model.verbose=False). In this example 1129 users were filtered out, as initially the holdout set contained both known and unknown users.
Note, that items not present in the training data are also filtered. This behavior can be changed by setting data_model.ensure_consistency=False (not recommended).
Step12: The recommendation model can now be evaluated
Step13: Scenario 2
Step14: You can provide this list by setting the test_users argument of the set_test_data method
Step15: Recommendations in that case will have a corresponding shape of number of test users x top-n (by default top-10).
Step16: As the holdout was not provided, its previous state is cleared from the data model
Step17: The order of test user id's in the recommendations matrix may not correspond to their order in the test_users list. The true order can be obtained via index attribute - the users are sorted in ascending order by their internal index. This order is used to construct the recommendations matrix.
Step18: Note, that there's no need to provide testset argument in the case of known users.
All the information about test users' preferences is assumed to be fully present in the training data and the following function call will intentionally raise an error
Step19: None of these users are present in the training
Step20: In order to generate recommendations for these users, we assign the dataset of their preferences as a testset (mind the warm_start argument value)
Step21: As we use an SVD-based model, there is no need for any modifications to generate recommendations - it uses the same analytical formula for both standard and warm-start regime
Step22: Note, that internally the unseen_data dataset is transformed
Step23: Scenario 4
Step24: As previously, all unrelated users and items are removed from the datasets and the remaining entities are reindexed. | Python Code:
import numpy as np
from polara.datasets.movielens import get_movielens_data
seed = 0
def random_state(seed=seed): # to fix random state in experiments
return np.random.RandomState(seed=seed)
Explanation: Using Polara for custom evaluation scenarios
Polara is designed to automate the process of model prototyping and evaluation as much as possible. As a part of it,
<div class="alert alert-block alert-info">Polara follows a certain data management workflow, aimed at maintaining a consistent and predictable internal state.</div>
By default, it implements several conventional evaluation scenarios fully controlled by a set of configurational parameters. A user does not have to worry about anything beyond just setting the appropriate values of these parameters (a complete list of them can be obtained by calling the get_configuration method of a RecommenderData instance). As the result an input preferences data will be automatically pre-processed and converted into a convenient representation with an independent access to the training and evaluation parts.
This default behaviour, however, can be flexibly manipulated to run custom scenarios with externally provided evaluation data. This flexibility is achieved with the help of the special set_test_data method implemented in the RecommenderData class. This guide demonstrates how to use the configuration parameters in conjunction with this method to cover various customizations.
Prepare data
We will use Movielens-1M data for experimentation. The data will be divided into several parts:
1. observations, used for training,
2. holdout, used for evaluating recommendations against the true preferences,
3. unseen data, used for warm-start scenarios, where test users with their preferences are not a part of training.
The last two datasets serve as an imitation of external data sources, which are not a part of initial data model.
Also note, that holdout dataset contains items of both known and unseen (warm-start) users.
End of explanation
data = get_movielens_data()
Explanation: Downloading the data (alternatively you can provide a path to the local copy of the data as an argument to the function):
End of explanation
data_sampled = data.sample(frac=0.95, random_state=random_state()).sort_values('userid')
holdout = data[~data.index.isin(data_sampled.index)]
Explanation: Sampling 5% of the preferences data to form the holdout dataset:
End of explanation
users, unseen_users = np.split(data_sampled.userid.drop_duplicates().values,
[int(0.8*data_sampled.userid.nunique()),])
observations = data_sampled.query('userid in @users')
Explanation: Make 20% of all users unseen during the training phase:
End of explanation
from polara.recommender.data import RecommenderData
from polara.recommender.models import SVDModel
data_model = RecommenderData(observations, 'userid', 'movieid', 'rating', seed=seed)
Explanation: Scenario 0: building a recommender model without any evaluation
This is the simplest case, which allows you to completely ignore the evaluation phase. This sets an initial configuration for all further evaluation scenarios.
End of explanation
data_model.prepare_training_only()
Explanation: We will use prepare_training_only method instead of the general prepare:
End of explanation
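As mentioned at the top, the full set of configuration parameters that controls this behaviour can be inspected at any point; a small sketch:
# view the current configuration that prepare_training_only has just set
data_model.get_configuration()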
data_model.test
Explanation: This sets all the required configuration parameters and transforms the data accordingly.
Let's check that test data is empty,
End of explanation
data_model.training.shape
observations.shape
Explanation: and the whole input was used as a training part:
End of explanation
data_model.training.head()
observations.head()
Explanation: Internally, the data was transformed to have a certain numeric representation, which Polara relies on:
End of explanation
svd = SVDModel(data_model)
svd.build()
Explanation: <div class="alert alert-block alert-info">The mapping between external and internal data representations is stored in the `data_model.index` attribute.</div>
The transformation can be disabled by setting the build_index attribute to False before data processing (not recommended).
You can easily build a recommendation model now:
End of explanation
data_model.set_test_data(holdout=holdout, warm_start=False)
Explanation: However, the recommendations cannot be generated, as there is no testing data. The following function call will raise an error:
svd.get_recommendations()
Scenario 1: evaluation with pre-specified holdout data for known users
In the competitions like Netflix Prize you may be provided with a dedicated evaluation dataset (a probe set), which contains hidden preferences information about known users. In terms of the Polara syntax, this is a holdout set.
You can assign this holdout set to the data model by calling the set_test_data method as follows:
End of explanation
data_model.test.holdout.userid.nunique()
Explanation: Mind the warm_start=False argument, which tells Polara to work only with known users. If some users from holdout are not a part of the training data, they will be filtered out and the corresponding notification message will be displayed (you can turn it off by setting data_model.verbose=False). In this example 1129 users were filtered out, as initially the holdout set contained both known and unknown users.
Note, that items not present in the training data are also filtered. This behavior can be changed by setting data_model.ensure_consistency=False (not recommended).
End of explanation
svd.switch_positive = 4 # treat ratings below 4 as negative feedback
svd.evaluate()
data_model.test.holdout.query('rating>=4').shape[0] # maximum number of possible true_positive hits
svd.evaluate('relevance')
Explanation: The recommendation model can now be evaluated:
End of explanation
test_users = random_state().choice(users, size=5, replace=False)
test_users
Explanation: Scenario 2: see recommendations for selected known users without evaluation
Polara also handles cases where you don't have a probe set and the task is simply to generate recommendations for a list of selected test users. Evaluation in that case is performed externally.
Let's randomly pick a few test users from all known users (i.e. those who are present in the training data):
End of explanation
data_model.set_test_data(test_users=test_users, warm_start=False)
Explanation: You can provide this list by setting the test_users argument of the set_test_data method:
End of explanation
svd.get_recommendations().shape
print((len(test_users), svd.topk))
Explanation: Recommendations in that case will have a corresponding shape of number of test users x top-n (by default top-10).
End of explanation
print(data_model.test.holdout)
Explanation: As the holdout was not provided, its previous state is cleared from the data model:
End of explanation
data_model.index.userid.training.query('old in @test_users')
test_users
Explanation: The order of test user ids in the recommendations matrix may not correspond to their order in the test_users list. The true order can be obtained via the index attribute: the users are sorted in ascending order by their internal index, and this order is used to construct the recommendations matrix.
End of explanation
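A hedged sketch of recovering that order; it assumes the index frame exposes the external ids in an 'old' column (as used in the query above) and the internal ids in a 'new' column, which is an assumption about Polara's internals rather than documented behavior:
recs = svd.get_recommendations()
idx = data_model.index.userid.training.query('old in @test_users')
ordered_external_ids = idx.sort_values('new')['old'].values  # 'new' column name is assumed
# row i of recs would then correspond to ordered_external_ids[i]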
unseen_data = data_sampled.query('userid in @unseen_users')
unseen_data.shape
assert unseen_data.userid.nunique() == len(unseen_users)
print(len(unseen_users))
Explanation: Note that there's no need to provide the testset argument in the case of known users.
All the information about test users' preferences is assumed to be fully present in the training data and the following function call will intentionally raise an error:
python
data_model.set_test_data(testset=some_test_data, warm_start=False)
If the testset contains new (unseen) information, you should consider the warm-start scenarios, described below.
Scenario 3: see recommendations for unseen users without evaluation
Let's form a dataset with new users and their preferences:
End of explanation
data_model.index.userid.training.old.isin(unseen_users).any()
Explanation: None of these users are present in the training:
End of explanation
data_model.set_test_data(testset=unseen_data, warm_start=True)
Explanation: In order to generate recommendations for these users, we assign the dataset of their preferences as a testset (mind the warm_start argument value):
End of explanation
svd.get_recommendations().shape
Explanation: As we use an SVD-based model, there is no need for any modification to generate recommendations: it uses the same analytical formula for both the standard and the warm-start regime:
End of explanation
data_model.test.testset.head()
data_model.index.userid.test.head() # test user index mapping, new index starts from 0
data_model.index.itemid.head() # item index mapping
unseen_data.head()
Explanation: Note that internally the unseen_data dataset is transformed: users are reindexed starting from 0 and items are reindexed based on the current item index of the training set.
End of explanation
data_model.set_test_data(testset=unseen_data, holdout=holdout, warm_start=True)
Explanation: Scenario 4: evaluate recommendations for unseen users with external holdout data
This is the most complete scenario. We generate recommendations based on the test users' preferences, encoded in the testset, and evaluate them against the holdout. You should use this setup only when Polara's built-in warm-start evaluation pipeline (turned on by data_model.warm_start=True) is not sufficient, e.g. when the preferences data is fixed and provided externally.
End of explanation
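For comparison, the built-in pipeline mentioned above would be enabled roughly like this; this is a sketch based on the attributes and methods named in the text, not a verified recipe:
data_model_ws = RecommenderData(data_sampled, 'userid', 'movieid', 'rating', seed=seed)
data_model_ws.warm_start = True  # let Polara split out warm-start test users itself
data_model_ws.prepare()  # the general preparation method mentioned earlier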
data_model.test.testset.head(10)
data_model.test.holdout.head(10)
svd.switch_positive = 4
svd.evaluate()
data_model.test.holdout.query('rating>=4').shape[0] # maximum number of possible true positives
svd.evaluate('relevance')
Explanation: As previously, all unrelated users and items are removed from the datasets and the remaining entities are reindexed.
End of explanation |
13,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4.2 Trying out scraping with third-party packages
Requests http
Step1: Fetching a web page with Requests
Step2: Making full use of Requests
connpass API reference https
Step3: httpbin(1)
Step4: Making full use of Beautiful Soup 4 | Python Code:
import requests
import bs4
Explanation: 4.2 Trying out scraping with third-party packages
Requests http://docs.python-requests.org/
Beautiful Soup http://www.crummy.com/software/BeautifulSoup/
End of explanation
# Fetch the gihyo.jp page data with Requests
import requests
r = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')
r.status_code # get the status code
r.text[:50] # get the first 50 characters
Explanation: Fetching a web page with Requests
End of explanation
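Beyond the status code, a couple of standard Requests checks are often useful here; these are stock Requests features rather than part of the original excerpt:
r = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')
r.raise_for_status()  # raise an HTTPError for 4xx/5xx responses
r.encoding  # encoding guessed from the headers, used to build r.text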
# Get a JSON API response
r = requests.get('https://connpass.com/api/v1/event/?keyword=python')
data = r.json() # get the JSON-decoded data
for event in data['events']:
print(event['title'])
# The main HTTP methods are all supported
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.post('http://httpbin.org/post', data=payload)
r = requests.put('http://httpbin.org/put', data=payload)
r = requests.delete('http://httpbin.org/delete')
r = requests.head('http://httpbin.org/get')
r = requests.options('http://httpbin.org/get')
# Handy ways to use Requests
r = requests.get('http://httpbin.org/get', params=payload)
r.url
r = requests.get('https://httpbin.org/basic-auth/user/passwd', auth=('user', 'passwd'))
r.status_code
Explanation: Making full use of Requests
connpass API reference https://connpass.com/about/api/
End of explanation
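Two more everyday options worth knowing, again standard Requests features rather than part of the excerpt:
headers = {'User-Agent': 'sample-scraper/0.1'}  # identify your client politely
r = requests.get('http://httpbin.org/get', params=payload, headers=headers, timeout=10)
r.json()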
# Fetch the 技評ねこ部通信 (Gihyo cat club news) page with Beautiful Soup 4
import requests
from bs4 import BeautifulSoup
r = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')
soup = BeautifulSoup(r.content, 'html.parser')
title = soup.title # get the title tag
type(title) # the object type is Tag
print(title) # check the contents of the title tag
print(title.text) # get the text inside the title
# Get a single entry of the 技評ねこ部通信 data
div = soup.find('div', class_='readingContent01')
li = div.find('li') # get the first li tag inside the div
print(li.a['href']) # get the href attribute of the a tag inside the li
print(li.a.text) # get the string inside the a tag
li.a.text.split(maxsplit=1) # split into date and title with the string's split()
# Get all entries of the 技評ねこ部通信 data
div = soup.find('div', class_='readingContent01')
for li in div.find_all('li'): # get every li tag inside the div
url = li.a['href']
date, text = li.a.text.split(maxsplit=1)
print('{},{},{}'.format(date, text, url))
Explanation: httpbin(1): HTTP Client Testing Service https://httpbin.org/
Parsing web pages with Beautiful Soup 4
End of explanation
# Get information about a tag
div = soup.find('div', class_='readingContent01')
type(div) # the data type is Tag
div.name
div['class']
div.attrs # get all attributes
# Various ways to search
a_tags = soup.find_all('a') # search by tag name
len(a_tags)
import re
for tag in soup.find_all(re.compile('^b')): # search with a regular expression
print(tag.name)
for tag in soup.find_all(['html', 'title']): # search with a list
print(tag.name)
# Specifying attributes with keyword arguments
tag = soup.find(id='categoryNavigation') # search by the id attribute
tag.name, tag.attrs
tags = soup.find_all(id=True) # find every tag that has an id attribute
len(tags)
div = soup.find('div', class_='readingContent01') # the class attribute is given as class_
div.attrs
div = soup.find('div', {'class': 'readingContent01'}) # a dict form also works
div.attrs
# Searching with CSS selectors
soup.select('title') # specify a tag name
tags = soup.select('body a') # a tags under the body tag
len(tags)
a_tags = soup.select('p > a') # a tags directly under a p tag
len(a_tags)
soup.select('body > a') # there are no a tags directly under the body tag
div = soup.select('.readingContent01') # specify a class
div = soup.select('div.readingContent01')
div = soup.select('#categoryNavigation') # specify an id
div = soup.select('div#categoryNavigation')
a_tag = soup.select_one('div > a') # returns the first a tag directly under a div
Explanation: Making full use of Beautiful Soup 4
End of explanation |
13,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transformation Matrices
Rotation and translation matrices let us transform a coordinate between different coordinate systems, but we can also see them as the transformation that each link applies to our position point.
Let's start with rotation
Step1: To begin, we will define our starting position as the coordinate
Step2: We append a $1$ at the end because homogeneous transformation matrices have dimension $\Re^{4\times 4}$ and otherwise the dimensions would not match.
Similarly, we will define the origin point, which will let us define a path for drawing our vector
Step3: Once we have these points, we gather the $x$, $y$ and $z$ elements of each one so we can plot them
Step4: Now we can plot in three dimensions as follows
Step5: Exercise
Draw a triangle with vertices at the points $p_1 = (1,1)$, $p_2 = (1,2)$ and $p_3 = (2,2)$, in the $xy$ plane (at height $z=0$).
Step6: We can define matrices as follows, and check that the result is what we would expect if we rotate the unit vector $\hat{i}$ by $30^o$, that is, $\frac{\tau}{12}$.
Step7: But we can do better: we can define a function that returns a rotation matrix, taking the rotation angle as its argument.
Step8: With this, we can use the function to create the rotation matrix we need
Step9: We then get the same result, with cleaner code.
Step10: And we use the same code to separate the $x$, $y$ and $z$ coordinates | Python Code:
from math import pi, sin, cos
from numpy import matrix
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
%matplotlib notebook
τ = 2*pi
Explanation: Transformation Matrices
Rotation and translation matrices let us transform a coordinate between different coordinate systems, but we can also see them as the transformation that each link applies to our position point.
Let's start with rotation:
$$
R_z =
\begin{pmatrix}
\cos{\theta} & -\sin{\theta} & 0 & 0 \\
\sin{\theta} & \cos{\theta} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$
The matrix we wrote will rotate our coordinate frame about the $z$ axis by an angle $\theta$.
By the way, the trigonometric functions take their angle argument in radians, so I will adopt the convention of calling $\tau = 2 \pi$ in order to define angles as fractions of a full turn.
End of explanation
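The introduction also mentions translation matrices; for reference, a sketch of the homogeneous translation counterpart of $R_z$ (this helper is not part of the original notebook excerpt):
def traslacion(dx, dy, dz):
    '''Homogeneous translation matrix by (dx, dy, dz).'''
    return matrix([[1, 0, 0, dx],
                   [0, 1, 0, dy],
                   [0, 0, 1, dz],
                   [0, 0, 0, 1]])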
pos_1 = matrix([[1],
[0],
[0],
[1]])
Explanation: To begin, we will define our starting position as the coordinate:
$$
P_1 =
\begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix}
$$
End of explanation
o = matrix([[0],
[0],
[0],
[1]])
Explanation: We append a $1$ at the end because homogeneous transformation matrices have dimension $\Re^{4\times 4}$ and otherwise the dimensions would not match.
Similarly, we will define the origin point, which will let us define a path for drawing our vector:
End of explanation
xs = [o.item(0), pos_1.item(0)]
ys = [o.item(1), pos_1.item(1)]
zs = [o.item(2), pos_1.item(2)]
Explanation: Once we have these points, we gather the $x$, $y$ and $z$ elements of each one so we can plot them:
End of explanation
# Define the overall figure where the plot is drawn
f1 = figure(figsize=(6, 6))
# Add the plotting area to our figure
a1 = f1.add_subplot(111, projection='3d')
# Use the data in xs, ys and zs to plot a line with a marker at each end
a1.plot(xs, ys, zs, "-o")
# Set the plot limits on each axis
a1.set_xlim(-0.1, 1.1)
a1.set_ylim(-0.1, 1.1)
a1.set_zlim(-0.1, 1.1);
Explanation: Now we can plot in three dimensions as follows:
End of explanation
fig = figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
# YOUR CODE HERE
raise NotImplementedError()
ax.set_xlim(-0.1, 2.1)
ax.set_ylim(-0.1, 2.1)
ax.set_zlim(-0.1, 1.1);
from numpy.testing import assert_allclose
ls = ax.get_lines()
assert_allclose(ls[0].get_xdata(), [0, 0.02154399], rtol=1e-05, atol=1e-05)
assert_allclose(ls[1].get_xdata(), [0.02154399, 0.05997832], rtol=1e-05, atol=1e-05)
assert_allclose(ls[2].get_xdata(), [0.05997832, 0], rtol=1e-05, atol=1e-05)
Explanation: Exercise
Draw a triangle with vertices at the points $p_1 = (1,1)$, $p_2 = (1,2)$ and $p_3 = (2,2)$, in the $xy$ plane (at height $z=0$).
End of explanation
rot_1 = matrix([[cos(τ/12), -sin(τ/12), 0, 0],
[sin(τ/12), cos(τ/12), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
rot_1*pos_1
Explanation: We can define matrices as follows, and check that the result is what we would expect if we rotate the unit vector $\hat{i}$ by $30^o$, that is, $\frac{\tau}{12}$.
End of explanation
def rotacion_z(θ):
'''
This function returns a transformation matrix with numeric values,
corresponding to a rotation about the z axis.
'''
A = matrix([[cos(θ), -sin(θ), 0, 0],
[sin(θ), cos(θ), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
return A
Explanation: But we can do better: we can define a function that returns a rotation matrix, taking the rotation angle as its argument.
End of explanation
rot_2 = rotacion_z(τ/12)
rot_2
Explanation: With this, we can use the function to create the rotation matrix we need:
End of explanation
p = rot_2*pos_1
p
Explanation: We then get the same result, with cleaner code.
End of explanation
xs = [o.item(0), p.item(0)]
ys = [o.item(1), p.item(1)]
zs = [o.item(2), p.item(2)]
f2 = figure(figsize=(8, 8))
a2 = f2.add_subplot(111, projection='3d')
a2.plot(xs, ys, zs, "-o")
a2.set_xlim(-0.1, 1.1)
a2.set_ylim(-0.1, 1.1)
a2.set_zlim(-0.1, 1.1);
Explanation: And we use the same code to separate the $x$, $y$ and $z$ coordinates:
End of explanation |
13,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
II. Numpy and Scipy
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there is a guide to Numpy for Matlab users just for you.)
Making vectors and matrices
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command
Step1: You can pass in a second argument to array that gives the numeric type. There are a number of types listed here that your matrix can be. Some of these are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
Step2: To build matrices, you can either use the array command with lists of lists
Step3: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as vectors with one row), using the zeros command
Step4: The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors
Step5: or column vectors
Step6: There's also an identity command that behaves as you'd expect
Step7: as well as a ones command.
Linspace, matrix functions, and plotting
The linspace command makes a linear array of points from a starting to an ending value.
Step8: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
Step9: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
Step10: In conjunction with matplotlib, this is a nice way to plot things
Step11: Matrix operations
Matrix objects act sensibly when multiplied by scalars
Step12: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
Step13: Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication
Step14: To get matrix multiplication, you need the dot command
Step15: dot can also do dot products (duh!)
Step16: as well as matrix-vector products.
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object
Step17: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
Step18: We'll find this useful later on.
Matrix Solvers
You can solve systems of linear equations using the solve command
Step19: There are a number of routines to compute eigenvalues and eigenvectors
eigvals returns the eigenvalues of a matrix
eigvalsh returns the eigenvalues of a Hermitian matrix
eig returns the eigenvalues and eigenvectors of a matrix
eigh returns the eigenvalues and eigenvectors of a Hermitian matrix.
Step20: Example
Step21: Let's see whether this works for our sin example from above
Step22: Pretty close!
One-Dimensional Harmonic Oscillator using Finite Difference
Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.
We want to solve the time-independent Schrodinger equation
$$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x)$$
for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields
$$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)dx + \int\psi(x)V(x)\psi(x)dx = E$$
We will again use the finite difference approximation. The finite difference formula for the second derivative is
$$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$ where $h = x_{i+1}-x_i$ is the grid spacing.
We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the diagonals, and 1's along the diagonals above and below the main diagonal.
The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get
Step23: We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
But the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space betwen the points as being "filled" by the wave function, but the finite difference basis only has the solution being at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the spacing, and this gives properly normalized functions.
Special Functions
The solutions to the Harmonic Oscillator are supposed to be Hermite polynomials. The Wikipedia page has the HO states given by
$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}
\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}
\exp\left(-\frac{m\omega x^2}{2\hbar}\right)
H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)$$
Let's see whether they look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite Polynomials are in Numpy
Step24: Let's compare the first function to our solution.
Step25: The agreement is almost exact.
We can use the subplot command to put multiple comparisons in different panes on a single plot
Step26: Other than phase errors (which I've corrected with a little hack
Step28: As well as Jacobi, Laguerre, Hermite polynomials, Hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions Page.
Least squares fitting
Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following
Step29: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
Step30: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
Step31: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
$$ y = Ae^{-ax} $$
$$ \log(y) = \log(A) - ax$$
Thus, if we fit the log of the data versus x, we should get a straight line with slope $-a$, and an intercept that gives the constant $A$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)
Step32: Let's see whether this curve fits the data.
Step34: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data
Step35: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
Step36: Now fit to it using curve_fit
Step37: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Monte Carlo, random numbers, and computing $\pi$
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1
Step38: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution
Step39: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
Step40: One of the first programs I ever wrote was a program to compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example
Step41: The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this
Step42: If you're interested in a great method, check out Ramanujan's method. This converges so fast you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module, if you're interested.
Numerical Integration
Integration can be hard, and sometimes it's easier to work out a definite integral using an approximation. For example, suppose we wanted to figure out the integral
Step43: Scipy has a numerical integration routine quad (since sometimes numerical integration is called quadrature), that we can use for this
Step44: There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
Fast Fourier Transform and Signal Processing
Very often we want to use FFT techniques to help obtain the signal from noisy data. Scipy has several different options for this. | Python Code:
%pylab inline
import numpy as np
# Import pylab to provide scientific Python libraries (NumPy, SciPy, Matplotlib)
%pylab --no-import-all
#import pylab as pl
# import the Image display module
from IPython.display import Image
import math
np.array([1,2,3,4,5,6])
Explanation: II. Numpy and Scipy
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there is a guide to Numpy for Matlab users just for you.)
Making vectors and matrices
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command:
End of explanation
np.array([1,2,3,4,5,6],'d')
np.array([1,2,3,4,5,6],'D')
np.array([1,2,3,4,5,6],'i')
Explanation: You can pass in a second argument to array that gives the numeric type. There are a number of types listed here that your matrix can be. Some of these are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
End of explanation
np.array([[0,1],[1,0]],'d')
Explanation: To build matrices, you can either use the array command with lists of lists:
End of explanation
np.zeros((3,3),'d')
Explanation: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as vectors with one row), using the zeros command:
End of explanation
np.zeros(3,'d')
np.zeros((1,3),'d')
Explanation: The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors:
End of explanation
np.zeros((3,1),'d')
Explanation: or column vectors:
End of explanation
np.identity(4,'d')
Explanation: There's also an identity command that behaves as you'd expect:
End of explanation
np.linspace(0,1)
Explanation: as well as a ones command.
Linspace, matrix functions, and plotting
The linspace command makes a linear array of points from a starting to an ending value.
End of explanation
np.linspace(0,1,11)
Explanation: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
End of explanation
x = np.linspace(0,2*np.pi)
np.sin(x)
Explanation: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
End of explanation
plot(x,np.sin(x))
Explanation: In conjunction with matplotlib, this is a nice way to plot things:
End of explanation
0.125*identity(3,'d')
Explanation: Matrix operations
Matrix objects act sensibly when multiplied by scalars:
End of explanation
identity(2,'d') + array([[1,1],[1,2]])
Explanation: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
End of explanation
identity(2)*ones((2,2))
Explanation: Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:
End of explanation
dot(identity(2),ones((2,2)))
Explanation: To get matrix multiplication, you need the dot command:
End of explanation
v = array([3,4],'d')
sqrt(dot(v,v))
Explanation: dot can also do dot products (duh!):
End of explanation
m = array([[1,2],[3,4]])
m.T
Explanation: as well as matrix-vector products.
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:
End of explanation
diag([1,2,3,4,5])
Explanation: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
End of explanation
A = array([[1,1,1],[0,2,5],[2,5,-1]])
b = array([6,-4,27])
solve(A,b)
Explanation: We'll find this useful later on.
Matrix Solvers
You can solve systems of linear equations using the solve command:
End of explanation
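A quick sanity check on the solution, added here for illustration using plain NumPy calls that are already in the notebook namespace:
x = solve(A, b)
allclose(dot(A, x), b)  # should be True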
A = array([[13,-4],[-4,7]],'d')
eigvalsh(A)
eigh(A)
Explanation: There are a number of routines to compute eigenvalues and eigenvectors
eigvals returns the eigenvalues of a matrix
eigvalsh returns the eigenvalues of a Hermitian matrix
eig returns the eigenvalues and eigenvectors of a matrix
eigh returns the eigenvalues and eigenvectors of a Hermitian matrix.
End of explanation
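For a non-symmetric matrix you would reach for eig instead; a small illustrative check (not from the original notebook) that each eigenpair satisfies A v = lambda v:
evals, evecs = eig(A)
for k in range(len(evals)):
    print(allclose(dot(A, evecs[:, k]), evals[k]*evecs[:, k]))  # True for every eigenpair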
def nderiv(y,x):
"Finite difference derivative of the function f"
n = len(y)
d = zeros(n,'d') # assume double
# Use centered differences for the interior points, one-sided differences for the ends
for i in range(1,n-1):
d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
d[0] = (y[1]-y[0])/(x[1]-x[0])
d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
return d
Explanation: Example: Finite Differences
Now that we have these tools in our toolbox, we can start to do some cool stuff with it. Many of the equations we want to solve in Physics involve differential equations. We want to be able to compute the derivative of functions:
$$ y' = \frac{y(x+h)-y(x)}{h} $$
by discretizing the function $y(x)$ on an evenly spaced set of points $x_0, x_1, \dots, x_n$, yielding $y_0, y_1, \dots, y_n$. Using the discretization, we can approximate the derivative by
$$ y_i' \approx \frac{y_{i+1}-y_{i-1}}{x_{i+1}-x_{i-1}} $$
We can write a derivative function in Python via
End of explanation
x = linspace(0,2*pi)
dsin = nderiv(sin(x),x)
plot(x,dsin,label='numerical')
plot(x,cos(x),label='analytical')
title("Comparison of numerical and analytical derivatives of sin(x)")
legend()
Explanation: Let's see whether this works for our sin example from above:
End of explanation
def Laplacian(x):
h = x[1]-x[0] # assume uniformly spaced points
n = len(x)
M = -2*identity(n,'d')
for i in range(1,n):
M[i,i-1] = M[i-1,i] = 1
return M/h**2
x = linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*(ohm**2)*(x**2)
H = T + diag(V)
E,U = eigh(H)
h = x[1]-x[0]
# Plot the Harmonic potential
plot(x,V,color='k')
for i in range(4):
# For each of the first few solutions, plot the energy level:
axhline(y=E[i],color='k',ls=":")
# as well as the eigenfunction, displaced by the energy level so they don't
# all pile up on each other:
plot(x,-U[:,i]/sqrt(h)+E[i])
title("Eigenfunctions of the Quantum Harmonic Oscillator")
xlabel("Displacement (bohr)")
ylabel("Energy (hartree)")
Explanation: Pretty close!
One-Dimensional Harmonic Oscillator using Finite Difference
Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.
We want to solve the time-independent Schrodinger equation
$$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x)$$
for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields
$$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)dx + \int\psi(x)V(x)\psi(x)dx = E$$
We will again use the finite difference approximation. The finite difference formula for the second derivative is
$$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$ where $h = x_{i+1}-x_i$ is the grid spacing.
We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the diagonals, and 1's along the diagonals above and below the main diagonal.
The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get:
End of explanation
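Before looking at the wave functions, a quick check of the spectrum: with $\hbar=m=1$ the analytic levels are $E_n=\omega(n+\tfrac{1}{2})$, so the lowest numerical eigenvalues should roughly match (this check is added for illustration and assumes E and ohm from the harmonic-oscillator cell above):
for n in range(4):
    print(n, E[n], ohm*(n + 0.5))  # numerical vs analytic energy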
from numpy.polynomial.hermite import Hermite
def ho_evec(x,n,m,ohm):
vec = [0]*9
vec[n] = 1
Hn = Hermite(vec)
return (1/sqrt(2**n*math.factorial(n)))*pow(m*ohm/pi,0.25)*exp(-0.5*m*ohm*x**2)*Hn(x*sqrt(m*ohm))
Explanation: We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
But the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space betwen the points as being "filled" by the wave function, but the finite difference basis only has the solution being at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the spacing, and this gives properly normalized functions.
Special Functions
The solutions to the Harmonic Oscillator are supposed to be Hermite polynomials. The Wikipedia page has the HO states given by
$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}
\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}
\exp\left(-\frac{m\omega x^2}{2\hbar}\right)
H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)$$
Let's see whether they look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite Polynomials are in Numpy:
End of explanation
plot(x,ho_evec(x,0,1,1),label="Analytic")
plot(x,-U[:,0]/sqrt(h),label="Numeric")
xlabel('x (bohr)')
ylabel(r'$\psi(x)$')
title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
legend()
Explanation: Let's compare the first function to our solution.
End of explanation
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
subplot(2,3,i+1)
plot(x,ho_evec(x,i,1,1),label="Analytic")
plot(x,phase_correction[i]*U[:,i]/sqrt(h),label="Numeric")
Explanation: The agreement is almost exact.
We can use the subplot command to put multiple comparisons in different panes on a single plot:
End of explanation
from scipy.special import airy,jn,eval_chebyt,eval_legendre
subplot(2,2,1)
x = linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plot(x,Ai)
plot(x,Aip)
plot(x,Bi)
plot(x,Bip)
title("Airy functions")
subplot(2,2,2)
x = linspace(0,10)
for i in range(4):
plot(x,jn(i,x))
title("Bessel functions")
subplot(2,2,3)
x = linspace(-1,1)
for i in range(6):
plot(x,eval_chebyt(i,x))
title("Chebyshev polynomials of the first kind")
subplot(2,2,4)
x = linspace(-1,1)
for i in range(6):
plot(x,eval_legendre(i,x))
title("Legendre polynomials")
Explanation: Other than phase errors (which I've corrected with a little hack: can you find it?), the agreement is pretty good, although it gets worse the higher in energy we get, in part because we used only 50 points.
The Scipy module has many more special functions:
End of explanation
raw_data = \
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18
Explanation: As well as Jacobi, Laguerre, Hermite polynomials, Hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions Page.
Least squares fitting
Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:
End of explanation
data = []
for line in raw_data.splitlines():
words = line.split(',')
data.append([float(w) for w in words])
data = array(data)
title("Raw Data")
xlabel("Distance")
plot(data[:,0],data[:,1],'bo')
Explanation: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
End of explanation
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
Explanation: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
End of explanation
params = polyfit(data[:,0],log(data[:,1]),1)
a = params[0]
A = exp(params[1])
Explanation: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
$$ y = Ae^{-ax} $$
$$ \log(y) = \log(A) - ax$$
Thus, if we fit the log of the data versus x, we should get a straight line with slope $-a$, and an intercept that gives the constant $A$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)
End of explanation
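As an aside, NumPy can also package the fitted coefficients into a callable polynomial; this is standard NumPy, added here for illustration:
p = poly1d(params)  # params from the polyfit call above, highest order first
p(10), exp(p(10))   # fitted log-value and value at x=10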
x = linspace(1,45)
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
semilogy(x,A*exp(a*x),'b-')
Explanation: Let's see whether this curve fits the data.
End of explanation
gauss_data = \
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646
data = []
for line in gauss_data.splitlines():
words = line.split(',')
data.append([float(w) for w in words])
data = array(data)
plot(data[:,0],data[:,1],'bo')
Explanation: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:
End of explanation
def gauss(x,A,a): return A*exp(a*x**2)
Explanation: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
End of explanation
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = linspace(-1,1)
plot(data[:,0],data[:,1],'bo')
A,a = params
plot(x,gauss(x,A,a),'b-')
Explanation: Now fit to it using curve_fit:
End of explanation
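The second return value holds the covariance matrix of the fitted parameters, so one-sigma uncertainties come from the square root of its diagonal; a short illustrative check using the variables from the cell above:
perr = sqrt(diag(conv))  # conv is the covariance matrix returned by curve_fit
print(params, perr)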
from random import random
rands = []
for i in range(100):
rands.append(random())
plot(rands)
Explanation: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Monte Carlo, random numbers, and computing $\pi$
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1:
End of explanation
from random import gauss
grands = []
for i in range(100):
grands.append(gauss(0,1))
plot(grands)
Explanation: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
End of explanation
plot(rand(100))
Explanation: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
End of explanation
npts = 5000
xs = 2*rand(npts)-1
ys = 2*rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
figsize(6,6) # make the figure square
title("Approximation to pi = %f" % (4*ninside/float(npts)))
plot(xs[r<1],ys[r<1],'b.')
plot(xs[r>1],ys[r>1],'r.')
figsize(8,6) # change the figsize back to 4x3 for the rest of the notebook
Explanation: One of the first programs I ever wrote was a program to compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example:
End of explanation
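The list-comprehension style alluded to in the text would look something like this; it is equivalent to the vectorized filtering above, just more explicit:
inside = [(x, y) for x, y in zip(xs, ys) if x*x + y*y < 1]
print(4*len(inside)/float(npts))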
n = 100
total = 0
for k in range(n):
total += pow(-1,k)/(2*k+1.0)
print(4*total)
Explanation: The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this: I find some of these Numpy tricks a little too cute to remember them, and I'm more likely to use a list comprehension (see below) to filter the points I want, since I can remember that.
As methods of computing $\pi$ go, this is among the worst. A much better method is to use Leibniz's expansion of arctan(1):
$$\frac{\pi}{4} = \sum_k \frac{(-1)^k}{2k+1}$$
End of explanation
from numpy import sqrt
def f(x): return exp(-x)
x = linspace(0,10)
plot(x,exp(-x))
Explanation: If you're interested in a great method, check out Ramanujan's method. This converges so fast you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module, if you're interested.
Numerical Integration
Integration can be hard, and sometimes it's easier to work out a definite integral using an approximation. For example, suppose we wanted to figure out the integral:
$$\int_0^\infty\exp(-x)dx=1$$
End of explanation
from scipy.integrate import quad
quad(f,0,inf)
Explanation: Scipy has a numerical integration routine quad (since sometimes numerical integration is called quadrature), that we can use for this:
End of explanation
from scipy.fftpack import fft,fftfreq
npts = 4000
nplot = npts//10
t = linspace(0,120,npts)
def acc(t): return 10*sin(2*pi*2.0*t) + 5*sin(2*pi*8.0*t) + 2*rand(npts)
signal = acc(t)
FFT = abs(fft(signal))
freqs = fftfreq(npts, t[1]-t[0])
subplot(211)
plot(t[:nplot], signal[:nplot])
subplot(212)
plot(freqs,20*log10(FFT),',')
show()
Explanation: There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
Fast Fourier Transform and Signal Processing
Very often we want to use FFT techniques to help obtain the signal from noisy data. Scipy has several different options for this.
End of explanation |
13,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An example packet from ESEO.
Step1: Trim the data between the 0x7e7e flags. We skip Reed-Solomon decoding, since we are confident that there are no bit errors. We remove the 16 Reed-Solomon parity check bytes.
Step2: Reverse the bytes in the data.
Step3: Perform bit de-stuffing.
Step4: That is interesting: we have found a run of ones longer than 5 inside the data. We wouldn't expect such a run, because of bit stuffing. This happens at around byte 149 (bit 1193) of a total of 161 data bytes.
Step5: Perform G3RUH descrambling.
Step6: Perform NRZ-I decoding.
Step7: The long sequences of zeros are a good indicator, but still we don't have the expected 8A A6 8A 9E 40 40 60 92 AE 68 88 AA 98 61 AX.25 header.
Reflect the bytes again.
Step8: The CRC is CRC16_CCITT_ZERO following the notation of this online calculator.
Data from SITAEL
Step9: They have CC64 rather than ec 64 near the end. Why?
We drop the Reed-Solomon parity check bytes (last 16 bytes).
Step10: Here we have 18 3d instead of 1839 near the end.
Step11: For some reason we needed to do something odd with the start of the descrambler (changing the byte alignment) and to reflect the bytes again to get something like their example. | Python Code:
bits = np.array([0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 
1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],\
dtype = 'uint8')
hexprint(bits)
Explanation: An example packet from ESEO.
End of explanation
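hexprint is used throughout but is not defined in this excerpt; a minimal stand-in consistent with how it is used (printing the bit vector as hex bytes, most-significant bit first) could be:
import numpy as np  # also needed by the cells below

def hexprint(bit_array):
    # Assumed behaviour: pack the bits MSB-first and print them as hex bytes
    byts = np.packbits(bit_array[:bit_array.size // 8 * 8])
    print(' '.join('{:02x}'.format(b) for b in byts))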
data = bits[16:16+161*8-16*8]
hexprint(data)
Explanation: Trim the data between the 0x7e7e flags. We skip Reed-Solomon decoding, since we are confident that there are no bit errors. We remove the 16 Reed-Solomon parity check bytes.
End of explanation
def reflect_bytes(x):
return np.fliplr(x[:x.size//8*8].reshape((-1,8))).ravel()
data_rev = reflect_bytes(data)
hexprint(data_rev)
Explanation: Reverse the bytes in the data.
End of explanation
def destuff(x):
y = list()
run = 0
for i, bit in enumerate(x):
if run == 5:
if bit == 1:
print('Long run found at bit', i)
break
else:
run = 0
elif bit == 0:
run = 0
y.append(bit)
elif bit == 1:
run += 1
y.append(bit)
return np.array(y, dtype = 'uint8')
data_rev_destuff = destuff(data_rev)
Explanation: Perform bit de-stuffing.
End of explanation
1193/8
hexprint(data_rev_destuff)
Explanation: That is interesting: we have found a run of ones longer than 5 inside the data. We wouldn't expect such a run, because of bit stuffing. This happens at around byte 149 (bit 1193) of a total of 161 data bytes.
End of explanation
def descramble(x):
y = np.concatenate((np.zeros(17, dtype='uint8'), x))
z = y[:-17] ^ y[5:-12] ^ y[17:]
return z
def nrzi_decode(x):
return x ^ np.concatenate((np.zeros(1, dtype = 'uint8'), x[:-1])) ^ 1
data_descrambled = descramble(data_rev_destuff)
hexprint(data_descrambled)
Explanation: Perform G3RUH descrambling.
End of explanation
data_nrz = nrzi_decode(data_descrambled)
hexprint(data_nrz)
Explanation: Perform NRZ-I decoding.
End of explanation
data_nrz_rev = reflect_bytes(data_nrz)
hexprint(data_nrz_rev)
Explanation: The long sequences of zeros are a good indicator, but still we don't have the expected 8A A6 8A 9E 40 40 60 92 AE 68 88 AA 98 61 AX.25 header.
Reflect the bytes again.
End of explanation
raw_input_bits = np.array([1,1,0,1,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,0,1,0,1,0,0,0,1,0,1,1,1,0,1,0,1,0,1,1,0,0,0,1,0,0,1,0,0,0,1,1,1,0,0,0,1,0,1,1,0,0,0,1,0,0,1,0,1,1,0,0,0,0,1,1,1,0,0,0,1,0,1,0,0,0,1,1,1,1,1,0,1,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0,0,0,1,1,1,0,0,1,1,0,0,1,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,1,0,1,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,1,0,0,0,0,0,0,0,1,1,0,0,1,0,1,0,0,0,0,0,1,0,1,1,0,0,0,0,1,0,1,0,0,0,0,1,0,1,0,0,1,0,0,0,1,1,0,0,0,1,1,1,0,0,0,1,0,1,1,1,0,1,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,1,1,1,0,1,0,0,0,1,0,1,1,0,1,0,1,0,1,0,0,0,1,1,0,1,1,0,0,1,1,0,0,0,1,0,1,0,0,0,0,1,1,0,1,0,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,1,0,1,0,0,0,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,0,0,1,1,0,1,1,1,0,0,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,1,0,0,1,1,0,0,1,0,0,0,1,0,0,0,0,1,1,1,0,1,1,0,0,1,1,0,0,1,0,1,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,1,1,0,1,0,0,0,0,0,1,1,1,0,0,0,1,1,1,0,0,1,0,0,1,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,1,0,1,0,0,0,1,0,0,1,1,0,0,0,1,1,1,1,0,1,0,0,1,0,1,1,1,1,0,0,0,0,0,1,0,0,1,1,0,0,1,0,1,1,0,1,1,1,0,1,0,0,1,0,0,0,0,1,1,0,0,0,1,1,0,0,1,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,1,0,0,1,1,1,0,0,1,0,0,1,0,0,0,0,1,0,0,0,0,1,1,0,1,1,0,0,1,1,0,1,0,1,0,0,0,0,0,1,0,1,0,0,1,1,1,0,1,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,0,1,1,0,1,1,0,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,1,0,0,0,1,0,0,1,1,1,0,0,1,0,1,0,1,1,0,1,0,0,1,1,1,1,0,1,1,1,0,1,0,1,0,1,1,1,1,1,0,1,0,1,1,0,1,0,0,0,1,1,0,0,0,0,1,0,0,1,1,0,0,0,1,0,0,1,1,1,0,1,1,1,1,0,1,0,1,0,1,1,1,0,1,1,0,0,0,1,0,1,1,1,0,1,1,0,0,1,1,0,1,0,0,0,1,0,0,1,0,0,1,0,1,1,0,1,1,0,0,0,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,0,0,1,1,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,0,0,0,1,0,1,0,1,0,1,1,1,1,1,0,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,0,0,0,0,1,0,1,1,1,1,0,0,1,0,1,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,1,0,0,1,0,1,1,0,1,0,1,1,1,1,0,0,1,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,1,0,1,1,1,1,0,1,0,1,1,1,0,1,1,0,1,1,0,1,1,1,0,1,1,0,0,1,0,1,0,0,0,1,1,1,1,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,1,1,0,1,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,0,1,1,0,1,0,1,1,1,0,1,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,0,0,1,0,0,0,0,0,1,0,1,0,0,1,1,0,1,1,1,1,1,0,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,0,0,1,1,0,1,1,0,1,0,0,1,1,0,1,1,1,0,0,1,1,1,0,0,1,1,1,0,1,0,0,0,0,1,1,1,0,0,1,0,1,1,0,0,0,1,1,1,1,1,0,0,0,1,0,0,0,0,1,1,0,0,0,0,1,1,1,1,0,0,1,1,1,0,0,0,0,0,1,1,1,0,0,1,0,0,1,1,1,1,1,1,1,1,0,0,1,0,1,1,1,0,0,1,0,1,1,0,0,0,0,1,1,0,0,0,1,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,1,1,0,1,0,1,0,1,1,1,1,0,0,0,0,0,0,0,1,0,0,1,0,0,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,1,1,0,0,0,1,1,0,1,1,0,0,1,0,0,0,1,0,1,0,1,1,0,0,0,0,1,1,0,1,0,0,0,1,0,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,1,1,0,1,0,1,0,0,1,1,1,1,0,1,1,1,0,0], dtype = 'uint8')
hexprint(raw_input_bits)
raw_input_bits.size//8
raw_input_stream = 'D3F8 0EA2 EAC4 8E2C 4B0E 28FA 9020 C733 A658 CB52 EF01 9416 1429 18E2 E87C 773E E8B5 46CC 50D0 CDCA 337D 1B83 3726 443B 329D AC34'
#input_stream = np.unpackbits(np.frombuffer(binascii.a2b_hex(raw_input_stream.replace(' ','')), dtype='uint8'))
input_stream = raw_input_bits
input_stream_reflected = reflect_bytes(input_stream)
hexprint(input_stream_reflected)
Explanation: The CRC is CRC16_CCITT_ZERO following the notation of this online calculator.
Data from SITAEL
End of explanation
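For reference, CRC16_CCITT_ZERO is the CCITT polynomial 0x1021 with a zero initial value; a straightforward bitwise sketch of it, written here for illustration rather than taken from the original analysis:
def crc16_ccitt_zero(byts):
    crc = 0x0000
    for b in byts:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc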
input_stream_reflected_no_rs = input_stream_reflected[:-16*8]
input_stream_reflected_no_rs.size//8
after_unstuffing = destuff(input_stream_reflected_no_rs)
hexprint(after_unstuffing)
Explanation: They have CC64 rather than ec 64 near the end. Why?
We drop the Reed-Solomon parity check bytes (last 16 bytes).
End of explanation
after_unstuffing.size/8
after_derandom = nrzi_decode(descramble(after_unstuffing))
hexprint(reflect_bytes(after_derandom))
after_derandom.size
Explanation: Here we have 18 3d instead of 1839 near the end.
End of explanation
reflect_bytes(after_derandom).size/8
Explanation: For some reason we needed to do something odd with the start of the descrambler (changing the byte alignment) and to reflect the bytes again to get something like their example.
End of explanation |
13,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Core vision
Basic image opening/processing functionality
Helpers
Step1: Image.n_px
Image.n_px (property)
Number of pixels in image
Step2: Image.shape
Image.shape (property)
Image (height,width) tuple (NB
Step3: Image.aspect
Image.aspect (property)
Aspect ratio of the image, i.e. width/height
Step4: Basic types
This section regroups the basic types used in vision with the transform that create objects of those types.
Step5: Images
Step6: Segmentation masks
Step7: Points
Step8: Points are expected to come as an array/tensor of shape (n,2) or as a list of lists with two elements. Unless you change the defaults in PointScaler (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).
Note
Step9: Bounding boxes
Step10: Test get_annotations on the coco_tiny dataset against both image filenames and bounding box labels.
Step11: Bounding boxes are expected to come as tuple with an array/tensor of shape (n,4) or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in PointScaler (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention
Step12: Basic Transforms
Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the list in the tfms you pass to a TfmdDS or a Datasource) or tuple transforms (in the tuple_tfms you pass to a TfmdDS or a Datasource). The safest way that will work across applications is to always use them as tuple_tfms. For instance, if you have points or bounding boxes as targets and use Resize as a single-item transform, when you get to PointScaler (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
Step13: Any data augmentation transform that runs on PIL Images must be run before this transform.
Step14: Let's confirm we can pipeline this with PILImage.create.
Step15: To work with data augmentation, and in particular the grid_sample method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass do_scale=False. We also need to make sure they are following our convention of points being x,y coordinates, so pass along y_first=True if you have your data in an y,x format to add a flip.
Warning
Step16: To work with data augmentation, and in particular the grid_sample method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass do_scale=False. We also need to make sure they are following our convention of points being x,y coordinates, so pass along y_first=True if you have your data in an y,x format to add a flip.
Note
Step17: Export - | Python Code:
#|export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#|export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#|export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
Explanation: Core vision
Basic image opening/processing functionality
Helpers
End of explanation
test_eq(im.n_px, 30*20)
#|export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
Explanation: Image.n_px
Image.n_px (property)
Number of pixels in image
End of explanation
test_eq(im.shape, (20,30))
#|export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
Explanation: Image.shape
Image.shape (property)
Image (height,width) tuple (NB: opposite order of Image.size(), same order as numpy array and pytorch tensor)
End of explanation
test_eq(im.aspect, 30/20)
#|export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#|export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#|export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#|export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
Explanation: Image.aspect
Image.aspect (property)
Aspect ratio of the image, i.e. width/height
End of explanation
#|export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#|export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#|export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#|export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#|export
class PILImage(PILBase): pass
#|export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#|hide
test_eq(np.array(im), np.array(tpil))
#|export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#|export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
Explanation: Basic types
This section regroups the basic types used in vision, together with the transforms that create objects of those types.
End of explanation
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
Explanation: Images
End of explanation
#|export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
Explanation: Segmentation masks
End of explanation
#|export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#|export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
Explanation: Points
End of explanation
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
Explanation: Points are expected to come as an array/tensor of shape (n,2) or as a list of lists with two elements. Unless you change the defaults in PointScaler (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).
Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like F.grid_sample.
End of explanation
#|export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
Explanation: Bounding boxes
End of explanation
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
#|export
from matplotlib import patches, patheffects
#|export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
#|export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
Explanation: Test get_annotations on the coco_tiny dataset against both image filenames and bounding box labels.
End of explanation
#|export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
Explanation: Bounding boxes are expected to come as tuple with an array/tensor of shape (n,4) or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in PointScaler (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.
Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
End of explanation
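To make the convention concrete, here is a tiny sketch with toy coordinates (not from the original notebook), reusing the TensorBBox and LabeledBBox types defined above:
toy_bbox = TensorBBox.create([[10., 20., 40., 60.]])   # x1,y1 = top-left corner, x2,y2 = bottom-right corner
test_eq(toy_bbox.shape, (1, 4))
toy_lbl_bbox = LabeledBBox(toy_bbox, ['person'])        # pair the boxes with their labels
test_eq(toy_lbl_bbox.lbl, ['person'])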
#|export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#|export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
Explanation: Basic Transforms
Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the tfms you pass to a TfmdDS or a Datasource) or tuple transforms (in the tuple_tfms you pass to a TfmdDS or a Datasource). The safest way that will work across applications is to always use them as tuple_tfms. For instance, if you have points or bounding boxes as targets and use Resize as a single-item transform, when you get to PointScaler (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
End of explanation
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
Explanation: Any data augmentation transform that runs on PIL Images must be run before this transform.
End of explanation
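A minimal sketch of the ordering this implies (our own example; it reuses the reshape patch defined earlier as a stand-in for any PIL-level augmentation, and re-wraps the result since PIL's resize returns a plain Image):
pil_im = PILImage.create(TEST_IMAGE)
pil_im_small = PILImage(pil_im.reshape(64, 96))  # PIL-level transform first, while it is still a PILImage
tensor_im = ToTensor()(pil_im_small)             # then convert; result is a TensorImage in c*h*w order
test_eq(type(tensor_im), TensorImage)
test_eq(tensor_im.shape, (3, 64, 96))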
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
Explanation: Let's confirm we can pipeline this with PILImage.create.
End of explanation
#|export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#|export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(None), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
Explanation: To work with data augmentation, and in particular the grid_sample method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass do_scale=False. We also need to make sure they are following our convention of points being x,y coordinates, so pass along y_first=True if you have your data in an y,x format to add a flip.
Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
End of explanation
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#|export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#|export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#|export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#|hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
Explanation: To work with data augmentation, and in particular the grid_sample method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass do_scale=False. We also need to make sure they are following our convention of points being x,y coordinates, so pass along y_first=True if you have your data in an y,x format to add a flip.
Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with sz=....
End of explanation
#|hide
from nbdev.export import notebook2script
notebook2script()
Explanation: Export -
End of explanation |
13,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Eager Execution
Step2: Now you can run TensorFlow operations and the results will return immediately
Step3: Enabling eager execution changes how TensorFlow operations behave—now they
immediately evaluate and return their values to Python. tf.Tensor objects
reference concrete values instead of symbolic handles to nodes in a computational
graph. Since there isn't a computational graph to build and run later in a
session, it's easy to inspect results using print() or a debugger. Evaluating,
printing, and checking tensor values does not break the flow for computing
gradients.
Eager execution works nicely with NumPy. NumPy
operations accept tf.Tensor arguments. TensorFlow
math operations convert
Python objects and NumPy arrays to tf.Tensor objects. The
tf.Tensor.numpy method returns the object's value as a NumPy ndarray.
Step4: Dynamic control flow
A major benefit of eager execution is that all the functionality of the host
language is available while your model is executing. So, for example,
it is easy to write fizzbuzz
Step5: This has conditionals that depend on tensor values and it prints these values
at runtime.
Build a model
Many machine learning models are represented by composing layers. When
using TensorFlow with eager execution you can either write your own layers or
use a layer provided in the tf.keras.layers package.
While you can use any Python object to represent a layer,
TensorFlow has tf.keras.layers.Layer as a convenient base class. Inherit from
it to implement your own layer
Step6: Use tf.keras.layers.Dense layer instead of MySimpleLayer above as it has
a superset of its functionality (it can also add a bias).
When composing layers into models you can use tf.keras.Sequential to represent
models which are a linear stack of layers. It is easy to use for basic models
Step8: Alternatively, organize models in classes by inheriting from tf.keras.Model.
This is a container for layers that is a layer itself, allowing tf.keras.Model
objects to contain other tf.keras.Model objects.
Step9: It's not required to set an input shape for the tf.keras.Model class since
the parameters are set the first time input is passed to the layer.
tf.keras.layers classes create and contain their own model variables that
are tied to the lifetime of their layer objects. To share layer variables, share
their objects.
Eager training
Computing gradients
Automatic differentiation
is useful for implementing machine learning algorithms such as
backpropagation for training
neural networks. During eager execution, use tf.GradientTape to trace
operations for computing gradients later.
tf.GradientTape is an opt-in feature to provide maximal performance when
not tracing. Since different operations can occur during each call, all
forward-pass operations get recorded to a "tape". To compute the gradient, play
the tape backwards and then discard. A particular tf.GradientTape can only
compute one gradient; subsequent calls throw a runtime error.
Step10: Train a model
The following example creates a multi-layer model that classifies the standard
MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
trainable graphs in an eager execution environment.
Step11: Even without training, call the model and inspect the output in eager execution
Step12: While keras models have a builtin training loop (using the fit method), sometimes you need more customization. Here's an example, of a training loop implemented with eager
Step13: Variables and optimizers
tf.Variable objects store mutable tf.Tensor values accessed during
training to make automatic differentiation easier. The parameters of a model can
be encapsulated in classes as variables.
Better encapsulate model parameters by using tf.Variable with
tf.GradientTape. For example, the automatic differentiation example above
can be rewritten
Step14: Use objects for state during eager execution
With graph execution, program state (such as the variables) is stored in global
collections and their lifetime is managed by the tf.Session object. In
contrast, during eager execution the lifetime of state objects is determined by
the lifetime of their corresponding Python object.
Variables are objects
During eager execution, variables persist until the last reference to the object
is removed, and is then deleted.
Step15: Object-based saving
tf.train.Checkpoint can save and restore tf.Variables to and from
checkpoints
Step16: To save and load models, tf.train.Checkpoint stores the internal state of objects,
without requiring hidden variables. To record the state of a model,
an optimizer, and a global step, pass them to a tf.train.Checkpoint
Step17: Object-oriented metrics
tf.metrics are stored as objects. Update a metric by passing the new data to
the callable, and retrieve the result using the tf.metrics.result method,
for example
Step18: Summaries and TensorBoard
TensorBoard is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
TensorFlow 1 summaries only work in eager mode, but can be run with the compat.v2 module
Step19: Advanced automatic differentiation topics
Dynamic models
tf.GradientTape can also be used in dynamic models. This example for a
backtracking line search
algorithm looks like normal NumPy code, except there are gradients and is
differentiable, despite the complex control flow
Step20: Custom gradients
Custom gradients are an easy way to override gradients in eager and graph
execution. Within the forward function, define the gradient with respect to the
inputs, outputs, or intermediate results. For example, here's an easy way to clip
the norm of the gradients in the backward pass
Step21: Custom gradients are commonly used to provide a numerically stable gradient for a
sequence of operations
Step22: Here, the log1pexp function can be analytically simplified with a custom
gradient. The implementation below reuses the value for tf.exp(x) that is
computed during the forward pass—making it more efficient by eliminating
redundant calculations
Step23: Performance
Computation is automatically offloaded to GPUs during eager execution. If you
want control over where a computation runs you can enclose it in a
tf.device('/gpu
Step24: A tf.Tensor object can be copied to a different device to execute its
operations
Step25: Benchmarks
For compute-heavy models, such as
ResNet50
training on a GPU, eager execution performance is comparable to graph execution.
But this gap grows larger for models with less computation and there is work to
be done for optimizing hot code paths for models with lots of small operations.
Work with graphs
While eager execution makes development and debugging more interactive,
TensorFlow graph execution has advantages for distributed training, performance
optimizations, and production deployment. However, writing graph code can feel
different than writing regular Python code and more difficult to debug.
For building and training graph-constructed models, the Python program first
builds a graph representing the computation, then invokes Session.run to send
the graph for execution on the C++-based runtime. This provides | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow.compat.v1 as tf
Explanation: Eager Execution
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
TensorFlow's eager execution is an imperative programming environment that
evaluates operations immediately, without building graphs: operations return
concrete values instead of constructing a computational graph to run later. This
makes it easy to get started with TensorFlow and debug models, and it
reduces boilerplate as well. To follow along with this guide, run the code
samples below in an interactive python interpreter.
Eager execution is a flexible machine learning platform for research and
experimentation, providing:
An intuitive interface—Structure your code naturally and use Python data
structures. Quickly iterate on small models and small data.
Easier debugging—Call ops directly to inspect running models and test
changes. Use standard Python debugging tools for immediate error reporting.
Natural control flow—Use Python control flow instead of graph control
flow, simplifying the specification of dynamic models.
Eager execution supports most TensorFlow operations and GPU acceleration. For a
collection of examples running in eager execution, see:
tensorflow/contrib/eager/python/examples.
Note: Some models may experience increased overhead with eager execution
enabled. Performance improvements are ongoing, but please
file a bug if you find a
problem and share your benchmarks.
Setup and basic usage
To start eager execution, add `` to the beginning of
the program or console session. Do not add this operation to other modules that
the program calls.
End of explanation
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
Explanation: Now you can run TensorFlow operations and the results will return immediately:
End of explanation
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
Explanation: Enabling eager execution changes how TensorFlow operations behave—now they
immediately evaluate and return their values to Python. tf.Tensor objects
reference concrete values instead of symbolic handles to nodes in a computational
graph. Since there isn't a computational graph to build and run later in a
session, it's easy to inspect results using print() or a debugger. Evaluating,
printing, and checking tensor values does not break the flow for computing
gradients.
Eager execution works nicely with NumPy. NumPy
operations accept tf.Tensor arguments. TensorFlow
math operations convert
Python objects and NumPy arrays to tf.Tensor objects. The
tf.Tensor.numpy method returns the object's value as a NumPy ndarray.
End of explanation
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
Explanation: Dynamic control flow
A major benefit of eager execution is that all the functionality of the host
language is available while your model is executing. So, for example,
it is easy to write fizzbuzz:
End of explanation
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
Explanation: This has conditionals that depend on tensor values and it prints these values
at runtime.
Build a model
Many machine learning models are represented by composing layers. When
using TensorFlow with eager execution you can either write your own layers or
use a layer provided in the tf.keras.layers package.
While you can use any Python object to represent a layer,
TensorFlow has tf.keras.layers.Layer as a convenient base class. Inherit from
it to implement your own layer:
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
Explanation: Use tf.keras.layers.Dense layer instead of MySimpleLayer above as it has
a superset of its functionality (it can also add a bias).
When composing layers into models you can use tf.keras.Sequential to represent
models which are a linear stack of layers. It is easy to use for basic models:
End of explanation
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
Run the model.
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
Explanation: Alternatively, organize models in classes by inheriting from tf.keras.Model.
This is a container for layers that is a layer itself, allowing tf.keras.Model
objects to contain other tf.keras.Model objects.
End of explanation
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
Explanation: It's not required to set an input shape for the tf.keras.Model class since
the parameters are set the first time input is passed to the layer.
tf.keras.layers classes create and contain their own model variables that
are tied to the lifetime of their layer objects. To share layer variables, share
their objects.
Eager training
Computing gradients
Automatic differentiation
is useful for implementing machine learning algorithms such as
backpropagation for training
neural networks. During eager execution, use tf.GradientTape to trace
operations for computing gradients later.
tf.GradientTape is an opt-in feature to provide maximal performance when
not tracing. Since different operations can occur during each call, all
forward-pass operations get recorded to a "tape". To compute the gradient, play
the tape backwards and then discard. A particular tf.GradientTape can only
compute one gradient; subsequent calls throw a runtime error.
End of explanation
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
Explanation: Train a model
The following example creates a multi-layer model that classifies the standard
MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
trainable graphs in an eager execution environment.
End of explanation
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
Explanation: Even without training, call the model and inspect the output in eager execution:
End of explanation
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
Explanation: While keras models have a built-in training loop (using the fit method), sometimes you need more customization. Here's an example of a training loop implemented with eager:
End of explanation
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
Explanation: Variables and optimizers
tf.Variable objects store mutable tf.Tensor values accessed during
training to make automatic differentiation easier. The parameters of a model can
be encapsulated in classes as variables.
Better encapsulate model parameters by using tf.Variable with
tf.GradientTape. For example, the automatic differentiation example above
can be rewritten:
End of explanation
if tf.config.list_physical_devices('GPU'):
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
Explanation: Use objects for state during eager execution
With graph execution, program state (such as the variables) is stored in global
collections and their lifetime is managed by the tf.Session object. In
contrast, during eager execution the lifetime of state objects is determined by
the lifetime of their corresponding Python object.
Variables are objects
During eager execution, variables persist until the last reference to the object
is removed, and is then deleted.
End of explanation
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
Explanation: Object-based saving
tf.train.Checkpoint can save and restore tf.Variables to and from
checkpoints:
End of explanation
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
Explanation: To save and load models, tf.train.Checkpoint stores the internal state of objects,
without requiring hidden variables. To record the state of a model,
an optimizer, and a global step, pass them to a tf.train.Checkpoint:
End of explanation
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
Explanation: Object-oriented metrics
tf.metrics are stored as objects. Update a metric by passing the new data to
the callable, and retrieve the result using the tf.metrics.result method,
for example:
End of explanation
from tensorflow.compat.v2 import summary
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# your model code goes here
summary.scalar('global_step', global_step, step=global_step)
!ls tb/
Explanation: Summaries and TensorBoard
TensorBoard is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
TensorFlow 1 summaries only work in eager mode, but can be run with the compat.v2 module:
End of explanation
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
Explanation: Advanced automatic differentiation topics
Dynamic models
tf.GradientTape can also be used in dynamic models. This example for a
backtracking line search
algorithm looks like normal NumPy code, except there are gradients and is
differentiable, despite the complex control flow:
End of explanation
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
Explanation: Custom gradients
Custom gradients are an easy way to override gradients in eager and graph
execution. Within the forward function, define the gradient with respect to the
inputs, outputs, or intermediate results. For example, here's an easy way to clip
the norm of the gradients in the backward pass:
End of explanation
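A minimal usage sketch (our own, reusing the clip_gradient_by_norm defined above) showing that the forward value is unchanged while the backward gradient is clipped:
x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = clip_gradient_by_norm(x, 0.5)   # identity in the forward pass
    z = y * y                           # z = 9.0
print(tape.gradient(z, x))              # unclipped gradient would be 2*x = 6.0; here it is clipped to norm 0.5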
def log1pexp(x):
return tf.log(1 + tf.exp(x))
class Grad(object):
def __init__(self, f):
self.f = f
def __call__(self, x):
x = tf.convert_to_tensor(x)
with tf.GradientTape() as tape:
tape.watch(x)
r = self.f(x)
g = tape.gradient(r, x)
return g
grad_log1pexp = Grad(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.).numpy()
Explanation: Custom gradients are commonly used to provide a numerically stable gradient for a
sequence of operations:
End of explanation
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = Grad(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.).numpy()
Explanation: Here, the log1pexp function can be analytically simplified with a custom
gradient. The implementation below reuses the value for tf.exp(x) that is
computed during the forward pass—making it more efficient by eliminating
redundant calculations:
End of explanation
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tf.config.list_physical_devices('GPU'):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
Explanation: Performance
Computation is automatically offloaded to GPUs during eager execution. If you
want control over where a computation runs you can enclose it in a
tf.device('/gpu:0') block (or the CPU equivalent):
End of explanation
if tf.config.list_physical_devices('GPU'):
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
Explanation: A tf.Tensor object can be copied to a different device to execute its
operations:
End of explanation
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tf.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
Explanation: Benchmarks
For compute-heavy models, such as
ResNet50
training on a GPU, eager execution performance is comparable to graph execution.
But this gap grows larger for models with less computation and there is work to
be done for optimizing hot code paths for models with lots of small operations.
Work with graphs
While eager execution makes development and debugging more interactive,
TensorFlow graph execution has advantages for distributed training, performance
optimizations, and production deployment. However, writing graph code can feel
different than writing regular Python code and more difficult to debug.
For building and training graph-constructed models, the Python program first
builds a graph representing the computation, then invokes Session.run to send
the graph for execution on the C++-based runtime. This provides:
Automatic differentiation using static autodiff.
Simple deployment to a platform independent server.
Graph-based optimizations (common subexpression elimination, constant-folding, etc.).
Compilation and kernel fusion.
Automatic distribution and replication (placing nodes on the distributed system).
Deploying code written for eager execution is more difficult: either generate a
graph from the model, or run the Python runtime and code directly on the server.
Write compatible code
The same code written for eager execution will also build a graph during graph
execution. Do this by simply running the same code in a new Python session where
eager execution is not enabled.
Most TensorFlow operations work during eager execution, but there are some things
to keep in mind:
Use tf.data for input processing instead of queues. It's faster and easier.
Use object-oriented layer APIs—like tf.keras.layers and
tf.keras.Model—since they have explicit storage for variables.
Most model code works the same during eager and graph execution, but there are
exceptions. (For example, dynamic models using Python control flow to change the
computation based on inputs.)
Once eager execution is enabled with tf.enable_eager_execution, it
cannot be turned off. Start a new Python session to return to graph execution.
It's best to write code for both eager execution and graph execution. This
gives you eager's interactive experimentation and debuggability with the
distributed performance benefits of graph execution.
Write, debug, and iterate in eager execution, then import the model graph for
production deployment. Use tf.train.Checkpoint to save and restore model
variables, this allows movement between eager and graph execution environments.
See the examples in:
tensorflow/contrib/eager/python/examples.
Use eager execution in a graph environment
Selectively enable eager execution in a TensorFlow graph environment using
tfe.py_func. This is used when `` has not
been called.
End of explanation |
13,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MRI intensity normalization
Intensity normalization of multi-channel MRI images using the method proposed by Nyul et al. 2000.
In the original paper, the authors suggest a method where a set of standard histogram landmarks are learned from a set of MRI images. These landmarks are then used to equalize the histograms of the images to normalize. In both learning and transformation, the histograms are used to find the intensity landmarks.
Ackwoledgements
Step1: Then, the train the standard histogram. By default, the parameters are set as follows
Step2: Save the standard histogram
Step3: Apply intensity normalization to new images | Python Code:
import os
import numpy as np
import nibabel as nib
from nyul import nyul_train_standard_scale
DATA_DIR = 'data_examples'
T1_name = 'T1.nii.gz'
MASK_name = 'brainmask.nii.gz'
# generate training scans
train_scans = [os.path.join(DATA_DIR, folder, T1_name)
for folder in os.listdir(DATA_DIR)]
mask_scans = [os.path.join(DATA_DIR, folder, MASK_name)
for folder in os.listdir(DATA_DIR)]
Explanation: MRI intensity normalization
Intensity normalization of multi-channel MRI images using the method proposed by Nyul et al. 2000.
In the original paper, the authors suggest a method where a set of standard histogram landmarks is learned from a set of MRI images. These landmarks are then used to equalize the histograms of the images to be normalized. In both learning and transformation, the histograms are used to find the intensity landmarks.
Acknowledgements:
The Python implementation is based on the awesome implementation available here Reinhold et al. 2019.
For this particular tutorial, we use a very small subset from the Calgary-Campinas dataset.
Train the standard histogram:
To train the standard histogram, we just have to create a list of the input images to process. Optionally, we can also provide the brainmasks:
End of explanation
standard_scale, perc = nyul_train_standard_scale(train_scans, mask_scans)
Explanation: Then, train the standard histogram. By default, the parameters are set as follows:
* Minimum percentile to consider i_min=1
* Maximum percentile to consider i_max=99
* Minimum percentile on the standard histogram i_s_min=1
* Maximum percentile on the standard histogram i_s_max=100
* Middle percentile lower bound l_percentile=10
* Middle percentile upper bound u_percentile=90
* number of deciles step=10
End of explanation
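A sketch only (it assumes that the defaults listed above are exposed as keyword arguments of nyul_train_standard_scale, which is not shown in this snippet), e.g. learning the landmarks on quartiles instead of deciles:
standard_scale_q, perc_q = nyul_train_standard_scale(
    train_scans, mask_scans,
    i_min=1, i_max=99, i_s_min=1, i_s_max=100,
    l_percentile=10, u_percentile=90, step=25)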
standard_path = 'histograms/standard_test.npy'
np.save(standard_path, [standard_scale, perc])
Explanation: Save the standard histogram:
Save the histogram to apply it to unseen images afterwards:
End of explanation
from nyul import nyul_apply_standard_scale
import matplotlib.pyplot as plt
image_1 = nib.load(train_scans[0]).get_data()
mask_1 = nib.load(mask_scans[0]).get_data()
image_1_norm = nyul_apply_standard_scale(image_1, standard_path, input_mask=mask_1)
fig, axs = plt.subplots(2, 1, constrained_layout=True)
f1 = axs[0].hist(image_1.flatten(), bins=64, range=(-10,600))
f2 = axs[1].hist(image_1_norm.flatten(), bins=64, range=(-10,200))
axs[0].set_title('Image 1 Original')
axs[1].set_title('Image 1 Normalized')  # second axis holds the normalized histogram
Explanation: Apply intensity normalization to new images:
Finally, the learned histogram can be applied to new images. Here, we just use the same images before and after normalization as an example.
End of explanation |
13,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latent Dirichlet Allocation applied to real data
Step1: Data fetching and preprocessing
Building sample dataset
We are considering a collection of English news articles about the case relating to allegations of sexual assault against the former IMF director Dominique Strauss-Kahn (May 2011). It was obtained thanks to a Web Search with keywords and provided generosyly by Aurélien Lauf, Leila Khouas and Mohamed Dermouche on the UCL Machine Learning Repository at
Step2: We can see from this example that the textual data are not very cleaned
Step3: Textual data preprocessing
We have first to make readable the corpus of text. Several steps are needed for that purpose among them
Step4: define stop words
We came out with this list of words to delete while looking at the distribution of words and those that are meaning less in our context
Step5: words to understand as one token
The idea now is to define the unit of analysis: the token, in natural-language-processing terms. For some standard expressions in our context we treat a succession of words as a single token, thanks to the MWETokenizer function of the nltk package.
Step6: a high-dimensional vocabulary
One problem of text analysis is the high number of unique words found in the corpus and, as a corollary, the sparsity. We first want to give an overview of this issue and then address it thanks to the method implemented in TfidfVectorizer of scikit-learn
Step7: Translate token to index
We will write the algorithm to work on numbers more than on actual words. The idea is thus to translate the full range of words into indices, keeping the translation rule carefully. For that purpose we consider the package gensim.
Step8: Illustration
Step9: Implementation of the Latent Dirichlet Allocation
The generative model
Let's briefly recall the generative graphical model underlying LDA, mainly to fix notation; we won't discuss it further here.
Given $d \in \{1, \cdots, n_{docs}\}$ the document index, $w \in \{1, \cdots, n_{words}\}$ the word index, and $k \in \{1, \cdots, n_{topics}\}$ the topic index, the underlying process modeled by LDA is:
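Schematically, the standard LDA generative process with symmetric Dirichlet priors $\alpha$ and $\beta$ (the parameters discussed below) is the following; the symbols $\theta_d$, $\phi_k$, $z$ are our notation:
* for each topic $k$, draw a distribution over words $\phi_k \sim \mathrm{Dirichlet}(\beta)$;
* for each document $d$, draw a distribution over topics $\theta_d \sim \mathrm{Dirichlet}(\alpha)$;
* for each word position in document $d$, draw a topic assignment $z \sim \mathrm{Multinomial}(\theta_d)$ and then the word $w \sim \mathrm{Multinomial}(\phi_z)$.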
Step10: Initialisation
Step11: Understanding the object
Step12: We can check that the amount of word_0 in the full corpus is dispatched between each topic.
NB
Step13: Understanding the object doc_topic
This matrix represents the count of words for each document d that belongs to each topic t. It describes which topic is present in document d and with which relative intensity.
Example
Step14: We can check that the number of words classified matches the number of word in the document
Step15: Algorithm
Step16: The result
We check the relative imbalance in the number of words between the topics
Step17: definition of topics with their related words
Step20: A complete analysis of the result is out of our goal since we haven't thought long enough to our data that were a little bit unclean due to the fetching method (sometimes other info that were we guess around the article in the website page are in fact integrated to the text). Still we can interpret the topic 4 to be related to international matter, the topic 3 to economics, topic 2 to juridical matter, topic 1 the global picture in terms of network and topic 0 to the act itself.
Playing with the features
Next idea is to understand the impact when we vary some hyperparameters or parameters we considered as constant. For that purpose we decided to compare the performance along two axis
Step21: Study of time convergence
We want to know how many iterations are needed to stabilize the loglikelihood, the idea being that one iteration is already time-consuming so that we want to minimize the number of iterations.
Step22: Influence of Dirichlet parameters
$\alpha$ and $\beta$ are the parameters for the two Dirichlet priors. They are called the concentration parameter, since in our symmetric distribution case, a high value means an higher degree of mixture of the input. More precisely the higher the $\alpha$, the more likely each document contains a mixture of most of the topics, and not any single topic specifically. Likewise, a high $\beta$-value means that each topic is likely to contain a mixture of most of the words, and not any word specifically, while a low value means that a topic may contain a mixture of just a few of the words.
Step23: Influence of the number of topics | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Latent Dirichlet Allocation applied to real data
End of explanation
#get raw data
import xml.etree.ElementTree as ET
tree = ET.parse('../dataset/nysk.xml')
root = tree.getroot()
root1 = root.getchildren()[150].getchildren()
texts=[]
for document in root.iter('document'):
text = document.find('text').text
texts += [text]
#Example of text
texts[1]
Explanation: Data fetching and preprocessing
Building sample dataset
We are considering a collection of English news articles about the case relating to allegations of sexual assault against the former IMF director Dominique Strauss-Kahn (May 2011). It was obtained thanks to a web search with keywords and generously provided by Aurélien Lauf, Leila Khouas and Mohamed Dermouche on the UCI Machine Learning Repository at: https://archive.ics.uci.edu/ml/datasets/NYSK
End of explanation
# Sample texts
from random import shuffle
print(len(texts))
shuffle(texts)
texts_test = texts[:250]
print(len(texts_test))
Explanation: We can see from this example that the textual data are not very clean: titles and paratext might be included in the sample.
For the implementation we will consider 250 articles out of the 10421, to limit the computational cost. Please note that the code itself scales up to a higher limit.
End of explanation
import re
import nltk
from nltk.tokenize import MWETokenizer
from nltk.stem.snowball import SnowballStemmer
Explanation: Textual data preprocessing
We first have to make the corpus of text readable. Several steps are needed for that purpose, among them: stemming, tokenization and dropping of stop words.
End of explanation
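As a quick illustration of these steps on a single sentence (a minimal sketch, independent of the corpus):
# minimal sketch: tokenize, lowercase, drop non-alphabetic tokens and stem
import nltk
from nltk.stem.snowball import SnowballStemmer
example = "The articles were collected and normalized before the analysis."
tokens = [w.lower() for w in nltk.word_tokenize(example)]
print([SnowballStemmer("english").stem(w) for w in tokens if w.isalpha()])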
my_stop_words = nltk.corpus.stopwords.words('english')
# Add my stopwords
my_stop_words = my_stop_words + ["n't", "'s", "wednesday", "year",
"ve", "said", "a", "would", "may", "say", "saturday",
"thursday", "select", "one", "part"]
Explanation: define stop words
We came up with this list of words to delete by looking at the distribution of words and at those that are meaningless in our context.
End of explanation
tokenizer = MWETokenizer([("world", "trade", "organisation"), ('dominique', 'strauss-kahn'),
("international", "monetary", "fund"), ('new', 'york'), ("wall", "street")])
stemmer = SnowballStemmer("english")
texts_tok = []
for text in texts_test:
tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
tokens = tokenizer.tokenize(tokens)
filtered_tokens = []
for token in tokens:
if token not in my_stop_words:
if re.search('[a-zA-Z]', token):
stemmed_token = stemmer.stem(token)
filtered_tokens.append(stemmed_token)
texts_tok += [filtered_tokens]
Explanation: words to understand as one token
The idea now is to define the unit of analysis, the token, as used in the natural language processing field. For some standard expressions in our context we define as a single token a succession of words, thanks to the MWETokenizer function of the nltk package.
End of explanation
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
frequency = defaultdict(int)
for text in texts_tok:
for token in text:
frequency[token] += 1
df_freq = pd.DataFrame(frequency, index=['value']).T
print('Most frequent words')
print(df_freq.sort_values(['value'], ascending=False).head())
df_freq[df_freq['value'] <10].hist(['value'], bins=50)
plt.title('Distribution for the least frequent words')
print()
# Extract the most discriminant tokens
from sklearn.feature_extraction.text import TfidfVectorizer
def tokenStem(text):
tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
tokens = tokenizer.tokenize(tokens)
filtered_tokens = []
for token in tokens:
if re.search('[a-zA-Z]', token):
stemmed_token = stemmer.stem(token)
filtered_tokens.append(stemmed_token)
return filtered_tokens
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
min_df=0.2, stop_words=my_stop_words,
use_idf=True, tokenizer=tokenStem, ngram_range=(1,1))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(texts_test)
terms = tfidf_vectorizer.get_feature_names()
#Build
texts_red = [[token for token in text if token in terms] for text in texts_tok]
Explanation: a high-dimensional vocabulary
One problem of text analysis is the high number of unique words found in the corpus and its corollary, sparsity. We first give an overview of this issue and then address it thanks to the method implemented in TfidfVectorizer of scikit-learn.
End of explanation
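To quantify the reduction achieved above, the number of unique tokens before and after the TF-IDF based selection can be compared directly (both objects are built in the previous cells):
print('unique tokens in the tokenized corpus:', len(frequency))
print('tokens kept by TfidfVectorizer:', len(terms))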
# Building the dictionary
from gensim import corpora
dictionary = corpora.Dictionary(texts_red)
# Store the translation rule in a pd.DataFrame
dict_df = pd.DataFrame(data=dictionary.token2id, index=['value']).T
dict_df.head()
# Translating our corpus
texts_idx = [dictionary.doc2idx(text) for text in texts_red]
# Example
texts_idx[2]
Explanation: Translate token to index
We will write the algorithm to work on numbers rather than on actual words. The idea is thus to translate the full range of words into indices, carefully keeping the translation rule. For that purpose we use the gensim package.
End of explanation
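A toy example may make the translation rule clearer (a sketch with a made-up two-document corpus; the exact integer ids depend on gensim's internal ordering):
toy = [['judge', 'court'], ['court', 'imf']]
toy_dict = corpora.Dictionary(toy)
print(toy_dict.token2id)                   # word -> index mapping
print(toy_dict.doc2idx(['imf', 'court']))  # the same rule applied to a token list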
length = []
for text in range(len(texts_idx)):
length += [len(texts_idx[text])]
plt.figure()
plt.boxplot(length)
print(max(length), min(length))
Explanation: Illustration: length diversity of texts in corpus
End of explanation
n_docs = len(texts_idx) # Number of documents in corpus
n_words = len(dict_df) # Number of words in full corpus
n_topics =5 # Number of topics we want to find
n_iter =20
alpha = 0.1
beta =0.1
Explanation: Implementation of the Latent Dirichlet Allocation
The generative model
Let's briefly recall the generative graphical model underlying LDA, mainly to fix the notation; we won't discuss it further here.
Given $d \in \{1, \cdots, n_{docs}\}$ the document index, $w \in \{1, \cdots, n_{words}\}$ the word index and $k \in \{1, \cdots, n_{topics}\}$ the topic index, the underlying process modeled by LDA is:
to generate $\pi_d \sim \mathcal{Dir}(\alpha)$, the topic distribution of document $d$.
to pick $t_{dw} \sim \mathcal{Mult}(\pi_d)$, the topic assigned to word $w$ of document $d$.
to generate $\phi_k \sim \mathcal{Dir}(\beta)$, the distribution of words for topic $k$.
to pick $y_{dw} | t_{dw} \sim \mathcal{Mult}(\phi_{t_{dw}})$, the word $w$ given its topic.
The variables to infer are the two unknown latent variables $\pi_d$ and $\phi_k$, which can be done with several techniques as described in the article's commentary. We implement Collapsed Gibbs Sampling here, for its familiarity and the nice explanation given in Griffiths TL, Steyvers M., Finding scientific topics. From their derivation, the full conditional distribution is:
$ \Pi(t_{dw}) \equiv P(t_{dw}=j | t_{-d}, w) \propto \frac{n_{-d,j}^{w} + \beta}{n_{-d,j} + W\beta}\frac{n_{-d,j}^d + \alpha}{n_{-d}^d + T\alpha}$
Definition of hyperparameters
End of explanation
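Purely as an illustration of the generative story above (a sketch, not used by the inference below), one can sample a small synthetic document with numpy using the hyperparameters just defined:
# sample one synthetic document of 5 words from the LDA generative process
phi = np.random.dirichlet([beta] * n_words, size=n_topics)  # word distribution of each topic
pi_d = np.random.dirichlet([alpha] * n_topics)              # topic mixture of one document
for _ in range(5):
    t = np.random.multinomial(1, pi_d).argmax()             # draw a topic
    w = np.random.multinomial(1, phi[t]).argmax()           # draw a word index from that topic
    print('topic', t, '-> word index', w)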
def initialisation(n_docs,n_topics,n_words,texts_idx):
doc_topic = np.zeros((n_docs, n_topics)) # number of words per topic for each doc
word_topic = np.zeros((n_topics, n_words)) # count of each word for each topic
doc = np.zeros(n_docs) # number of words for each doc/length of each doc
topic = np.zeros(n_topics) # number of words for each topic
topics_peridx = {} # topic assigned for each word for each document
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
# generate random data for the first step
t=np.random.randint(n_topics)
doc_topic[d, t] +=1 #
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
output = [doc_topic, doc, word_topic, topic, topics_peridx]
return output
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
Explanation: Initialisation
End of explanation
print(word_topic.shape) # n_topics*n_words (number of topics * number of words in final dictionary)
print(word_topic)
Explanation: Understanding the object : word_topic
This matrix gathers the count of each word type per topic. It describes the corpus of words that defines each topic.
Ex: the number of occurrences of word_0 in topic t.
End of explanation
# Find the word corresponding to the index 0
value0 = dict_df[dict_df.value==0].index[0]
print(value0)
# Look for its frequency inside the final vocabulary (of texts_red)
freq_red = defaultdict(int)
for text in texts_red:
for token in text:
freq_red[token] += 1
print('Number of occurences in full corpus:',freq_red[value0])
print('Dispatched word_0 to each topic:',freq_red[value0] == sum(word_topic[:,0]) )
Explanation: We can check that the amount of word_0 in the full corpus is dispatched between each topic.
NB: It illustrates one interesting feature of Latent Dirichlet Allocation: one word can belong to several topics, and thus have different meanings depending on the context.
End of explanation
print(doc_topic[0:10])
print()
print('Matrix shape',doc_topic.shape) # n_docs*n_topics
Explanation: Understanding the object doc_topic
This matrix represents, for each document d, the count of its words that belong to each topic t. It describes which topics are present in document d and with which relative intensity.
Example: document d has 13 words that are classified as belonging to topic t.
NB: It illustrates the fact that one document can entail / be generated by several topics.
End of explanation
print('Number of words in document_0:', len(texts_idx[0]))
print('Equals to sum of words in each topic for document_0:',sum(doc_topic[0])==len(texts_idx[0]))
Explanation: We can check that the number of words classified matches the number of word in the document
End of explanation
def pi(d,w, alpha, beta):
'''
Compute p(t|w, -t):
the full conditional distribution of topic t given the word w
'''
left = (word_topic[:,w] + beta) / (topic + beta*n_words)
right = (doc_topic[d,:] + alpha) / (doc[d] + alpha*n_topics)
p_t = left*right # is equivalent
p_t /= (np.sum(p_t)) # normalization to get a probability
return(p_t)
import time
start_time = time.time()
for iteration in range(n_iter):
print('iteration:',iteration)
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
t = topics_peridx[(d,idx)]
# withdraw the current assignment of t
doc_topic[d, t] -=1
doc[d] -=1
word_topic[t,w] -=1
topic[t] -=1
# compute the conditional distribution
p_t = pi(d,w,alpha, beta)
# choose the topic for word w
t = np.random.multinomial(1,p_t)
t= t.argmax()
doc_topic[d, t] +=1
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
print("--- %s seconds ---" % (time.time() - start_time))
Explanation: Algorithm
End of explanation
# Relative number of words per topic
pd.DataFrame(topic/sum(topic)*100).T
# Distribution of words per topic
word_topic_df = pd.DataFrame(word_topic)
word_topic_df.columns = dict_df.sort_values(['value']).index
word_topic_df
# Estimation of pi : P(w|t)
word_topic_df / word_topic_df.sum(axis=0)
Explanation: The result
We check the relative imbalance in the number of words between the topics.
End of explanation
for t in range(n_topics):
topic_ = word_topic_df.iloc[t]
print('topic', t)
print(topic_[topic_ >50].index)
Explanation: definition of topics with their related words
End of explanation
from scipy.special import gammaln
def log_multi_beta(alpha, K=None):
    '''
    Logarithm of the multinomial beta function.
    '''
if K is None:
# alpha is assumed to be a vector
return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))
else:
# alpha is assumed to be a scalar
return K * gammaln(alpha) - gammaln(K*alpha)
def loglikelihood():
    '''
    Compute the likelihood that the model generated the data.
    '''
loglik = 0
for t in range(n_topics):
loglik += log_multi_beta(word_topic[t,:]+beta)
loglik -= log_multi_beta(beta, n_words)
for d in range(n_docs):
loglik += log_multi_beta(doc_topic[d,:]+alpha)
loglik -= log_multi_beta(alpha, n_topics)
return loglik
def LDA(n_iter,alpha,beta, verbose =False):
logliks = []
for iteration in range(n_iter):
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
t = topics_peridx[(d,idx)]
# withdraw the current assignment of t
doc_topic[d, t] -=1
doc[d] -=1
word_topic[t,w] -=1
topic[t] -=1
p_t = pi(d,w, alpha, beta)
t = np.random.multinomial(1,p_t)
t= t.argmax()
doc_topic[d, t] +=1
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
if (iteration % 5==0):
print('iteration:',iteration)
if (verbose==True):
loglik = loglikelihood()
print("loglikelihood",round(loglik))
logliks += [loglik]
if verbose == False:
logliks = loglikelihood()
print("loglikelihood",round(logliks))
return(logliks)
Explanation: A complete analysis of the result is beyond our goal here, since we have not spent long enough on the data, which are a little unclean due to the fetching method (other information surrounding the article on the web page is sometimes integrated into the text). Still, we can interpret topic 4 as related to international matters, topic 3 to economics, topic 2 to judicial matters, topic 1 to the global picture in terms of network, and topic 0 to the act itself.
Playing with the features
The next idea is to understand the impact of varying some hyperparameters, or parameters we considered as constant. For that purpose we compare the performance along two axes:
- how well the estimated loglikelihood represents the data
- what the running time of the modified algorithm is
For that we should recall the form of the likelihood and apply to it the logarithm of the multinomial beta function. We get this idea from Blondel (http://mblondel.org/journal/2010/08/21/latent-dirichlet-allocation-in-python/).
Definitions
End of explanation
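For reference, the quantity evaluated by loglikelihood() below is a sketch of the standard collapsed joint log-likelihood from Griffiths & Steyvers, with $B(x)=\prod_j\Gamma(x_j)/\Gamma(\sum_j x_j)$ the multinomial beta function, $n_k$ the word-count vector of topic $k$ and $n_d$ the topic-count vector of document $d$:
$\log p(w, t) = \sum_{k=1}^{n_{topics}} \left[\log B(n_k + \beta) - \log B(\beta\,\mathbf{1}_{n_{words}})\right] + \sum_{d=1}^{n_{docs}} \left[\log B(n_d + \alpha) - \log B(\alpha\,\mathbf{1}_{n_{topics}})\right]$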
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
n_iter = 70
%time convergenceLDA = LDA(n_iter, alpha,beta, verbose = True)
x = range(n_iter)
fig = plt.figure()
plt.plot(x,convergenceLDA)
plt.ylabel('loglikelihood')
plt.xlabel('iterations')
plt.ylim(-290000, -210000)
Explanation: Study of time convergence
We want to know how many iterations are needed to stabilize the loglikelihood, the idea being that one iteration is already time-consuming so that we want to minimize the number of iterations.
End of explanation
#Study on alpha
lik_alpha = []
iter_alpha = np.linspace(0.1, 2.0, num=10).tolist()
for alpha in iter_alpha:
print('alpha:',alpha)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_alpha += [lik]
fig = plt.figure()
plt.plot(iter_alpha,lik_alpha)
plt.ylabel('loglikelihood')
plt.xlabel('iterations')
plt.ylim(-290000, -210000)
alpha=0.1
lik_beta = []
iter_beta = np.linspace(0.1, 2.0, num=10).tolist()
for beta in iter_beta:
print('beta:',beta)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_beta += [lik]
fig = plt.figure()
plt.plot(iter_beta,lik_beta)
plt.ylabel('pseudo loglikelihood')
plt.xlabel('iterations')
plt.ylim(-77000, -57000)
print(min(lik_beta), max(lik_beta))
Explanation: Influence of Dirichlet parameters
$\alpha$ and $\beta$ are the parameters of the two Dirichlet priors. They are called concentration parameters since, in our symmetric case, a high value means a higher degree of mixture of the input. More precisely, the higher the $\alpha$, the more likely each document contains a mixture of most of the topics, and not any single topic specifically. Likewise, a high $\beta$-value means that each topic is likely to contain a mixture of most of the words, and not any word specifically, while a low value means that a topic may contain a mixture of just a few of the words.
End of explanation
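The effect of the concentration parameter can be seen directly by sampling (a small illustration, independent of the corpus):
print(np.random.dirichlet([0.1] * 5))  # low concentration: mass piles up on a few components
print(np.random.dirichlet([10.0] * 5)) # high concentration: close to an even mixture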
alpha=0.1
beta =0.75
iter_topics = range(2,6)
lik_topics = []
for n_topics in iter_topics:
print('n_topics:',n_topics)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_topics += [lik]
fig = plt.figure()
plt.plot(iter_topics,lik_topics)
plt.ylabel('loglikelihood')
plt.xlabel('iterations')
plt.ylim(-290000, -210000)
print(min(lik_topics), max(lik_topics))
Explanation: Influence of the number of topics
End of explanation |
13,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tests of PySpark UDF with mapInArrow vs. mapInPandas
Step1: Dataset preparation
Step2: mapInPAndas tests
Test 1 - dummy UDF
Step3: Test 2
Step4: mapInArrow tests
Test 3
Step5: Test 5
Step6: DataFrame API with higher-order functions
Square the array using Spark higher order function for array processing | Python Code:
# This is a new feature, candidate from Spark 3.3.0
# See https://issues.apache.org/jira/browse/SPARK-37227
import findspark
findspark.init("/home/luca/Spark/spark-3.3.0-SNAPSHOT-bin-spark_21220128")
# use only 1 core to make performance comparisons easier/cleaner
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("dimuon mass") \
.master("local[1]") \
.config("spark.driver.memory", "2g") \
.getOrCreate()
Explanation: Tests of PySpark UDF with mapInArrow vs. mapInPandas
End of explanation
# simple tests: create data from memory
# We use array as this is where converting to pandas is slow
df = spark.sql("select Array(rand(),rand(),rand()) col3 from range(1e8)")
%%time
# write to a noop source
# this is to test the speed of processing the dataframe with no additional operations
df.write.format("noop").mode("overwrite").save()
Explanation: Dataset preparation
End of explanation
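To confirm that col3 is indeed an array column (the case where the pandas conversion is expensive), the schema can be inspected:
df.printSchema()   # col3 should show up as array<double>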
%%time
# A dummy UDF that just returns the input data
def UDF_dummy(iterator):
for batch in iterator:
yield batch
df.mapInPandas(UDF_dummy, df.schema).write.format("noop").mode("overwrite").save()
Explanation: mapInPandas tests
Test 1 - dummy UDF
End of explanation
%%time
# UDF function that squares the input
def UDF_pandas_square(iterator):
for batch in iterator:
yield batch*batch
df.mapInPandas(UDF_pandas_square, df.schema).write.format("noop").mode("overwrite").save()
Explanation: Test 2: square the array with mapInPandas
End of explanation
%%time
# A dummy UDF that just returns the input data
def UDF_dummy(iterator):
for batch in iterator:
yield batch
df.mapInArrow(UDF_dummy, df.schema).write.format("noop").mode("overwrite").save()
### Test 4: dummy UDF using mapInArrow and awkward array
%%time
# this requires pip install awkward
import awkward as ak
# a dummy UDF that convert back and forth to awkward arrays
# it just returns the input data
def UDF_dummy_with_awkward_array(iterator):
for batch in iterator:
b = ak.from_arrow(batch)
yield from ak.to_arrow_table(b).to_batches()
df.mapInArrow(UDF_dummy_with_awkward_array, df.schema).write.format("noop").mode("overwrite").save()
Explanation: mapInArrow tests
Test 3: dummy UDF using mapInArrow
End of explanation
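With mapInArrow the UDF receives pyarrow.RecordBatch objects, so their usual attributes are available inside the UDF. A small sketch for inspection only (not part of the timing comparison):
def UDF_inspect(iterator):
    for batch in iterator:
        # batch is a pyarrow.RecordBatch: schema and row count are plain attributes
        print(batch.schema, batch.num_rows)
        yield batch
df.mapInArrow(UDF_inspect, df.schema).write.format("noop").mode("overwrite").save()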
%%time
import awkward as ak
import numpy as np
def UDF_awkward_array_square(iterator):
for batch in iterator:
b = ak.from_arrow(batch)
b2 = ak.zip({"col3": np.square(b["col3"])}, depth_limit=1)
yield from ak.to_arrow_table(b2).to_batches()
df.mapInArrow(UDF_awkward_array_square, df.schema).write.format("noop").mode("overwrite").save()
Explanation: Test 5: square the array using awkward array
End of explanation
%%time
df2 = df.selectExpr("transform(col3, x -> x * x) as col3_squared")
df2.write.format("noop").mode("overwrite").save()
Explanation: DataFrame API with higher-order functions
Square the array using Spark higher order function for array processing
End of explanation |
13,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1、随机生成1万个整数,范围在0-10万之间,分别进行简单选择排序、快速排序(自行递归实现的)以及内置sort函数3种排序,打印出3种排序的运行时间。
假设有快速排序算法quick_sort(seq),可以实现快速排序。
令left_seq = [], right_seq = []
令待排序序列区间的第一个元素为p,即p=seq[0]
对seq的[start+1,end]区间中的每一个元素:
如果元素 < p
Step1: 2、随机生成1万个整数,范围在0-10万之间,求其中每个整数出现的次数。并按照整数大小排序输出整数及出现次数。
Step2: 3、对本任务中的语料.txt文件,随机抽取其5001-10000行存为test1.txt文件,写函数,可得到其与本任务中test.txt文件的共用字以及独用字(相关概念自行百度)。 | Python Code:
import random
import time
def simple_sort(numbers):
for i in range(len(numbers)):
for j in range(i+1,len(numbers)):
min=i
if numbers[min]>numbers[j]:
min=j
numbers[i],numbers[min]=numbers[min],numbers[i]
def quick_sort(seq):
    # recursive quicksort following the description below
    if len(seq) <= 1:
        return seq
    left_seq = []
    right_seq = []
    p = seq[0]
    for x in seq[1:]:
        if x <= p:
            left_seq.append(x)
        else:
            right_seq.append(x)
    # sort both partitions recursively and return left + pivot + right
    return quick_sort(left_seq) + [p] + quick_sort(right_seq)
n=[]
for i in range(100000):
n.append(random.randint(1,100000))
nums1=[]
nums2=[]
nums1.extend(n)
nums2.extend(n)
start_time=time.time()
quick_sort(nums2)
end_time=time.time()
print("time for quick-sort:",end_time-start_time,"-"*30)
start_time=time.time()
num=sorted(n)
end_time=time.time()
print("time for sort-function:",end_time-start_time,"-"*30)
#start_time=time.time()
#simple_sort(nums1)
#end_time=time.time()
#print("简单排序所用时间:",end_time-start_time,"-"*30)
print("too much time for the simple-sort! I have no patience for the outcome(facepalm) though I've coded it above.")
Explanation: 1. Randomly generate 10,000 integers in the range 0-100,000 and sort them with simple selection sort, a quicksort implemented recursively by hand, and the built-in sort function; print the running time of the three sorts.
Suppose we have a quicksort algorithm quick_sort(seq) that performs the sort.
Let left_seq = [], right_seq = []
Let the first element of the interval to sort be p, i.e. p = seq[0]
For every element in the [start+1, end) interval of seq:
    if the element < p:
        append it to left_seq
    else:
        append it to right_seq
If left_seq is non-empty, sort it recursively with quick_sort
If right_seq is non-empty, sort it recursively with quick_sort
Return: left_seq + p + right_seq
End of explanation
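A quick way to check the recursive implementation against the built-in sort on a small list (assuming quick_sort returns the sorted list, as described above):
sample = [random.randint(1, 100) for _ in range(20)]
print(quick_sort(sample) == sorted(sample))   # should print True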
import random
from collections import Counter
def numbers_and_freq(numbers):
    # Counter counts each integer in one pass; sort by integer value for the output
    counts = Counter(numbers)
    return dict(sorted(counts.items()))
n=[random.randint(0,100000) for i in range(100)]#test with a list of length 100
the_list=numbers_and_freq(n)
print(the_list)
Explanation: 2. Randomly generate 10,000 integers in the range 0-100,000 and count how many times each integer occurs; output the integers and their counts sorted by integer value.
End of explanation
from collections import defaultdict
import random
#create test1.txt: save lines 5001-10000 of the corpus to a new file (sketch, assuming utf-8 text files)
def chose_lines(corpus_file, out_file='test1.txt'):
    with open(corpus_file, encoding='utf8') as fh:
        lines = fh.readlines()
    chosen = lines[5000:10000]   # lines 5001-10000
    with open(out_file, 'w', encoding='utf8') as out:
        out.writelines(chosen)
    return chosen
#sketch of the comparison: characters shared by the two files and characters unique to each
def practice3(file, file2):
    with open(file, encoding='utf8') as f1, open(file2, encoding='utf8') as f2:
        chars1, chars2 = set(f1.read()), set(f2.read())
    shared = chars1 & chars2
    only_in_1, only_in_2 = chars1 - chars2, chars2 - chars1
    return shared, only_in_1, only_in_2
Explanation: 3. For the corpus file 语料.txt of this task, randomly extract its lines 5001-10000 and save them as test1.txt; write a function that returns the characters shared between that file and the task's test.txt, as well as the characters unique to each (look up the related concepts if needed).
End of explanation |
13,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
United States carries most of the weight of the total electricity consumption in the household market in N. America in the perdio 1990-2014. US is followed in consumption by Canada and Mexico. The average consumption in the US is about 6 times more than in Canada and about 10 times the one in Mexico.
Step1: Consumption in Europe is led by Germany followed by France and the United Kingdom. Spain is in the 5th place with a household consumption during the period of less than half the one of Germany. The tail of consumptionis led by Poland followed by Belgium and the Netherlands. It seems that there is a correlation between the size of the country and the electricity consumption.
Step2: Electricity consumption between 1990 and 2014 in the household market in Central & SOuth America is led by Brazil frollowed by Argentina & Venezuela. Although it was expected Chile to be between the first three due to its economic development, its in the 5 place after Colombia. Compared to Brazil (first place) households consumption in Argentina (second place) is about 4 times less.
Step3: The comparison between North America, Europe and Central & South America shows that average eletricity consumption in North America is 8.5 times bigger than the one in Europe (comparing the best in breed in each case). Europe compared to Central & South America has an average consumption 1.8 bigger. Within each regions variations are high concentrating most of the region´s consumption in less than 10 contries.
Step4: There is an asymetric distribution of electricity consumtpion values in the world. While most of them are in the range from 0-10 000 GWh, contries like the US has a consumption of 120 times bigger. Additionally, frequency rises to 0.95 when the electricity consumption reaches 80 000 GWh which is similar to the consumption in Brazil.
Step5: There is a sustained growth in the electricity consumption in Spain from 1990 to 2014. This is a good indicator of the economic growth of the country although between 2005 and 2015 there is a decrease in the interannual grouwth due to aggressive energy efficiency measures.
Step6: The electricity consumption experiments a moderate growth from 1990 to 2015. There is a higher growth between 1990 and 2005 than from 2005 onwards. In the last 10 years of the period under analysis, the UK´s electricity consumption in the household segment has decreased. At the end of the period electricity consumption levels have fallen to those in the year 2000. | Python Code:
#Europe
df5 = df4.loc[df4.index.isin(['Austria', 'Belgium', 'Bulgaria','Croatia', 'Cyprus', 'Czechia','Denmark', 'Estonia','Finland','France','Germany','Greece','Hungary','Ireland','Italy','Latvia','Lithuania','Luxembourg','Malta','Netherlands','Poland','Portugal','Romania','Slovakia', 'Slovenia','Spain', 'Sweden', 'United Kingdom'])]
df6= df5.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Europe: Household Market 1990-2014')
df6.plot.bar()
Explanation: The United States carries most of the weight of the total electricity consumption in the household market in North America over the period 1990-2014. The US is followed in consumption by Canada and Mexico. The average consumption in the US is about 6 times that of Canada and about 10 times that of Mexico.
End of explanation
#Central & South America
df7 = df4.loc[df4.index.isin(['Antigua and Barbuda', 'Argentina', 'Bahamas','Barbados', 'Belize', 'Bolivia (Plur. State of)','Brazil','Chile','Colombia','Costa Rica','Cuba','Dominica','Dominican Republic','Ecuador','El Salvador','Grenada','Guatemala','Guyana','Haiti','Honduras','Jamaica','Nicaragua','Panama', 'Paraguay','Peru', 'St. Kitts-Nevis', 'St. Lucia','St. Vincent-Grenadines','Suriname','Trinidad and Tobago','Uruguay','Venezuela (Bolivar. Rep.)'])]
df8= df7.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Central & South America: Household Market 1990-2014')
df8.plot.bar()
Explanation: Consumption in Europe is led by Germany, followed by France and the United Kingdom. Spain is in 5th place, with a household consumption over the period of less than half that of Germany. The tail of consumption is led by Poland, followed by Belgium and the Netherlands. There seems to be a correlation between the size of the country and its electricity consumption.
End of explanation
#Plotting all the figures together for comparison.
#North America has a different scale than Europe & "Central & South America"
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
df10.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,1200000)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
df6.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
df8.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
#Correct the problem of skewness when the 3 graphs are represented together by normalizing with Log the data.
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
np.log(df10).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
np.log(df6).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
np.log(df8).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
Explanation: Electricity consumption between 1990 and 2014 in the household market in Central & South America is led by Brazil, followed by Argentina and Venezuela. Although Chile was expected to be among the first three due to its economic development, it is in 5th place, after Colombia. Compared to Brazil (first place), household consumption in Argentina (second place) is about 4 times lower.
End of explanation
#Histograms showing consumption in the World 1990-2014
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='green', alpha=0.5)
plt.axis([0, 20000, 0, 2000])
plt.ylabel('frequency')
plt.xlabel('Year')
plt.title('Distribution of Electricity Consumption in the World 1990-2014')
plt.subplot(1, 2, 2)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='red', normed=1, cumulative=1, alpha=0.5)
plt.axis([0, 80000, 0, 1])
plt.ylabel('frequency')
plt.xlabel('Year')
plt.title('Cumulative distribution of Electricity Consumption in the World')
plt.tight_layout()
plt.show()
Explanation: The comparison between North America, Europe and Central & South America shows that average electricity consumption in North America is 8.5 times bigger than in Europe (comparing the best in breed in each case). Europe, compared to Central & South America, has an average consumption 1.8 times bigger. Within each region variations are high, concentrating most of the region's consumption in fewer than 10 countries.
End of explanation
#Dynamic analysis of the electricity consumption in Spain (delving into the details of Europe)
#To see this cell properly, it needs to be run individually while screening through the notebook.
#When 'Cell-Run all" is used the graph and an 'error message' appears.
df1 = df.loc[lambda d: d['Country or Area'] == "Spain", :]
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.show()
Explanation: There is an asymmetric distribution of electricity consumption values in the world. While most of them are in the range 0-10 000 GWh, countries like the US have a consumption about 120 times bigger. Additionally, the cumulative frequency rises to 0.95 when the electricity consumption reaches 80 000 GWh, which is similar to the consumption in Brazil.
End of explanation
#Dynamic analysis of electricity consumption in The UK
df2 = df.loc[lambda d: d['Country or Area'] == "United Kingdom", :]
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.show()
Explanation: There is sustained growth in the electricity consumption in Spain from 1990 to 2014. This is a good indicator of the economic growth of the country, although between 2005 and 2015 there is a decrease in the inter-annual growth due to aggressive energy efficiency measures.
End of explanation
#Dynamic Comparison of the Electricity consumption between The UK & Spain
plt.figure(figsize=(20, 5))
plt.subplot(1, 3, 1)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.subplot(1, 3, 2)
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWn')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.subplot(1, 3, 3)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker= "o", s= 30, label="Spain")
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker ="x", s= 30, label="UK")
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain, UK by household 1990-2014')
plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
Explanation: The electricity consumption experiences moderate growth from 1990 to 2015. There is higher growth between 1990 and 2005 than from 2005 onwards. In the last 10 years of the period under analysis, the UK's electricity consumption in the household segment has decreased. At the end of the period, electricity consumption levels have fallen back to those of the year 2000.
End of explanation |
13,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyper-study
The time series models built with the help of bayesloop are called hierarchical models, since the parameters of the observation model are in turn controlled by hyper-parameters that are possibly included in the transition model. In the previous section, we optimized two hyper-parameters of a serially combined transition model
Step1: It is important to note here, that the evidence value of $\approx 10^{-73.5}$ is smaller compared to the value of $\approx 10^{-72.7}$ obtained in a previous analysis here. There, we optimized the hyper-parameter values and assumed that these optimal values are not subject to uncertainty, therefore over-estimating the model evidence. In contrast, the hyper-study explicitly considers the uncertainty tied to the hyper-parameter values.
While the joint distribution of two hyper-parameters may uncover possible correlations between the two quantities, the 3D plot is often difficult to integrate into existing figures. To plot the marginal distribution of a single hyper-parameter in a simple 2D histogram/bar plot, use the plot method, just as for the parameters of the observation model
Step2: Finally, the temporal evolution of the model parameter may be displayed using, again, the plot method | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt # plotting
import seaborn as sns # nicer plots
sns.set_style('whitegrid') # plot styling
import numpy as np
import bayesloop as bl
S = bl.HyperStudy()
S.loadExampleData()
L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))
T = bl.tm.SerialTransitionModel(bl.tm.Static(),
bl.tm.BreakPoint('t_1', 1885),
bl.tm.Deterministic(lambda t, slope=bl.cint(-0.4, 0, 20): slope*t,
target='accident_rate'),
bl.tm.BreakPoint('t_2', 1895),
bl.tm.GaussianRandomWalk('sigma', bl.cint(0, 0.8, 20),
target='accident_rate'))
S.set(L, T)
S.fit()
S.getJointHyperParameterDistribution(['slope', 'sigma'], plot=True, color=[0.1, 0.8, 0.1])
plt.xlim([-0.5, 0.1])
plt.ylim([-0.1, 0.9]);
Explanation: Hyper-study
The time series models built with the help of bayesloop are called hierarchical models, since the parameters of the observation model are in turn controlled by hyper-parameters that are possibly included in the transition model. In the previous section, we optimized two hyper-parameters of a serially combined transition model: the slope of the decrease in coal mining disasters from 1885 to 1895, and the magnitude of the fluctuations afterwards. While the optimization routine yields the most probable values of these hyper-parameters, one might also be interested in the uncertainty tied to these "optimal" values. bayesloop therefore provides the HyperStudy class that allows to compute the full distribution of hyper-parameters by defining a discrete grid of hyper-parameter values.
While the HyperStudy instance can be configured just like a standard Study instance, one may supply not only single hyper-parameter values, but also lists/arrays of values (Note: It will fall back to the standard fit method if only one combination of hyper-parameter values is supplied). Here, we test a range on hyper-parameter values by supplying regularly spaced hyper-parameter values using the cint function (one can of course also use similar functions like numpy.linspace). In the example below, we return to the serial transition model used here and compute a two-dimensional distribution of the two hyper-parameters slope (20 steps from -0.4 to 0.0) and sigma (20 steps from 0.0 to 0.8).
After running the fit-method for all value-tuples of the hyper-grid, the model evidence values of the individual fits are used as weights to compute weighted average parameter distributions. These average parameter distributions allow to assess the temporal evolution of the model parameters, explicitly considering the uncertainty of the hyper-parameters. However, in case one is only interested in the hyper-parameter distribution, setting the keyword-argument evidenceOnly=True of the fit method shortens the computation time but skips the evaluation of parameter distributions.
Finally, bayesloop provides several methods to plot the results of the hyper-study. To display the joint distribution of two hyper-parameters, choose getJointHyperParameterDistribution (or shorter: getJHPD). The method automatically computes the marginal distribution for the two specified hyper-parameters and returns three arrays, two for the hyper-parameters values and one with the corresponding probability values. Here, the first argument represents a list of two hyper-parameter names. If the keyword-argument plot=True is set, a visualization is created using the bar3d function from the mpl_toolkits.mplot3d module.
End of explanation
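Since the method returns the two hyper-parameter value arrays and the corresponding probability array, the result can also be post-processed without the 3D plot. A small sketch (assuming the probability array is laid out on the (slope, sigma) grid; plot=False suppresses the figure):
slope_vals, sigma_vals, prob = S.getJHPD(['slope', 'sigma'], plot=False)
i, j = np.unravel_index(np.argmax(prob), np.shape(prob))  # most probable grid cell
print('most probable (slope, sigma):', slope_vals[i], sigma_vals[j])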
plt.figure(figsize=(16,4))
plt.subplot(121)
S.plot('slope', color='g', alpha=.8);
plt.subplot(122)
S.plot('sigma', color='g', alpha=.8);
Explanation: It is important to note here, that the evidence value of $\approx 10^{-73.5}$ is smaller compared to the value of $\approx 10^{-72.7}$ obtained in a previous analysis here. There, we optimized the hyper-parameter values and assumed that these optimal values are not subject to uncertainty, therefore over-estimating the model evidence. In contrast, the hyper-study explicitly considers the uncertainty tied to the hyper-parameter values.
While the joint distribution of two hyper-parameters may uncover possible correlations between the two quantities, the 3D plot is often difficult to integrate into existing figures. To plot the marginal distribution of a single hyper-parameter in a simple 2D histogram/bar plot, use the plot method, just as for the parameters of the observation model:
End of explanation
plt.figure(figsize=(8, 4))
plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5)
S.plot('accident_rate')
plt.xlim([1851, 1962])
plt.xlabel('year');
Explanation: Finally, the temporal evolution of the model parameter may be displayed using, again, the plot method:
End of explanation |
13,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Functions
1. pairwise_correlation( df )
Step3: 2. corr_rowi_rowj( df , i , j )
Step5: 3. corr_rowi_vs_all1( df )
Step7: 4. corr_rowi_vs_all2( df , i )
Step8: Test
1. pairwise_correlation( df )
Step9: 2. corr_rowi_rowj( df , i , j )
Step10: 3. corr_rowi_vs_all1( df )
Step11: 4. corr_rowi_vs_all2( df , i )
Step12: Try test_df_utils.py | Python Code:
#Try now
import numpy as np
import pandas as pd
import df_utils as du   # assumption: the functions below are also saved in a df_utils.py module
def pairwise_correlation(df):
#Print data first to make it easy to check.
print("data:")
print(df,"\n\nP value:")
metrix = pd.DataFrame()
labels=[]
#Use 'iterrows()' to get rows repeatedly.
#There are two for loops.
#The outer for loop is used to get each row,
#and the inner for loop compares each row from the outer loop to every row (including itself).
#'np.corrcoef(X,Y)' is used to calculate the Pearson correlation coefficient.
#It will yield the matrix:
#    P(X,X) P(X,Y)
#    P(Y,X) P(Y,Y)
#So I use 'np.corrcoef(X,Y)[0,1]' just to get the value P(X,Y) at position (0,1) in the matrix.
#It can also be used with three or more arrays, like 'np.corrcoef(X,Y,Z,......)'.
#Because each row from 'iterrows()' is a Series, I use 'tolist()' to make it become an array
#that can then be used in 'np.corrcoef(X,Y)'.
#I kept the original way of printing all the P values line by line.
#After Monday's class, I appended all Ps into a data frame,
#then changed the labels of the index and columns by using a list 'labels',
#using 'labels.append('row'+str(index+1))' in the outer for loop to create the labels.
for index, row in df.iterrows():
labels.append('row'+str(index+1))
for index2, row2 in df.iterrows():
P = np.corrcoef(row.tolist(), row2.tolist())[0,1]
print("P value of row%d and row%d is %.2f." %(index+1,index2+1,P))
metrix.loc[index,index2] = P
metrix.columns = labels
metrix.index = labels
print("\nTable of P values")
print(metrix)
return metrix
du.pairwise_correlation(df)
Explanation: Functions
1. pairwise_correlation( df ): It can calculate the Pearson correlation coefficient (P) for each pair of rows in a data frame.
df : data frame input
End of explanation
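For comparison, pandas can produce the same row-by-row Pearson matrix in one call, since DataFrame.corr() works column-wise and transposing the frame turns rows into columns:
#cross-check with the pandas built-in: transpose so that rows become columns
print(df.T.corr())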
#Try now
def corr_rowi_rowj(df,i,j):
#Print data first to make it easy to check.
print("data:")
print(df,"\n\nP value:")
#'np.corrcoef(X,Y)' is used to calculate the Pearson correlation coefficient.
#It will yield the matrix:
#    P(X,X) P(X,Y)
#    P(Y,X) P(Y,Y)
#So I use 'np.corrcoef(X,Y)[0,1]' just to get the value P(X,Y) at position (0,1) in the matrix.
#It can also be used with three or more arrays, like 'np.corrcoef(X,Y,Z,......)'.
#'iloc[i-1]' is used to locate the row.
#'i-1' is used because the difference between the index (from 0) and the actual row number is 1.
#Because the output of 'iloc[i-1]' is a Series, I use 'tolist()' to make it become an array
#that can then be used in 'np.corrcoef(X,Y)'.
P = np.corrcoef(df.iloc[i-1].tolist(), df.iloc[j-1].tolist())[0,1]
print("P value of row%d and row%d is %.2f." %(i,j,P))
return P
du.corr_rowi_rowj(df,2,3)
#Note: In this case, row2(row of index 1) and row3 (row of index 2) are compared.
Explanation: 2. corr_rowi_rowj( df , i , j ): It can calculate the Pearson correlation coefficient (P) for two selected rows.
df : data frame input
i : the first row you select
j : the second row you select
Note: i and j are rows in a data frame, not index in python.
End of explanation
#Try now
def corr_rowi_vs_all1(df):
#Print data first to make it easy to check.
print("data:")
print(df,"\n\nP value:")
metrix = pd.DataFrame()
labels = []
#Use 'iterrows()' to get rows repeatedly.
#There are two for loops.
#The outer for loop is used to get each row,
#and the inner for loop compares each row from the outer loop to every row (including itself).
#There is an if branch in the inner for loop to avoid calculating P between two identical rows,
#by checking the index values of the 2 rows. I use "P=1" instead of "1" to show it clearly.
#'np.corrcoef(X,Y)' is used to calculate the Pearson correlation coefficient.
#It will yield the matrix:
#    P(X,X) P(X,Y)
#    P(Y,X) P(Y,Y)
#So I use 'np.corrcoef(X,Y)[0,1]' just to get the value P(X,Y) at position (0,1) in the matrix.
#It can also be used with three or more arrays, like 'np.corrcoef(X,Y,Z,......)'.
#Because each row from 'iterrows()' is a Series, I use 'tolist()' to make it become an array
#that can then be used in 'np.corrcoef(X,Y)'.
#I kept the original way of printing all the P values line by line.
#After Monday's class, I appended all Ps into a data frame,
#then changed the labels of the index and columns by using a list 'labels',
#using 'labels.append('row'+str(index+1))' in the outer for loop to create the labels.
for index, row in df.iterrows():
labels.append('row'+str(index+1))
for index2, row2 in df.iterrows():
if index==index2:
metrix.loc[index,index] = "P=1"
else:
P = np.corrcoef(row.tolist(), row2.tolist())[0,1]
print("P value of row%d and row%d is %.2f." %(index+1,index2+1,P))
metrix.loc[index, index2] = P
metrix.columns = labels
metrix.index = labels
print("\nTables for P values:")
print(metrix)
return metrix
du.corr_rowi_vs_all1(df)
Explanation: 3. corr_rowi_vs_all1( df ): It can calculate the Pearson correlation coefficient (P) for each pair of distinct rows (a row is not compared with itself) in a data frame.
df : data frame input
End of explanation
#Try now
def corr_rowi_vs_all2(df,i):
#Print data first to make it easy to check.
print("data:")
print(df,"\n\nP value:")
metrix = pd.DataFrame()
labels = []
#Use a for loop and 'iterrows()' to get rows repeatedly.
#There is an if branch in the for loop to avoid calculating P between two identical rows,
#by checking the index values of the 2 rows. I use "P=1" instead of "1" to show it clearly.
#'np.corrcoef(X,Y)' is used to calculate the Pearson correlation coefficient.
#It will yield the matrix:
#    P(X,X) P(X,Y)
#    P(Y,X) P(Y,Y)
#So I use 'np.corrcoef(X,Y)[0,1]' just to get the value P(X,Y) at position (0,1) in the matrix.
#It can also be used with three or more arrays, like 'np.corrcoef(X,Y,Z,......)'.
#Because each row from 'iterrows()' is a Series, I use 'tolist()' to make it become an array
#that can then be used in 'np.corrcoef(X,Y)'.
#I kept the original way of printing all the P values line by line.
#After Monday's class, I appended all Ps into a data frame,
#then changed the labels of the index and columns by using a list 'labels',
#using 'labels.append('row'+str(index+1))' in the outer for loop to create the labels.
for index, row in df.iterrows():
labels.append('row'+str(index+1))
if (index+1)==i:
metrix.loc[index,index] = "P=1"
else:
P = np.corrcoef(df.iloc[i-1].tolist(), row.tolist())[0,1]
metrix.loc[i-1,index] = P
print("P value of row%d and row%d is %.2f." %(i,index+1,P))
metrix.columns = labels
metrix.index = ["row"+str(i)]
print("\nTable for P values:")
print(metrix)
return metrix
du.corr_rowi_vs_all2(df,4)
#Note: In this case, row4(row of index 3) is selected.
Explanation: 4. corr_rowi_vs_all2( df , i ): It can calculate the Pearson correlation coefficient (P) between a selected row and all the other rows.
df : data frame input
i : the row you select
Note: i is a row in a data frame, not an index in python.
End of explanation
def test_pairwise_correlation():
df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
assert int(du.pairwise_correlation(df).iloc[0,0]) == 1, "Diagonal elements not handled properly"
assert int(du.pairwise_correlation(df).iloc[0,1]) == -1, "Anticorrelated elements not handled properly"
assert int(du.pairwise_correlation(df).iloc[0,2]) == 0, "Uncorrelated elements not handled properly"
assert int(du.pairwise_correlation(df).iloc[1,2]) == int(du.pairwise_correlation(df).iloc[2,1]), "Data not appended properly"
return
test_pairwise_correlation()
Explanation: Test
1. pairwise_correlation( df ):
End of explanation
def test_corr_rowi_rowj():
df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
assert int(du.corr_rowi_rowj(df,1,1)) == 1, "Diagonal elements not handled properly"
assert int(du.corr_rowi_rowj(df,1,2)) == -1, "Anticorrelated elements not handled properly"
assert int(du.corr_rowi_rowj(df,1,3)) == 0, "Uncorrelated elements not handled properly"
return
test_corr_rowi_rowj()
Explanation: 2. corr_rowi_rowj( df , i , j ):
End of explanation
def test_corr_rowi_vs_all1():
df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
assert type(du.corr_rowi_vs_all1(df).iloc[0,0]) == str, "Diagonal elements not handled properly"
assert int(du.corr_rowi_vs_all1(df).iloc[0,1]) == -1, "Anticorrelated elements not handled properly"
assert int(du.corr_rowi_vs_all1(df).iloc[0,2]) == 0, "Uncorrelated elements not handled properly"
assert int(du.corr_rowi_vs_all1(df).iloc[1,2]) == int(du.corr_rowi_vs_all1(df).iloc[2,1]), "Data not appended properly"
return
test_corr_rowi_vs_all1()
Explanation: 3. corr_rowi_vs_all1( df ):
End of explanation
def test_corr_rowi_vs_all2():
df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
assert type(du.corr_rowi_vs_all2(df,2).iloc[0,1]) == str, "Diagonal elements not handled properly"
assert int(du.corr_rowi_vs_all2(df,2).iloc[0,0]) == -1, "Anticorrelated elements not handled properly"
assert int(du.corr_rowi_vs_all2(df,2).iloc[0,2]) == 0, "Uncorrelated elements not handled properly"
return
test_corr_rowi_vs_all2()
Explanation: 4. corr_rowi_vs_all2( df , i ):
End of explanation
import test_df_utils as ts
ts.test_pairwise_correlation()
ts.test_corr_rowi_rowj()
ts.test_corr_rowi_vs_all1()
ts.test_corr_rowi_vs_all2()
Explanation: Try test_df_utils.py:
End of explanation |
13,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2
Step1: Note
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
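As an optional sanity check that the import worked, the row counts reported by the COPY commands can be verified through the same connection (a small sketch using the DB-API cursor):
cursor = conn.cursor()
for table in ("udata", "uitem", "uuser"):
    cursor.execute("SELECT count(*) FROM " + table)
    print(table, cursor.fetchone()[0])   # expected: 100000, 1682, 943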
conn.rollback()
Explanation: Note: As I'm operating on Windows I had to change the above connect argument to what you see. Mac users would need to change it back to
conn = pg8000.connect(database="homework2")
I guess ;)
If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
cursor = conn.cursor()
statement = "select movie_title from uitem where horror = 1 and scifi = 1 order by release_date DESC;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "select count(*) from uitem where musical = 1 or childrens = 1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
cursor = conn.cursor()
statement = "select occupation, count(occupation) from uuser group by occupation having count(*) > 50;"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
cursor = conn.cursor()
statement = "select distinct(movie_title) from uitem join udata on uitem.movie_id = udata.item_id where uitem.documentary = 1 and uitem.release_date < '1992-01-01' and udata.rating = 5 order by movie_title;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
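If the full query above feels like a lot at once, here is a bare-bones starting point (purely illustrative): join the two tables first, check that the rows look sensible, and only then layer on the WHERE conditions and DISTINCT() from the hints.
cursor = conn.cursor()
# Minimal join between movies and ratings, limited to a handful of rows for a sanity check.
cursor.execute("select uitem.movie_title, udata.rating from uitem join udata on uitem.movie_id = udata.item_id limit 5;")
for row in cursor:
    print(row[0], row[1])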
cursor = conn.cursor()
statement = "select movie_title, avg(rating) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by uitem.movie_title order by avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = "select movie_title, avg(rating) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by uitem.movie_title having count(udata.rating) > 10 order by avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation |
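One last optional step that the assignment does not require: when you are done querying, close the cursor and the connection so the database session is released cleanly.
# Release the database session once all queries have run.
cursor.close()
conn.close()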
13,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple example on how to use jQAssistant with Python Pandas
I'm a huge fan of the software analysis framework jQAssistant (http
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6
Step6: Step 7 | Python Code:
import py2neo
import pandas as pd
Explanation: A simple example on how to use jQAssistant with Python Pandas
I'm a huge fan of the software analysis framework jQAssistant (http://www.jqassistant.org). It's a great tool for scanning and validating various software artifacts (get a glimpse at https://buschmais.github.io/spring-petclinic/). But I also love Python Pandas (http://pandas.pydata.org) as a powerful tool in combination with Jupyter notebooks (http://jupyter.org/) for reproducible Software Analytics (https://en.wikipedia.org/wiki/Software_analytics).
Combining these tools is an obvious next step, so I've created a quick demonstration for a "first contact" :-)
Step 0: Preliminary work
For this quick example, I use the jQAssistant example project (https://www.github.com/buschmais/spring-petclinic/), which is based on the famous Spring PetClinic project. The authors of jQAssistant added a few validation rules, and thanks to the clever Maven integration, jQAssistant works out of the box. If you want to do the same analysis, just clone the project and execute a <tt>mvn clean install</tt>. jQAssistant will then scan the software artifacts and store various data about their structure in the embedded graph database Neo4j (https://neo4j.com). After this command, start the Neo4j database instance with <tt>mvn jqassistant:server</tt>. Optional: Check out http://localhost:7474 to access the Neo4j database directly.
Step 1: The imports
Nothing spectacular here. We use the py2neo Neo4j connector (http://py2neo.org) to access the underlying Neo4j database instance that jQAssistant brings along. Just install the connector with a <tt>pip install py2neo</tt>. We also import Pandas with a nice short name.
End of explanation
graph = py2neo.Graph()
Explanation: Step 2: Connecting to jQAssistant's embedded neo4j database
The embedded Neo4j installation comes with the standard configuration for port, username, password and an open HTTP port for accessing the database via web services. So there is no need to configure py2neo's connection at all. We just create a Graph object for later usage.
End of explanation
query = "MATCH (a:Method) RETURN a"
result = graph.data(query)
result[0:3]
Explanation: Step 3: Executing Cypher queries
For this demonstration, we simply list all the methods that are stored in our database (and marked by the label "Method"). As an example analysis, we would like to know whether our application consists only of getters and setters or also contains some real business methods. Our query is written in Neo4j's graph query language Cypher (https://neo4j.com/developer/cypher-query-language/) and returns some values (only the first three are displayed).
End of explanation
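As a side note of my own, the same graph can answer plenty of other questions. Assuming the jQAssistant Java plugin's usual (:Type)-[:DECLARES]->(:Method) relationship and the fqn property on types (both assumptions on my part), a query like the following would list the classes that declare the most methods:
# Hypothetical follow-up query: classes with the most declared methods.
per_type_query = "MATCH (t:Type)-[:DECLARES]->(m:Method) RETURN t.fqn AS type, count(m) AS methods ORDER BY methods DESC LIMIT 5"
graph.data(per_type_query)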
df = pd.DataFrame.from_dict([data['a'] for data in result]).dropna(subset=['name'])
df.head()
Explanation: Step 4: Creating a Pandas DataFrame
For the following analysis, we iterate through the dictionary that we received from the Neo4j database. We don't need the <tt>"a"</tt> keys that were returned, only the corresponding values; this is accomplished via a Python list comprehension. We also want to avoid NaN values in the 'name' column, so we simply drop all empty entries there. We end up with a nice, fully filled DataFrame (only the first five rows are displayed).
End of explanation
# filter out all the constructor "methods"
df = df[df['name'] != "<init>"]
# assumption 1: getter start with "get"
df.loc[df['name'].str.startswith("get"), "method_type"] = "Getter"
# assumption 2: "is" is just the same as a getter, just for boolean values
df.loc[df['name'].str.startswith("is"), "method_type"] = "Getter"
# assumption 3: setter start with "set"
df.loc[df['name'].str.startswith("set"), "method_type"] = "Setter"
# assumption 4: all other methods are "Business Methods"
df['method_type'] = df['method_type'].fillna('Business Methods')
df[['name', 'signature', 'visibility', 'method_type']][20:30]
Explanation: Step 5: The analysis
Next we simply work on the <tt>"name"</tt> column to retrieve some information we need for our analysis. In the code, we document our assumptions / heuristics for retrieving the getters and setters (just a subset is displayed for layout reasons).
End of explanation
grouped_data = df.groupby('method_type').count()['name']
grouped_data
Explanation: Step 6: Preparing the output
Now we group the data by method type. We simply count the occurrences of each entry and take only the 'name' column for further analysis.
End of explanation
import matplotlib.pyplot as plt
# some configuration for displaying nice diagrams directly in the notebook
%matplotlib inline
plt.style.use('fivethirtyeight')
# apply additional style for getting a blank background
plt.style.use('seaborn-white')
# plot a nice business people compatible pie chart
ax = grouped_data.plot(kind='pie', figsize=(5,5), title="Business methods or just Getters or Setters?")
# get rid of the distracting label for the y-axis
ax.set_ylabel("")
Explanation: Step 7: Visualization
Up to now, we could have done most of the work directly in the Neo4j database. But what we want is a nice little diagram to display our results. We use matplotlib (http://matplotlib.org), which integrates very well with Pandas' DataFrame.
End of explanation |
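If you want to reuse the chart outside the notebook, say in a report or a README, you can save it to disk with plain matplotlib; the file name below is just an example.
# Persist the pie chart as a PNG next to the notebook.
fig = ax.get_figure()
fig.savefig("getter_setter_ratio.png", bbox_inches="tight")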
13,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
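When the document is complete and reviewed, the status can be switched to publish; the call is shown commented out here because this notebook is only a sandbox example.
# Uncomment once the document is ready to be published (0 = do not publish, 1 = publish).
# DOC.set_publication_status(1)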
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
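For illustration only: a filled-in cell for this property repeats the property id and passes the free-text overview to DOC.set_value. The description below is an invented placeholder, not NIWA's actual model overview.
# Purely illustrative -- replace the placeholder text with the real model overview.
DOC.set_id('cmip6.ocean.key_properties.model_overview')
DOC.set_value("Placeholder overview of the ocean component; to be written by the modelling group.")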
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
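For ENUM properties such as this one, the value must be one of the listed choices, copied verbatim. Purely as an illustration (not NIWA's actual answer), selecting the TEOS 2010 option would look like this:
# Illustrative only: the value string must match one of the Valid Choices above exactly.
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
DOC.set_value("TEOS 2010")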
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
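BOOLEAN properties such as this one simply take True or False. The value below is a hypothetical illustration, not NIWA's actual answer:
# Illustrative only -- the answer depends on the actual model configuration.
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
DOC.set_value(True)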
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
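# EXAMPLE (hypothetical illustration only; pick the choice from the list above
# that matches your model):
#     DOC.set_value("Turbulent closure - TKE")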
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
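# EXAMPLE (hypothetical value; only needed when a turbulent closure is used):
#     DOC.set_value(2.5)    # order of the turbulence closure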
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for tracers (i.e. is it NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for momentum (i.e. is it NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
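# EXAMPLE (hypothetical illustration only; pick the choice from the list above
# that matches your model):
#     DOC.set_value("Non-linear implicit")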
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
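# EXAMPLE (hypothetical free-text entry, for illustration only):
#     DOC.set_value("No specific parameterisation of sill overflows")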
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
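# EXAMPLE (hypothetical illustration only; pick the choice from the list above
# that matches your model):
#     DOC.set_value("Freshwater flux")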
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
13,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
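# EXAMPLE (hypothetical name and address, for illustration only):
#     DOC.set_author("Jane Doe", "jane.doe@example.org")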
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
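# EXAMPLE (hypothetical name and address, for illustration only):
#     DOC.set_contributor("John Smith", "john.smith@example.org")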
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
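# EXAMPLE (hypothetical illustration only; this is a 1.N property, so repeat the call
# once per applicable domain, assuming repeated DOC.set_value calls accumulate values):
#     DOC.set_value("troposhere")
#     DOC.set_value("stratosphere")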
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
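# EXAMPLE (hypothetical value, for illustration only):
#     DOC.set_value(60)    # number of advected chemical tracers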
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of atmospheric chemistry code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
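# EXAMPLE (hypothetical illustration only; pick the choice from the list above
# that matches your model):
#     DOC.set_value("Operator splitting")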
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
If the operator splitting method is used, is the order of the split operators alternated between timesteps?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
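# EXAMPLE (hypothetical value, for illustration only):
#     DOC.set_value(1800)    # chemistry timestep in seconds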
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
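# EXAMPLE (hypothetical illustration only; repeat the call once per applicable source,
# assuming repeated DOC.set_value calls accumulate values):
#     DOC.set_value("Anthropogenic")
#     DOC.set_value("Biomass burning")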
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
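# EXAMPLE (hypothetical value, for illustration only):
#     DOC.set_value(150)    # count of bi-molecular reactions in the gas phase scheme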
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
13,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in
Step1: We will train the classifier on all left visual vs auditory trials
and test on all right visual vs auditory trials.
Step2: Score on the epochs where the stimulus was presented to the right.
Step3: Plot | Python Code:
# Authors: Jean-Remi King <[email protected]>
# Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
events_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim, verbose='error')
Explanation: Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in :footcite:KingDehaene2014. It
illustrates how one can
fit a linear classifier to identify a discriminatory topography at a given time
instant and subsequently assess whether this linear model can accurately
predict all of the time samples of a second set of conditions.
End of explanation
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=None,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2)
Explanation: We will train the classifier on all left visual vs auditory trials
and test on all right visual vs auditory trials.
End of explanation
scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2)
Explanation: Score on the epochs where the stimulus was presented to the right.
End of explanation
fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show()
Explanation: Plot
End of explanation |
13,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing surrogate models
Tim Head, July 2016.
Step1: Bayesian optimization or sequential model-based optimization uses a surrogate model
to model the expensive to evaluate function func. There are several choices
for what kind of surrogate model to use. This notebook compares the performance of
Step2: This shows the value of the two-dimensional branin function and the three minima.
Objective
The objective of this example is to find one of these minima in as few iterations
as possible. One iteration is defined as one call to the branin function.
We will evaluate each model several times using a different seed for the
random number generator. Then compare the average performance of these
models. This makes the comparison more robust against models that get
"lucky".
Step3: Note that this can take a few minutes. | Python Code:
import numpy as np
np.random.seed(123)
%matplotlib inline
import matplotlib.pyplot as plt
plt.set_cmap("viridis")
Explanation: Comparing surrogate models
Tim Head, July 2016.
End of explanation
from skopt.benchmarks import branin as _branin
def branin(x, noise_level=0.):
return _branin(x) + noise_level * np.random.randn()
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()))
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plot_branin()
Explanation: Bayesian optimization or sequential model-based optimization uses a surrogate model
to model the expensive to evaluate function func. There are several choices
for what kind of surrogate model to use. This notebook compares the performance of:
gaussian processes,
extra trees, and
random forests
as surrogate models. A purely random optimization strategy is also used as a baseline.
Toy model
We will use the branin function as toy model for the expensive function. In
a real world application this function would be unknown and expensive to evaluate.
End of explanation
from functools import partial
from skopt import gp_minimize, forest_minimize, dummy_minimize
func = partial(branin, noise_level=2.0)
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 60
def run(minimizer, n_iter=5):
return [minimizer(func, bounds, n_calls=n_calls, random_state=n)
for n in range(n_iter)]
# Random search
dummy_res = run(dummy_minimize)
# Gaussian processes
gp_res = run(gp_minimize)
# Random forest
rf_res = run(partial(forest_minimize, base_estimator="RF"))
# Extra trees
et_res = run(partial(forest_minimize, base_estimator="ET"))
Explanation: This shows the value of the two-dimensional branin function and the three minima.
Objective
The objective of this example is to find one of these minima in as few iterations
as possible. One iteration is defined as one call to the branin function.
We will evaluate each model several times using a different seed for the
random number generator. Then compare the average performance of these
models. This makes the comparison more robust against models that get
"lucky".
End of explanation
from skopt.plots import plot_convergence
plot = plot_convergence(("dummy_minimize", dummy_res),
("gp_minimize", gp_res),
("forest_minimize('rf')", rf_res),
("forest_minimize('et)", et_res),
true_minimum=0.397887, yscale="log")
plot.legend(loc="best", prop={'size': 6}, numpoints=1);
Explanation: Note that this can take a few minutes.
End of explanation |
13,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
travelTime Analysis
Different analyses of data collected using https
Step1: Load data
Step2: Convert the unix timestamp to a datetime object
Step3: Add a new column with the duration in hours
Step4: Let's have a quick visualization
Step5: Week by Week plots
Identify weeks in the dataset and plot them
Step6: Day plots
Pick a day to compare across weeks
Step7: Peak/valley detection
Detect highs and lows | Python Code:
%matplotlib inline
import pandas as pd, matplotlib.pyplot as plt, matplotlib.dates as dates, math
from datetime import datetime
from utils import find_weeks, find_days # custom
from pytz import timezone
from detect_peaks import detect_peaks
from ipywidgets import interact, interactive, fixed, interact_manual
Explanation: travelTime Analysis
Different analyses of data collected using https://github.com/amadeuspzs/travelTime/blob/master/travelTime.py
End of explanation
filename = 'data/home-montauk.csv'
tz = timezone('US/Eastern')
data = pd.read_csv(filename)
data.head(5)
Explanation: Load data
End of explanation
data.Timestamp=data.apply(lambda row: datetime.fromtimestamp(int(row['Timestamp']),tz),axis=1)
data.head(5)
Explanation: Convert the unix timestamp to a datetime object:
End of explanation
data['Duration(h)']=data.apply(lambda row: float(row['Duration(s)'])/(60*60),axis=1)
data.head(5)
Explanation: Add a new column with the duration in hours
End of explanation
ax = data.plot(x='Timestamp',y='Duration(h)')
Explanation: Let's have a quick visualization:
End of explanation
weeks = find_weeks(data)
num_cols = 2
num_rows = int(math.ceil(len(weeks) / float(num_cols)))
ylim = [min([min(data[week[0]:week[1]+1]['Duration(h)']) for week in weeks]),
max([max(data[week[0]:week[1]+1]['Duration(h)']) for week in weeks])]
plt.figure(1,figsize=(14, 7))
for e, week in enumerate(weeks):
ax = plt.subplot(num_rows,num_cols,e+1)
data[week[0]:week[1]].plot(x='Timestamp',y='Duration(h)',ax=ax)
ax.grid()
ax.set_ylim(ylim)
plt.tight_layout()
Explanation: Week by Week plots
Identify weeks in the dataset and plot them:
End of explanation
days = find_days(data,5) #Friday
num_cols = 3
num_rows = int(math.ceil(len(weeks) / float(num_cols)))
ylim = [min([min(data[day[0]:day[1]+1]['Duration(h)']) for day in days]),
max([max(data[day[0]:day[1]+1]['Duration(h)']) for day in days])]
plt.figure(1,figsize=(14, 7))
for e, day in enumerate(days):
ax = plt.subplot(num_rows,num_cols,e+1)
data[day[0]:day[1]].plot(x='Timestamp',y='Duration(h)',ax=ax)
ax.xaxis.set_major_formatter(dates.DateFormatter('%H',tz))
ax.xaxis.set_major_locator(dates.HourLocator(interval=1))
ax.grid()
ax.set_ylim(ylim)
plt.tight_layout()
Explanation: Day plots
Pick a day to compare across weeks:
End of explanation
week = find_weeks(data)[2] # choose one week
week_data = data[week[0]:week[1]+1]
@interact(mpd=50,mph=1.0)
def peaks(mpd, mph):
indexes = detect_peaks(week_data['Duration(h)'],mpd=mpd,mph=mph,show=True)
for index in indexes:
        print(week_data.iloc[[index]].Timestamp.dt.strftime("%a %H:%M").values[0])
@interact(mpd=130)
def peaks(mpd):
indexes = detect_peaks(week_data['Duration(h)'],valley=True,mpd=mpd,show=True)
for index in indexes:
        print(week_data.iloc[[index]].Timestamp.dt.strftime("%a %H:%M").values[0])
Explanation: Peak/valley detection
Detect highs and lows
End of explanation |
13,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evolution of sequence, structure and dynamics with Evol and SignDy
This tutorial has two parts, focusing on two related parts of ProDy for studying evolution
Step1: We also configure ProDy to put all the PDB files in a particular folder seeing as there are so many of them.
Step2: 1. Sequence evolution with Evol
Fetching, parsing and refining MSAs from Pfam
The protein families database Pfam provides multiple sequence alignments of related protein domains, which we are often used as starting points for sequence evolution analyses. We can fetch such MSAs using the function fetchPfamMSA as follows
Step3: We can then parse the MSA into ProDy using the parseMSA function, which can handle various types of MSA files including Stockholm, SELEX, CLUSTAL, PIR and FASTA formats.
Step4: This alignment can be indexed to extract individual sequences (rows) and residue positions (columns)
Step5: This alignment contains many redundant sequences as well as lots of rows and columns with large numbers of gaps. Therefore, we refine it using refineMSA, which we can do based on the sequence of RNAS1_BOVIN
Step6: Measuring sequence conservation with Shannon entropy
We calculate use calcShannonEntropy to calculate the entropy of the refined MSA, which is a measure of sequence variability.
Shannon's entropy measures the degree of uncertainty that exists in a system. In the case of multiple sequence alignments, the Shannon entropy of each protein site (column) can be computed according to
Step7: We can also show the Shannon entropy on a bar chart
Step8: Comparisons of sequence evolution and structural dynamics
Next, we obtain residue fluctuations or mobility for a protein member of the above family using the GNM.
We will use chain B of PDB structure 2W5I, which corresponds to our reference sequence RNAS1_BOVIN.
Step9: The next step is to select the corresponding residues from the AtomGroup to match the sequence alignment. We can identify these using alignSequenceToMSA. We give it the Calpha atoms only so the residue numbers aren't repeated.
Step10: We see that there are extra residues in the PDB sequence compared to the reference sequence so we identify their residue numbers to make a selection.
Step11: They are numbered from 1 to 124, two residues are missing from the beginning, and three residues are missing from the end, so we select residues 3 to 121. This now makes the two sequences match.
Step12: We perform GNM analysis as follows
Step13: We can then visually compare the behaviour at the individual residue level as follows
Step14: Coevolution Calculation
In addition to the conservation/variation of individual positions, we can also calculate the coevolution between positions due to correlated mutations.
One simple and common method for this is to compute the mutual information between the columns in the MSA
Step15: We can improve this with the widely used average product correction
Step16: We can change the colour scale normalisation to eliminate the effect of the diagonal. However, the mutual information matrix is still pretty noisy.
Step17: Therefore, more sophisticated analyses have also been developed including the Direct Information (DI; also known as direct coupling analysis (DCA), which is very successful for contact prediction. This method can also be used in ProDy as follows
Step18: If we compare the brighter regions on this map to the contact matrix then we see that they indeed match pretty well
Step19: We can also apply a rank-ordering to the DI and corrected MI matrix entries, which helps identify the strongest signals
Step20: 2. Signature Dynamics analysis with SignDy
This tutorial describes how to calculate signature dynamics for a family of proteins with similar structures using Elastic Network Models (ENMs). This method (also called ensemble normal mode analysis) creates an ensemble of aligned structures and calculates statistics such as means and standard deviations on various dynamic properties including mode profiles, mean square fluctuations and cross-correlation matrices. It also includes tools for classifying family members based on their sequence, structure and dynamics.
The theory and usage of this toolkit is described in our recent paper
Step21: Overview
The first step in signature dynamics analysis is to collect a set of related protein structures and build a PDBEnsemble. This can be achieved by multiple routes
Step22: The Dali search often remains in the queue longer than the timeout time. We therefore have a fetch method, which can be run later to fetch the data. We can run this in a loop with a wait of a couple of minutes in between fetches to make sure we get the result.
Step23: Next, we get the lists of PDB IDs and mappings from dali_rec, and parse the pdb_ids to get a list of AtomGroup instances
Step24: Then we provide ags together with mappings to buildPDBEnsemble. We set the keyword argument seqid=20 to account for the low sequence identity between some of the structures.
Step25: Finally, we save the ensemble for later processing
Step26: Step 2
Step27: Then we calculate GNM modes for each member of the ensemble using calcEnsembleENMs. There are options to select the model (GNM by default) and the way of considering non-aligned residues by setting the trim option (default is reduceModel, which treats them as environment).
Step28: We can save the mode ensemble as follows
Step29: We can also load in a previously saved mode ensemble such as the one we saved above
Step30: Slicing and Indexing Mode Ensembles
We can index the ModeEnsemble object in two different dimensions. The first dimension corresponds to ensemble members as shown below for extracting the mode set for the first member (numbered 0).
Step31: The second dimension corresponds to particular modes of all ensemble members as shown below for extracting the first mode (numbered 0). The colon means we select everything from the first dimension.
Step32: We can also slice out ranges of members and modes and index them both at the same time. E.g. to get the five members from 5 up to but not including 10 (5, 6, 7, 8, 9), and the two modes from 2 up to but not including 4 (modes with indices 2 and 3 in the reference), we'd use the following code.
Step33: We can also use indexing to extract individual modes from individual members, e.g.
Step34: Remember that we usually talk about modes counting from 1 so this is "Mode 3" or "the 3rd global mode" in conversation but Python counts from 0 so it has index 2. Likewise this is the "6th member" of the ensemble but has index 5.
Step 3
Step35: We can also show such results for properties involving multiple modes such as the mean square fluctuations from the first 5 modes or the cross-correlations from the first 20.
Step36: We can also look at distributions over values across different members of the ensemble such as inverse eigenvalue. We can show a bar above this with individual members labelled
like in
Krieger J, Bahar I, Greger IH.
Structure, Dynamics, and Allosteric Potential of Ionotropic Glutamate Receptor N-Terminal Domains. Biophys. J. 2015 109(6)
Step37: We plot the variance bar for the first five modes (showing a function of the inverse eigenvalues related to the resultant relative size of motion) above the inverse eigenvalue distributions for each of those modes. To arrange the plots like this, we use the GridSpec function of Matplotlib.
Step38: We can also extract the eigenvalues and eigenvectors directly from the mode ensemble and analyse them ourselves
Step39: These are stored in instances of the sdarray class that we designed specifically for signature dynamics analysis. It is an extension of the standard NumPy ndarray but has additional attributes and some modified methods. The first axis is reserved for ensemble members and the mean, min, max and std are altered to average over this dimension rather than all dimensions.
We can look at the shape of these arrays and index them just like ndarray and ModeEnsemble objects. The eigenvalues are arranged in eigvals such that the first axis is the members and the second is the modes as in the mode ensemble.
Step40: The eigenvectors are arranged in eigvecs such that the first axis is over the members, and the remaining dimensions are as in other eigenvector arrays - the second is over atoms and the third is mode index. Each atom has a weight, which varies between members and is important in calculating the mean, std, etc.
Step41: Step 4
Step42: We can also obtain a spectral distance matrix (sd_matrix) from calcEnsembleSpectralOverlaps by giving it an additional argument
Step43: We can then use this distance to calculate a tree. The labels from the mode ensemble as used as names for the leaves of the tree and are stored in their own variable/object for later use.
Step44: We can show this tree using the function showTree
Step45: We can also use this tree to reorder the so_matrix and obtain indices for reordering other objects
Step46: As in the tree, we see 2-3 clusters with some finer structure within them as in the tree. These correspond to different subtypes of iGluRs called AMPA receptors (subunit paralogues GluA1-4, top) and kainate receptors (subunit paralogues GluK1-5, bottom) based on their preferred agonists as well as delta receptors at the bottom (these are flipped relative to the tree).
To show the matrix in the same order as the tree, we can add the option origin='upper'
Step47: We can also show the tree along the y-axis of the matrix as follows
Step48: We can also use the resulting indices to reorder the ModeEnsemble and PDBEnsemble
Step49: Lists can only be used for indexing arrays not lists so we need to perform a type conversion prior to indexing in order to reorder the labels
Step50: Comparing with sequence and structural distances
The sequence distance is given by the (normalized) Hamming distance, which is
calculated by subtracting the percentage identity (fraction) from 1, and the
structural distance is the RMSD. We can also calculate and show the matrices
and trees for these from the PDB ensemble.
First we calculate the sequence distance matrix
Step51: We can also construct a tree based on seqdist_matrix and use that to reorder it
Step52: We can reorder seqdist_matrix with seqdist_tree as we did above with so_tree
Step53: This shows us even clearer groups than the dynamic spectrum-based analysis. We see one subunit by itself at the bottom that is from a delta-type iGluR (GluD2), then two groups of kainate receptors (GluK5 and GluK2 with GluK3), and four groups of AMPARs (GluA1, GluA2, GluA4, and many structures from GluA3).
Similarily, once we obtain the RMSD matrix and tree using the getRMSDs method of the PDBEnsemble, we
can calculate the structure-based tree
Step54: It could be of interest to put all three trees constructed based on different
distance metrics side by side and compare them. We can do this using the subplot function from Matplotlib.
Step55: Likewise, we can place the matrices side-by-side after having them all reordered the same way. We'll reorder by seqdist in this example
Step56: This analysis is quite sensitive to how many modes are used. As the number of modes approaches the full number,
the dynamic distance order approaches the RMSD order. With smaller numbers, we see finer distinctions and there is a point where the dynamic distances are more in line with the sequence distances, which we call the low-to-intermediate frequency regime. In the current case where we used just one global mode (with the lowest frequency), we see small spectral distances but some subfamily differentiation is still apparent.
The same analysis could also be performed with a larger ensemble by selecting lower sequence identity and Z-score cutoffs as we did in our paper.
Now we have finished this tutorial, we reset the default path to the PDB folder, so that we aren't surprised next time we download PDBs and can't find them | Python Code:
from prody import *
from pylab import *
%matplotlib inline
confProDy(auto_show=False)
Explanation: Evolution of sequence, structure and dynamics with Evol and SignDy
This tutorial has two parts, focusing on two related parts of ProDy for studying evolution:
The sequence sub-package Evol is for fetching, parsing and refining multiple sequence alignments (MSAs), and calculating residue-level properties such as conservation and coevolution as well as sequence-level properties such as percentage identity.
The signature dynamics module SignDy calculates ENM normal modes for ensembles of related protein structures and evaluates the conservation and differentiation of signature dynamics across families and subfamilies. It also allows classification of ensemble/family members based upon their dynamics, allowing the evolution of protein dynamics to be compared with the evolution of sequence and structure.
We first make the required imports:
End of explanation
pathPDBFolder('./pdbs/')
Explanation: We also configure ProDy to put all the PDB files in a particular folder seeing as there are so many of them.
End of explanation
filename = fetchPfamMSA('PF00074')
filename
Explanation: 1. Sequence evolution with Evol
Fetching, parsing and refining MSAs from Pfam
The protein families database Pfam provides multiple sequence alignments of related protein domains, which are often used as starting points for sequence evolution analyses. We can fetch such MSAs using the function fetchPfamMSA as follows:
End of explanation
msa = parseMSA(filename)
msa
Explanation: We can then parse the MSA into ProDy using the parseMSA function, which can handle various types of MSA files including Stockholm, SELEX, CLUSTAL, PIR and FASTA formats.
End of explanation
msa[:10,:10]
seq0 = msa[0]
seq0
str(seq0)
Explanation: This alignment can be indexed to extract individual sequences (rows) and residue positions (columns):
End of explanation
msa_refined = refineMSA(msa, label='RNAS1_BOVIN', rowocc=0.8, seqid=0.98)
msa_refined
Explanation: This alignment contains many redundant sequences as well as lots of rows and columns with large numbers of gaps. Therefore, we refine it using refineMSA, which we can do based on the sequence of RNAS1_BOVIN:
End of explanation
entropy = calcShannonEntropy(msa_refined)
Explanation: Measuring sequence conservation with Shannon entropy
We use calcShannonEntropy to calculate the entropy of the refined MSA, which is a measure of sequence variability.
Shannon's entropy measures the degree of uncertainty that exists in a system. In the case of multiple sequence alignments, the Shannon entropy of each protein site (column) can be computed according to:
$$H(p_1, p_2, \ldots, p_n) = -\sum_{i=1}^n p_i \log_2 p_i $$
where $p_i$ is the frequency of amino acid $i$ in that site. If a column is completely conserved then Shannon entropy is 0. The maximum variability, where each amino acid occurs with frequency 1/20, yields an entropy of 4.32 bits.
End of explanation
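# A minimal, plain-NumPy sketch of the per-column Shannon entropy defined above.
# `column` is assumed to be a 1-D array (or list) of single-letter residue codes;
# this illustrative helper is not part of ProDy's API, and ProDy's own
# calcShannonEntropy may treat gaps and ambiguous residues differently.
def column_entropy(column):
    residues, counts = np.unique(np.asarray(column), return_counts=True)
    freqs = counts / counts.sum()
    return -np.sum(freqs * np.log2(freqs))

# sanity check of the value quoted above: 20 equally likely amino acids
# give log2(20), i.e. about 4.32 bits
column_entropy(list('ACDEFGHIKLMNPQRSTVWY'))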
showShannonEntropy(msa_refined);
Explanation: We can also show the Shannon entropy on a bar chart:
End of explanation
ag = parsePDB('2W5I', chain='B')
ag
Explanation: Comparisons of sequence evolution and structural dynamics
Next, we obtain residue fluctuations or mobility for a protein member of the above family using the GNM.
We will use chain B of PDB structure 2W5I, which corresponds to our reference sequence RNAS1_BOVIN.
End of explanation
aln, idx_1, idx_2 = alignSequenceToMSA(ag.ca, msa_refined, label='RNAS1_BOVIN')
showAlignment(aln, indices=[idx_1, idx_2])
Explanation: The next step is to select the corresponding residues from the AtomGroup to match the sequence alignment. We can identify these using alignSequenceToMSA. We give it the Calpha atoms only so the residue numbers aren't repeated.
End of explanation
print(ag.ca.getResnums())
Explanation: We see that there are extra residues in the PDB sequence compared to the reference sequence so we identify their residue numbers to make a selection.
End of explanation
chB = ag.select('resid 3 to 121')
chB
print(msa_refined['RNAS1_BOVIN'])
print(chB.ca.getSequence())
Explanation: They are numbered from 1 to 124, two residues are missing from the beginning, and three residues are missing from the end, so we select residues 3 to 121. This now makes the two sequences match.
End of explanation
gnm = GNM('2W5I')
gnm.buildKirchhoff(chB.ca)
gnm.calcModes(n_modes=None) # calculate all modes
Explanation: We perform GNM analysis as follows:
End of explanation
mobility = calcSqFlucts(gnm)
figure(figsize=(13,6))
# plot entropy as grey bars
bar(chB.ca.getResnums(), entropy, width=1.2, color='grey', label='entropy');
# rescale mobility
mobility = mobility*(max(entropy)/max(mobility))
# plot mobility as a blue line
showAtomicLines(mobility, atoms=chB.ca, color='b', linewidth=2, label='mobility');
legend()
Explanation: We can then visually compare the behaviour at the individual residue level as follows:
End of explanation
mutinfo = buildMutinfoMatrix(msa_refined)
showMutinfoMatrix(msa_refined, cmap='inferno');
title(None);
Explanation: Coevolution Calculation
In addition to the conservation/variation of individual positions, we can also calculate the coevolution between positions due to correlated mutations.
One simple and common method for this is to compute the mutual information between the columns in the MSA:
End of explanation
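# A minimal, plain-NumPy sketch of the mutual information between two MSA
# columns, each given as a 1-D array of single-letter residue codes. This
# illustrative helper is not part of ProDy's API and is not necessarily
# identical to buildMutinfoMatrix, which may handle gaps and normalisation
# differently.
def column_mutual_information(col_i, col_j):
    col_i, col_j = np.asarray(col_i), np.asarray(col_j)
    mi = 0.0
    for a in np.unique(col_i):
        p_a = np.mean(col_i == a)
        for b in np.unique(col_j):
            p_b = np.mean(col_j == b)
            p_ab = np.mean((col_i == a) & (col_j == b))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi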
mi_apc = applyMutinfoCorr(mutinfo)
showMatrix(mi_apc, cmap='inferno');
Explanation: We can improve this with the widely used average product correction:
End of explanation
showMatrix(mi_apc, cmap='inferno', norm=Normalize(0, 0.5));
Explanation: We can change the colour scale normalisation to eliminate the effect of the diagonal. However, the mutual information matrix is still pretty noisy.
End of explanation
di = buildDirectInfoMatrix(msa_refined)
showDirectInfoMatrix(msa_refined, cmap='inferno');
title(None);
Explanation: Therefore, more sophisticated analyses have also been developed, including the Direct Information (DI), also known as direct coupling analysis (DCA), which is very successful for contact prediction. This method can also be used in ProDy as follows:
End of explanation
showContactMap(gnm, origin='lower', cmap='Greys');
Explanation: If we compare the brighter regions on this map to the contact matrix then we see that they indeed match pretty well:
End of explanation
di_rank_row, di_rank_col, di_zscore_sort = calcRankorder(di, zscore=True)
print('row: ', di_rank_row[:5])
print('column:', di_rank_col[:5])
mi_rank_row, mi_rank_col, mi_zscore_sort = calcRankorder(mi_apc, zscore=True)
print('row: ', mi_rank_row[:5])
print('column:', mi_rank_col[:5])
Explanation: We can also apply a rank-ordering to the DI and corrected MI matrix entries, which helps identify the strongest signals:
End of explanation
import time
Explanation: 2. Signature Dynamics analysis with SignDy
This tutorial describes how to calculate signature dynamics for a family of proteins with similar structures using Elastic Network Models (ENMs). This method (also called ensemble normal mode analysis) creates an ensemble of aligned structures and calculates statistics such as means and standard deviations on various dynamic properties including mode profiles, mean square fluctuations and cross-correlation matrices. It also includes tools for classifying family members based on their sequence, structure and dynamics.
The theory and usage of this toolkit is described in our recent paper:
Zhang S, Li H, Krieger J, Bahar I.
Shared signature dynamics tempered by local fluctuations enables fold adaptability and specificity. Mol. Biol. Evol. 2019 36(9):2053–2068
In this tutorial, we will have a quick walk-through on the SignDy calculations and functions using the example of type-I
periplasmic binding protein (PBP-I) domains. The data is collected using the Dali server (http://ekhidna2.biocenter.helsinki.fi/dali/).
Holm L, Rosenström P.
Dali server: conservation mapping in 3D.
Nucleic Acids Res. 2010 10(38):W545-9
In addition to the previous imports, we also import time so that we can use the sleep function to reduce the load on the Dali server.
End of explanation
dali_rec = searchDali('3H5V','A')
dali_rec
Explanation: Overview
The first step in signature dynamics analysis is to collect a set of related protein structures and build a PDBEnsemble. This can be achieved by multiple routes: a query search of the PDB using blastPDB or Dali, extraction of PDB IDs from the Pfam database (as above) or the CATH database, or input of a pre-defined list.
We demonstrate the Dali method here in the first part of the tutorial. The usage of CATH methods is described in the website tutorial and the function blastPDB is described in the Structure Analysis Tutorial.
We apply these methods to the PBP-I domains, a group of protein structures originally found in bacteria for transport of solutes across the periplasmic space and later seen in various eukaryotic receptors including ionotropic and metabotropic glutamate receptors. We use the N-terminal domain of AMPA receptor subunit GluA2 (gene name GRIA2; https://www.uniprot.org/uniprot/P42262) as a query.
The second step is then to calculate ENM normal modes for all members of the PDBEnsemble, creating a ModeEnsemble. We usually use the GNM for this as will be shown here, but the ANM can be used too.
The third step is then to analyse conserved and divergent behaviours to identify signature dynamics of the whole family or individual subfamilies. This is aided by calculations of overlaps and distances between the mode spectra (step 4), which can be used to create phylogenetic trees that can be compared to sequence and structural conservation and divergence.
Step 1: Prepare Ensemble (using Dali)
First we use the function searchDali to search the PDB with Dali, which returns a DaliRecord object that contains a list of PDB IDs and their corresponding mappings to the reference structure.
End of explanation
while not dali_rec.isSuccess:
dali_rec.fetch()
time.sleep(120)
dali_rec
Explanation: The Dali search often remains in the queue longer than the timeout time. We therefore have a fetch method, which can be run later to fetch the data. We can run this in a loop with a wait of a couple of minutes in between fetches to make sure we get the result.
End of explanation
pdb_ids = dali_rec.filter(cutoff_len=0.7, cutoff_rmsd=1.0, cutoff_Z=30)
mappings = dali_rec.getMappings()
ags = parsePDB(pdb_ids, subset='ca')
len(ags)
Explanation: Next, we get the lists of PDB IDs and mappings from dali_rec, and parse the pdb_ids to get a list of AtomGroup instances:
End of explanation
dali_ens = buildPDBEnsemble(ags, mapping=mappings, seqid=20, labels=pdb_ids)
dali_ens
Explanation: Then we provide ags together with mappings to buildPDBEnsemble. We set the keyword argument seqid=20 to account for the low sequence identity between some of the structures.
End of explanation
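# As an aside, the overview above also mentions building an ensemble from a
# pre-defined list of PDB IDs. A hedged sketch of that route is shown below:
# the three IDs are taken from structures that appear elsewhere in this
# tutorial, and we assume buildPDBEnsemble can map the parsed structures onto
# the first one when no explicit mappings are supplied.
manual_ids = ['3H5V', '3O21', '3H6G']
manual_ags = parsePDB(manual_ids, subset='ca')
manual_ens = buildPDBEnsemble(manual_ags, seqid=20, labels=manual_ids)
manual_ens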
saveEnsemble(dali_ens, 'PBP-I')
Explanation: Finally, we save the ensemble for later processing:
End of explanation
dali_ens = loadEnsemble('PBP-I.ens.npz')
Explanation: Step 2: Mode ensemble
For this analysis we'll build a ModeEnsemble by calculating normal modes for each member of the PDBEnsemble.
You can load a PDB ensemble at this stage if you already have one. We demonstrate this for the one we just saved.
End of explanation
gnms = calcEnsembleENMs(dali_ens, model='GNM', trim='reduce')
gnms
Explanation: Then we calculate GNM modes for each member of the ensemble using calcEnsembleENMs. There are options to select the model (GNM by default) and the way of considering non-aligned residues by setting the trim option (default is reduceModel, which treats them as environment).
End of explanation
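# The overview noted that the ANM can be used in place of the GNM. Assuming the
# model keyword accepts 'ANM' (as the text above suggests), the equivalent call
# would be the line below; it is left commented out because the rest of this
# tutorial continues with the GNM-based mode ensemble.
# anms = calcEnsembleENMs(dali_ens, model='ANM', trim='reduce')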
saveModeEnsemble(gnms, 'PBP-I')
Explanation: We can save the mode ensemble as follows:
End of explanation
gnms = loadModeEnsemble('PBP-I.modeens.npz')
Explanation: We can also load in a previously saved mode ensemble such as the one we saved above:
End of explanation
gnms[0]
Explanation: Slicing and Indexing Mode Ensembles
We can index the ModeEnsemble object in two different dimensions. The first dimension corresponds to ensemble members as shown below for extracting the mode set for the first member (numbered 0).
End of explanation
gnms[:,0]
Explanation: The second dimension corresponds to particular modes of all ensemble members as shown below for extracting the first mode (numbered 0). The colon means we select everything from the first dimension.
End of explanation
gnms[5:10,2:4]
Explanation: We can also slice out ranges of members and modes and index them both at the same time. E.g. to get the five members from 5 up to but not including 10 (5, 6, 7, 8, 9), and the two modes from 2 up to but not including 4 (modes with indices 2 and 3 in the reference), we'd use the following code.
End of explanation
gnms[5,2]
Explanation: We can also use indexing to extract individual modes from individual members, e.g.
End of explanation
showSignatureMode(gnms[:, 0]);
Explanation: Remember that we usually talk about modes counting from 1 so this is "Mode 3" or "the 3rd global mode" in conversation but Python counts from 0 so it has index 2. Likewise this is the "6th member" of the ensemble but has index 5.
Step 3: Signature dynamics
Signatures are calculated as the mean and standard deviation of various properties
such as mode shapes and mean square fluctuations.
For example, we can show the average and standard deviation of the shape of the first
mode (second index 0). The first index of the mode ensemble is over conformations.
End of explanation
showSignatureSqFlucts(gnms[:, :5]);
showSignatureCrossCorr(gnms[:, :20]);
Explanation: We can also show such results for properties involving multiple modes such as the mean square fluctuations from the first 5 modes or the cross-correlations from the first 20.
End of explanation
highlights = {'3h5vA': 'GluA2','3o21C': 'GluA3',
'3h6gA': 'GluK2', '3olzA': 'GluK3',
'5kc8A': 'GluD2'}
Explanation: We can also look at distributions over values across different members of the ensemble such as inverse eigenvalue. We can show a bar above this with individual members labelled
like in
Krieger J, Bahar I, Greger IH.
Structure, Dynamics, and Allosteric Potential of Ionotropic Glutamate Receptor N-Terminal Domains. Biophys. J. 2015 109(6):1136-48.
In this automated version, the bar is coloured from white to dark red depending on how many structures have values at that point.
We can select particular members to highlight with arrows by putting their names and labels in a dictionary:
End of explanation
gs = GridSpec(ncols=1, nrows=2, height_ratios=[1, 10], hspace=0.15)
subplot(gs[0]);
showVarianceBar(gnms[:, :5], fraction=True, highlights=highlights);
xlabel('');
subplot(gs[1]);
showSignatureVariances(gnms[:, :5], fraction=True, bins=80, alpha=0.7);
xlabel('Fraction of inverse eigenvalue');
Explanation: We plot the variance bar for the first five modes (showing a function of the inverse eigenvalues related to the resultant relative size of motion) above the inverse eigenvalue distributions for each of those modes. To arrange the plots like this, we use the GridSpec function of Matplotlib.
End of explanation
eigvals = gnms.getEigvals()
eigvals
eigvecs = gnms.getEigvecs()
eigvecs
Explanation: We can also extract the eigenvalues and eigenvectors directly from the mode ensemble and analyse them ourselves:
End of explanation
eigvals.shape
eigvals[0:5,0:5]
Explanation: These are stored in instances of the sdarray class that we designed specifically for signature dynamics analysis. It is an extension of the standard NumPy ndarray but has additional attributes and some modified methods. The first axis is reserved for ensemble members and the mean, min, max and std are altered to average over this dimension rather than all dimensions.
We can look at the shape of these arrays and index them just like ndarray and ModeEnsemble objects. The eigenvalues are arranged in eigvals such that the first axis is the members and the second is the modes as in the mode ensemble.
End of explanation
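# As described above, mean and std on an sdarray are assumed to average over
# the first (member) axis only, so each should return one value per mode
# across the ensemble rather than a single scalar.
mean_eigvals = eigvals.mean()
std_eigvals = eigvals.std()
mean_eigvals.shape, std_eigvals.shape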
eigvecs.shape
Explanation: The eigenvectors are arranged in eigvecs such that the first axis is over the members, and the remaining dimensions are as in other eigenvector arrays - the second is over atoms and the third is mode index. Each atom has a weight, which varies between members and is important in calculating the mean, std, etc.
End of explanation
so_matrix = calcEnsembleSpectralOverlaps(gnms[:, :1])
figure(figsize=(8,8))
showMatrix(so_matrix);
Explanation: Step 4: Spectral overlap and distance
Spectral overlap, also known as covariance overlap, measures the overlap between two covariance matrices, or the overlap of a subset of the modes (a mode spectrum). This can also be converted into a distance using its arccosine as will be shown below.
We can calculate a matrix of spectral overlaps (so_matrix) over any slice of the ModeEnsemble that is still a mode ensemble itself, e.g.
End of explanation
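# The text above states that the spectral distance is the arccosine of the
# spectral overlap; assuming that is exactly the convention used internally,
# this manual check should match the sd_matrix computed just below (the clip
# only guards against floating-point overlaps marginally above 1).
sd_manual = np.arccos(np.clip(so_matrix, -1.0, 1.0))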
sd_matrix = calcEnsembleSpectralOverlaps(gnms[:, :1], distance=True)
figure(figsize=(8,8)); showMatrix(sd_matrix);
Explanation: We can also obtain a spectral distance matrix (sd_matrix) from calcEnsembleSpectralOverlaps by giving it an additional argument:
End of explanation
labels = dali_ens.getLabels()
so_tree = calcTree(names=labels, distance_matrix=sd_matrix, method='upgma')
Explanation: We can then use this distance to calculate a tree. The labels from the mode ensemble are used as names for the leaves of the tree and are stored in their own variable/object for later use.
End of explanation
showTree(so_tree);
Explanation: We can show this tree using the function showTree:
End of explanation
reordered_so, new_so_indices = reorderMatrix(names=labels, matrix=so_matrix, tree=so_tree)
figure(figsize=(8,8))
showMatrix(reordered_so, ticklabels=new_so_indices);
Explanation: We can also use this tree to reorder the so_matrix and obtain indices for reordering other objects:
End of explanation
figure(figsize=(8,8))
showMatrix(reordered_so, ticklabels=new_so_indices, origin='upper');
Explanation: As in the tree, we see 2-3 clusters with some finer structure within them as in the tree. These correspond to different subtypes of iGluRs called AMPA receptors (subunit paralogues GluA1-4, top) and kainate receptors (subunit paralogues GluK1-5, bottom) based on their preferred agonists as well as delta receptors at the bottom (these are flipped relative to the tree).
To show the matrix in the same order as the tree, we can add the option origin='upper':
End of explanation
figure(figsize=(11,8))
showMatrix(reordered_so, ticklabels=new_so_indices, origin='upper',
y_array=so_tree);
Explanation: We can also show the tree along the y-axis of the matrix as follows:
End of explanation
so_reordered_ens = dali_ens[new_so_indices]
so_reordered_gnms = gnms[new_so_indices, :]
Explanation: We can also use the resulting indices to reorder the ModeEnsemble and PDBEnsemble:
End of explanation
so_reordered_labels = np.array(labels)[new_so_indices]
Explanation: Lists can be used to index arrays but not other lists, so we first convert the labels to an array and then index it to reorder them:
End of explanation
seqid_matrix = buildSeqidMatrix(so_reordered_ens.getMSA())
seqdist_matrix = 1. - seqid_matrix
figure(figsize=(8,8));
showMatrix(seqdist_matrix);
Explanation: Comparing with sequence and structural distances
The sequence distance is given by the (normalized) Hamming distance, which is
calculated by subtracting the percentage identity (fraction) from 1, and the
structural distance is the RMSD. We can also calculate and show the matrices
and trees for these from the PDB ensemble.
First we calculate the sequence distance matrix:
End of explanation
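# A plain-Python sketch of the normalized Hamming distance described above,
# for two equal-length aligned sequences given as strings; this illustrative
# helper is not part of ProDy.
def hamming_distance(seq_a, seq_b):
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 1.0 - matches / len(seq_a)

hamming_distance('RNASEQ', 'RNASEK')  # 1 mismatch out of 6 -> about 0.167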
seqdist_tree = calcTree(names=so_reordered_labels, distance_matrix=seqdist_matrix, method='upgma')
showTree(seqdist_tree);
Explanation: We can also construct a tree based on seqdist_matrix and use that to reorder it:
End of explanation
reordered_seqdist_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels,
matrix=seqdist_matrix, tree=seqdist_tree)
figure(figsize=(8,8));
showMatrix(reordered_seqdist_seqdist, ticklabels=new_seqdist_indices);
Explanation: We can reorder seqdist_matrix with seqdist_tree as we did above with so_tree:
End of explanation
rmsd_matrix = so_reordered_ens.getRMSDs(pairwise=True)
figure(figsize=(8,8)); showMatrix(rmsd_matrix);
rmsd_tree = calcTree(names=so_reordered_labels,
distance_matrix=rmsd_matrix,
method='upgma')
Explanation: This shows us even clearer groups than the dynamic spectrum-based analysis. We see one subunit by itself at the bottom that is from a delta-type iGluR (GluD2), then two groups of kainate receptors (GluK5 and GluK2 with GluK3), and four groups of AMPARs (GluA1, GluA2, GluA4, and many structures from GluA3).
Similarly, once we obtain the RMSD matrix and tree using the getRMSDs method of the PDBEnsemble, we
can calculate the structure-based tree:
End of explanation
figure(figsize=(20,8));
subplot(1, 3, 1);
showTree(seqdist_tree, format='plt');
title('Sequence');
subplot(1, 3, 2);
showTree(rmsd_tree, format='plt');
title('Structure');
subplot(1, 3, 3);
showTree(so_tree, format='plt');
title('Dynamics');
Explanation: It could be of interest to put all three trees constructed based on different
distance metrics side by side and compare them. We can do this using the subplot function from Matplotlib.
End of explanation
reordered_rmsd_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels,
matrix=rmsd_matrix, tree=seqdist_tree)
reordered_sd_seqdist, new_seqdist_indices = reorderMatrix(names=so_reordered_labels,
matrix=sd_matrix, tree=seqdist_tree)
figure(figsize=(20,8));
subplot(1, 3, 1);
showMatrix(reordered_seqdist_seqdist, ticklabels=new_seqdist_indices, origin='upper');
title('Sequence');
subplot(1, 3, 2);
showMatrix(reordered_rmsd_seqdist, ticklabels=new_seqdist_indices, origin='upper');
title('Structure');
subplot(1, 3, 3);
showMatrix(reordered_sd_seqdist, ticklabels=new_seqdist_indices, origin='upper');
title('Dynamics');
Explanation: Likewise, we can place the matrices side-by-side after having them all reordered the same way. We'll reorder by seqdist in this example:
End of explanation
pathPDBFolder('')
Explanation: This analysis is quite sensitive to how many modes are used. As the number of modes approaches the full number,
the dynamic distance order approaches the RMSD order. With smaller numbers, we see finer distinctions and there is a point where the dynamic distances are more in line with the sequence distances, which we call the low-to-intermediate frequency regime. In the current case where we used just one global mode (with the lowest frequency), we see small spectral distances but some subfamily differentiation is still apparent.
The same analysis could also be performed with a larger ensemble by selecting lower sequence identity and Z-score cutoffs as we did in our paper.
Now we have finished this tutorial, we reset the default path to the PDB folder, so that we aren't surprised next time we download PDBs and can't find them:
End of explanation |
13,742 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
convert a BGR image to an RGB image
| Python Code::
import cv2
import numpy as np
# `image` is assumed to be an existing BGR image (e.g. loaded with cv2.imread)
array_of_image = np.array(image)
# swap the B and R channels to obtain an RGB image
image_rgb = cv2.cvtColor(array_of_image, cv2.COLOR_BGR2RGB)
# display the result; waitKey is needed for the window to actually render
cv2.imshow('image', image_rgb)
cv2.waitKey(0)
|
13,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-nearest neighbors
Step1: Let's imagine we measure 2 quantities, $x_1$ and $x_2$ for some objects, and we know the classes that these objects belong to, e.g., "star", 0, or "galaxy", 1 (maybe we classified these objects by hand, or knew through some other means). We now observe ($x_1$, $x_2$) for some new object and want to know whether it belongs in class 0 or 1.
We'll first generate some fake data with known classes
Step2: We now observe a new point, and would like to know which class it belongs to
Step3: KNN works by predicting the class of a new point based on the classes of the K training data points closest to the new point. The two things that can be customized about this method are K, the number of points to use, and the distance metric used to compute the distances between the new point and the training data. If the dimensions in your data are measured with different units or with very different measurement uncertainties, you might need to be careful with the way you choose this metric. For simplicity, we'll start by fixing K=16 and use a Euclidean distance to see how this works in practice
Step4: All of the closest points are from class 1, so we would classify the new point as class=1. If there is a mixture of possible classes, take the class with more neighbors. If it's a tie, choose a class at random. That's it! Let's see how to use the KNN classifier in scikit-learn
Step5: Let's visualize the decision boundary of this classifier by evaluating the predicted class for a grid of trial data
Step6: KNN is very simple, but is very fast and is therefore useful in problems with large or wide datasets.
Let's now look at a more complicated example where the training data classes overlap significantly
Step7: What does the decision boundary look like in this case, as a function of the number of neighbors, K | Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('notebook.mplstyle')
%matplotlib inline
from scipy.stats import mode
Explanation: K-nearest neighbors
End of explanation
a = np.random.multivariate_normal([1., 0.5],
[[4., 0.],
[0., 0.25]], size=512)
b = np.random.multivariate_normal([10., 8.],
[[1., 0.],
[0., 25]], size=1024)
X = np.vstack((a,b))
y = np.concatenate((np.zeros(len(a)),
np.ones(len(b))))
X.shape, y.shape
plt.figure(figsize=(6,6))
plt.scatter(X[:,0], X[:,1], c=y, cmap='RdBu', marker='.', alpha=0.4)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.title('Training data')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.tight_layout()
Explanation: Let's imagine we measure 2 quantities, $x_1$ and $x_2$ for some objects, and we know the classes that these objects belong to, e.g., "star", 0, or "galaxy", 1 (maybe we classified these objects by hand, or knew through some other means). We now observe ($x_1$, $x_2$) for some new object and want to know whether it belongs in class 0 or 1.
We'll first generate some fake data with known classes:
End of explanation
np.random.seed(42)
new_pt = np.random.uniform(-10, 20, size=2)
plt.figure(figsize=(6,6))
plt.scatter(X[:,0], X[:,1], c=y, cmap='RdBu', marker='.', alpha=0.5, linewidth=0)
plt.scatter(new_pt[0], new_pt[1], marker='+', color='g', s=100, linewidth=3)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.tight_layout()
Explanation: We now observe a new point, and would like to know which class it belongs to:
End of explanation
K = 16
def distance(pts1, pts2):
pts1 = np.atleast_2d(pts1)
pts2 = np.atleast_2d(pts2)
return np.sqrt( (pts1[:,0]-pts2[:,0])**2 + (pts1[:,1]-pts2[:,1])**2)
# compute the distance between all training data points and the new point
dists = distance(X, new_pt)
# get the classes (from the training data) of the K nearest points
nearest_classes = y[np.argsort(dists)[:K]]
nearest_classes
Explanation: KNN works by predicting the class of a new point based on the classes of the K training data points closest to the new point. The two things that can be customized about this method are K, the number of points to use, and the distance metric used to compute the distances between the new point and the training data. If the dimensions in your data are measured with different units or with very different measurement uncertainties, you might need to be careful with the way you choose this metric. For simplicity, we'll start by fixing K=16 and use a Euclidean distance to see how this works in practice:
End of explanation
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=16)
clf.fit(X, y)
clf.predict(new_pt.reshape(1, -1)) # input has to be 2D
Explanation: All of the closest points are from class 1, so we would classify the new point as class=1. If there is a mixture of possible classes, take the class with more neighbors. If it's a tie, choose a class at random. That's it! Let's see how to use the KNN classifier in scikit-learn:
End of explanation
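To complete the hand-rolled version, the majority vote described above can be written in a couple of lines. This short sketch is an added illustration (it is not in the original notebook) and reuses the nearest_classes array computed earlier:
# Count how many of the K nearest neighbours fall in each class and pick the winner.
votes = np.bincount(nearest_classes.astype(int))
predicted_class = np.argmax(votes)
print(predicted_class)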
grid_1d = np.linspace(-10, 20, 256)
grid_x1, grid_x2 = np.meshgrid(grid_1d, grid_1d)
grid = np.stack((grid_x1.ravel(), grid_x2.ravel()), axis=1)
y_grid = clf.predict(grid)
plt.figure(figsize=(6,6))
plt.pcolormesh(grid_x1, grid_x2, y_grid.reshape(grid_x1.shape),
cmap='Set3', alpha=1.)
plt.scatter(X[:,0], X[:,1], marker='.', alpha=0.65, linewidth=0)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.tight_layout()
Explanation: Let's visualize the decision boundary of this classifier by evaluating the predicted class for a grid of trial data:
End of explanation
a = np.random.multivariate_normal([6., 0.5],
[[8., 0.],
[0., 0.25]], size=512)
b = np.random.multivariate_normal([10., 4.],
[[2., 0.],
[0., 8]], size=1024)
X2 = np.vstack((a,b))
y2 = np.concatenate((np.zeros(len(a)),
np.ones(len(b))))
plt.figure(figsize=(6,6))
plt.scatter(X2[:,0], X2[:,1], c=y2, cmap='RdBu', marker='.', alpha=0.4)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.title('Training data')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.tight_layout()
Explanation: KNN is very simple, but is very fast and is therefore useful in problems with large or wide datasets.
Let's now look at a more complicated example where the training data classes overlap significantly:
End of explanation
for K in [4, 16, 64, 256]:
clf2 = KNeighborsClassifier(n_neighbors=K)
clf2.fit(X2, y2)
y_grid2 = clf2.predict(grid)
plt.figure(figsize=(6,6))
plt.pcolormesh(grid_x1, grid_x2, y_grid2.reshape(grid_x1.shape),
cmap='Set3', alpha=1.)
plt.scatter(X2[:,0], X2[:,1], marker='.', alpha=0.65, linewidth=0)
plt.xlim(-10, 20)
plt.ylim(-10, 20)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title("$K={0}$".format(K))
plt.tight_layout()
Explanation: What does the decision boundary look like in this case, as a function of the number of neighbors, K:
End of explanation |
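A natural follow-up question is how to choose K in practice. The notebook stops here, but a common approach is cross-validation; the following minimal sketch is an added illustration, not part of the original notebook:
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Mean 5-fold cross-validated accuracy for a few values of K on the overlapping dataset.
for k in [1, 4, 16, 64, 256]:
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X2, y2, cv=5).mean()
    print(k, round(acc, 3))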
13,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Text generation using an RNN
Step2: Download the Shakespeare dataset
Change the following line to run this code on your own data.
Step3: Read the data
First, look in the text.
Step4: Process the text
Vectorize the text
Before training, we need to map strings to a numerical representation. Create two lookup tables
Step5: Now we have an integer representation for each character. Notice that we mapped the character as indexes from 0 to len(unique).
Step6: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
Step7: The batch method lets us easily convert these individual characters to sequences of the desired size.
Step8: For each sequence, duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch
Step9: Print the first examples input and target values
Step10: Each index of these vectors are processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the RNN considers the previous step context in addition to the current input character.
Step11: Create training batches
We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
Step12: Build The Model
Use tf.keras.Sequential to define the model. For this simple example three layers are used to define our model
Step13: Next define a function to build the model.
Use CuDNNGRU if running on GPU.
Step14: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character
Step15: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length
Step16: To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note
Step17: This gives us, at each timestep, a prediction of the next character index
Step18: Decode these to see the text predicted by this untrained model
Step19: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because our model returns logits, we need to set the from_logits flag.
Step20: Configure the training procedure using the tf.keras.Model.compile method. We'll use tf.train.AdamOptimizer with default arguments and the loss function.
Step21: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training
Step22: Execute the training
To keep training time reasonable, use 3 epochs to train the model. In Colab, set the runtime to GPU for faster training.
Step23: Generate text
Restore the latest checkpoint
To keep this prediction step simple, use a batch size of 1.
Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.
To run the model with a different batch_size, we need to rebuild the model and restore the weights from the checkpoint.
Step24: The prediction loop
The following code block generates the text
Step25: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS=30).
You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions.
Advanced | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow.compat.v1 as tf
import numpy as np
import os
import time
Explanation: Text generation using an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/sequences/text_generation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/sequences/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured to run in TF2's compatibility mode but will run in TF1 as well. To use TF1 in Colab, use the %tensorflow_version 1.x magic.
This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Note: Enable GPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware accelerator > GPU. If running locally, make sure your TensorFlow version is >= 1.11.
This tutorial includes runnable code implemented using tf.keras and eager execution. The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
Setup
Import TensorFlow and other libraries
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Download the Shakespeare dataset
Change the following line to run this code on your own data.
End of explanation
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
Explanation: Read the data
First, look in the text.
End of explanation
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
Explanation: Process the text
Vectorize the text
Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
End of explanation
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
Explanation: Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to len(unique).
End of explanation
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
Explanation: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
End of explanation
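As a quick sanity check of the chunking idea (an added illustration, not part of the original tutorial), the "Hello" example from the text can be reproduced with plain Python slicing:
chunk = "Hello"                       # a chunk of seq_length + 1 characters
print(chunk[:-1], '->', chunk[1:])    # input 'Hell' -> target 'ello'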
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
Explanation: The batch method lets us easily convert these individual characters to sequences of the desired size.
End of explanation
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
Explanation: For each sequence, duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch:
End of explanation
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
Explanation: Print the first examples input and target values:
End of explanation
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
Explanation: Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the RNN considers the previous step context in addition to the current input character.
End of explanation
# Batch size
BATCH_SIZE = 64
steps_per_epoch = examples_per_epoch//BATCH_SIZE
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
Explanation: Create training batches
We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
End of explanation
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
Explanation: Build The Model
Use tf.keras.Sequential to define the model. For this simple example three layers are used to define our model:
tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map the numbers of each character to a vector with embedding_dim dimensions;
tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use a LSTM layer here.)
tf.keras.layers.Dense: The output layer, with vocab_size outputs.
End of explanation
if tf.config.list_physical_devices('GPU'):
rnn = tf.keras.layers.CuDNNGRU
else:
import functools
rnn = functools.partial(
tf.keras.layers.GRU, recurrent_activation='sigmoid')
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
rnn(rnn_units,
return_sequences=True,
recurrent_initializer='glorot_uniform',
stateful=True),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
Explanation: Next define a function to build the model.
Use CuDNNGRU if running on GPU.
End of explanation
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
End of explanation
model.summary()
Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length:
End of explanation
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
Explanation: To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
Try it for the first example in the batch:
End of explanation
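For comparison with the note above (this is an added illustration, not part of the original notebook), the greedy argmax choice it warns against would look like this:
# Greedy decoding: always take the most likely character at each timestep.
# In practice this tends to get the model stuck repeating the same characters.
greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1).numpy()
print(greedy_indices[:20])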
sampled_indices
Explanation: This gives us, at each timestep, a prediction of the next character index:
End of explanation
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
Explanation: Decode these to see the text predicted by this untrained model:
End of explanation
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
Explanation: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because our model returns logits, we need to set the from_logits flag.
End of explanation
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = loss)
Explanation: Configure the training procedure using the tf.keras.Model.compile method. We'll use tf.train.AdamOptimizer with default arguments and the loss function.
End of explanation
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
Explanation: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:
End of explanation
EPOCHS=3
history = model.fit(dataset.repeat(), epochs=EPOCHS, steps_per_epoch=steps_per_epoch, callbacks=[checkpoint_callback])
Explanation: Execute the training
To keep training time reasonable, use 3 epochs to train the model. In Colab, set the runtime to GPU for faster training.
End of explanation
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
Explanation: Generate text
Restore the latest checkpoint
To keep this prediction step simple, use a batch size of 1.
Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.
To run the model with a different batch_size, we need to rebuild the model and restore the weights from the checkpoint.
End of explanation
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures result in more predictable text.
# Higher temperatures result in more surprising text.
# Experiment to find the best setting (a small numeric illustration is added at the end of this section).
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical (multinomial) distribution to predict the next character returned by the model
predictions = predictions / temperature
predicted_id = tf.multinomial(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted character as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
Explanation: The prediction loop
The following code block generates the text:
It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.
Get the prediction distribution of the next character using the start string and the RNN state.
Use a multinomial distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.
The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one word. After predicting the next word, the modified RNN states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
Looking at the generated text, you'll see the model knows when to capitalize and make paragraphs, and it imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
End of explanation
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.train.AdamOptimizer()
# Training step
EPOCHS = 1
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
# initially hidden is None
hidden = model.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
with tf.GradientTape() as tape:
# feeding the hidden state back into the model
# This is the interesting step
predictions = model(inp)
loss = tf.losses.sparse_softmax_cross_entropy(target, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {:.4f}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
Explanation: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS=30).
You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions.
Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
So now that you've seen how to run the model manually let's unpack the training loop, and implement it ourselves. This gives a starting point, for example, to implement curriculum learning to help stabilize the model's open-loop output.
We will use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.
The procedure works as follows:
First, initialize the RNN state. We do this by calling the tf.keras.Model.reset_states method.
Next, iterate over the dataset (batch by batch) and calculate the predictions associated with each.
Open a tf.GradientTape, and calculate the predictions and loss in that context.
Calculate the gradients of the loss with respect to the model variables using the tf.GradientTape.grads method.
Finally, take a step downwards by using the optimizer's tf.train.Optimizer.apply_gradients method.
End of explanation |
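As a small numeric aside (added here for illustration; it is not part of the original notebook), this is what the temperature parameter used in generate_text does to a toy distribution: dividing the logits by a temperature below 1 sharpens the softmax, while a temperature above 1 flattens it.
toy_logits = np.array([2.0, 1.0, 0.1])
for temperature in [0.5, 1.0, 2.0]:
    scaled = toy_logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs = probs / probs.sum()
    print(temperature, np.round(probs, 3))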
13,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is NLP?
NLP is a field of linguistics and machine learning focused on understanding everything related to human language. The aim of NLP tasks is not only to understand single words individually, but to be able to understand the context of those words.
The following is a list of common NLP tasks, with some examples of each
Step1: Some of the currently available pipelines are
Step2: This pipeline is called zero-shot because you don’t need to fine-tune the model on your data to use it. It can directly return probability scores for any list of labels you want!
Step4: Using models from Hugging Face Hub in a pipeline
Model hub | Python Code:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
classifier("I've been waiting for a HuggingFace course my whole life.")
## passing multiple sentences
classifier([
"I've been waiting for a HuggingFace course my whole life.",
"I hate this so much!"
])
Explanation: What is NLP?
NLP is a field of linguistics and machine learning focused on understanding everything related to human language. The aim of NLP tasks is not only to understand single words individually, but to be able to understand the context of those words.
The following is a list of common NLP tasks, with some examples of each:
Classifying whole sentences: Getting the sentiment of a review, detecting if an email is spam, determining if a sentence is grammatically correct or whether two sentences are logically related or not
Classifying each word in a sentence: Identifying the grammatical components of a sentence (noun, verb, adjective), or the named entities (person, location, organization)
Generating text content: Completing a prompt with auto-generated text, filling in the blanks in a text with masked words
Extracting an answer from a text: Given a question and a context, extracting the answer to the question based on the information provided in the context
Generating a new sentence from an input text: Translating a text into another language, summarizing a text
NLP isn’t limited to written text though. It also tackles complex challenges in speech recognition and computer vision, such as generating a transcript of an audio sample or a description of an image.
Pipeline
The most basic object in the 🤗 Transformers library is the pipeline. It connects a model with its necessary preprocessing and postprocessing steps, allowing us to directly input any text and get an intelligible answer
End of explanation
## Zero Shot Classification
zero_shot_classifer = pipeline('zero-shot-classification')
zero_shot_classifer(
"This is a sample course about Hugging face and transformers.",
candidate_labels = ['education','politics','sports']
)
Explanation: Some of the currently available pipelines are:
feature-extraction (get the vector representation of a text)
fill-mask
ner (named entity recognition)
question-answering
sentiment-analysis
summarization
text-generation
translation
zero-shot-classification
End of explanation
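Of the pipelines listed above, feature-extraction is the only one not demonstrated below, so a minimal sketch is added here for illustration (the nested-list output shape is my assumption about the default model's behaviour):
feature_extractor = pipeline("feature-extraction")
features = feature_extractor("Hugging Face pipelines are easy to use.")
# features is a nested list shaped roughly [batch][tokens][hidden_size]
print(len(features), len(features[0]), len(features[0][0]))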
## Text generation
text_gen_classifier = pipeline("text-generation")
text_gen_classifier("In this course, we will teach you how to")
text_gen_classifier("In this course, we will teach you how to", num_return_sequences=2, max_length=15)
Explanation: This pipeline is called zero-shot because you don’t need to fine-tune the model on your data to use it. It can directly return probability scores for any list of labels you want!
End of explanation
## using distilgpt2 to generate text
generator = pipeline('text-generation', model='distilgpt2')
generator("My name is Rishu and i", max_length=30, num_return_sequences=2)
## Filling masks. Predicting new masks words in the models
unmasker = pipeline("fill-mask")
unmasker("This course will teach you all about <mask> models.", top_k=2)
### Named entity recognition
ner = pipeline('ner', grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
## Question Answering
question_answerer = pipeline("question-answering")
question_answerer(
question="Where do I work?",
context="My name is Sylvain and I work at Hugging Face in Brooklyn"
)
# Summarization
summarizer = pipeline("summarization")
summarizer(
America has changed dramatically during recent years. Not only has the number of
graduates in traditional engineering disciplines such as mechanical, civil,
electrical, chemical, and aeronautical engineering declined, but in most of
the premier American universities engineering curricula now concentrate on
and encourage largely the study of engineering science. As a result, there
are declining offerings in engineering subjects dealing with infrastructure,
the environment, and related issues, and greater concentration on high
technology subjects, largely supporting increasingly complex scientific
developments. While the latter is important, it should not be at the expense
of more traditional engineering.
Rapidly developing economies such as China and India, as well as other
industrial countries in Europe and Asia, continue to encourage and advance
the teaching of engineering. Both China and India, respectively, graduate
six and eight times as many traditional engineers as does the United States.
Other industrial countries at minimum maintain their output, while America
suffers an increasingly serious decline in the number of engineering graduates
and a lack of well-educated engineers.
)
## Translation
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
translator("Ce cours est produit par Hugging Face.")
Explanation: Using models from Hugging Face Hub in a pipeline
Model hub: https://huggingface.co/models
End of explanation |
13,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Instantiate-CPPN" data-toc-modified-id="Instantiate-CPPN-1"><span class="toc-item-num">1 </span>Instantiate CPPN</a></span></li><li><span><a href="#Generate-Image" data-toc-modified-id="Generate-Image-2"><span class="toc-item-num">2 </span>Generate Image</a></span><ul class="toc-item"><li><span><a href="#Export" data-toc-modified-id="Export-2.1"><span class="toc-item-num">2.1 </span>Export</a></span></li></ul></li><li><span><a href="#Animation" data-toc-modified-id="Animation-3"><span class="toc-item-num">3 </span>Animation</a></span><ul class="toc-item"><li><span><a href="#Marching-Cubes" data-toc-modified-id="Marching-Cubes-3.1"><span class="toc-item-num">3.1 </span>Marching Cubes</a></span></li></ul></li><li><span><a href="#Parameters-Grid-Search" data-toc-modified-id="Parameters-Grid-Search-4"><span class="toc-item-num">4 </span>Parameters Grid Search</a></span></li></ul></div>
Step1: Instantiate CPPN
Step2: Generate Image
Step3: Export
Step4: Animation
Step5: Marching Cubes
Step6: Parameters Grid Search | Python Code:
import numpy as np
import yaml
import os
import cv2
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import animation
from datetime import datetime
from pathlib import Path
plt.rcParams['animation.ffmpeg_path'] = str(Path.home() / "anaconda3/envs/image-processing/bin/ffmpeg")
%matplotlib notebook
%load_ext autoreload
%autoreload 2
from CPPN import CPPN
from ds_utils.voxel_utils import get_sphere_mask
from ds_utils.video_utils import generate_video
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Instantiate-CPPN" data-toc-modified-id="Instantiate-CPPN-1"><span class="toc-item-num">1 </span>Instantiate CPPN</a></span></li><li><span><a href="#Generate-Image" data-toc-modified-id="Generate-Image-2"><span class="toc-item-num">2 </span>Generate Image</a></span><ul class="toc-item"><li><span><a href="#Export" data-toc-modified-id="Export-2.1"><span class="toc-item-num">2.1 </span>Export</a></span></li></ul></li><li><span><a href="#Animation" data-toc-modified-id="Animation-3"><span class="toc-item-num">3 </span>Animation</a></span><ul class="toc-item"><li><span><a href="#Marching-Cubes" data-toc-modified-id="Marching-Cubes-3.1"><span class="toc-item-num">3.1 </span>Marching Cubes</a></span></li></ul></li><li><span><a href="#Parameters-Grid-Search" data-toc-modified-id="Parameters-Grid-Search-4"><span class="toc-item-num">4 </span>Parameters Grid Search</a></span></li></ul></div>
End of explanation
res_path = Path.home() / 'Documents/generated_data/cppn'
extra_funs= {
'base': lambda x,y,z: x*y*z,
'cos_sin': lambda x,y,z: np.cos(x)*np.sin(y)*np.sin(z),
'cube': lambda x,y,z: x**3 + 3*y - y**3 -3*x + z**2 -z,
'rand': lambda x,y,z: np.sqrt(x*x+y*y+z*z) + (x*x) + np.tan(y) + 3*z,
}
# load config file
with open('cppn_config.yaml', 'r') as f:
model_config = yaml.load(f, Loader=yaml.FullLoader)
model_config = model_config['base_bw']
model_config
# OPTIONALLY customize model config
model_config['nb_hidden_layers'] = 8
#model_config['kernel_init_stddev'] = 1.
model_config['kernel_init_mean'] = 0.
model_config['nb_channels'] = 1
#model_config['inner_architecture_key'] = 'residual'
# init model
batch_size = 5
img_width = img_height = 100
img_depth = 1
img_size = (img_width, img_height)
cppn = CPPN(batch_size=batch_size, img_width=img_width, img_height=img_height, img_depth=img_depth,
**model_config)
cppn.model.summary()
Explanation: Instantiate CPPN
End of explanation
x, y, z, r, e = cppn.get_data(extra_fun=extra_funs['base'])
latent = cppn.get_latent()
result_imgs = cppn.generate_imgs(x, y, z, r, e, latent)
# as the results are 3D, we select only one slice through the 3rd dimension
plt.imshow(result_imgs[0, :, :, 0], cmap='gray')
Explanation: Generate Image
End of explanation
# export results as numpy
np.save(str(res_path / 'bw_3d_100_sphere.npy'), result_imgs[0])
# test load results
np.load(str(res_path / 'numpy_exports/bw_1080.npy')).shape
# export as images
for i, img in enumerate(result_imgs):
plt.imsave(str(res_path / f'numpy_exports/sample_{i}.png'), img, cmap='gray')
Explanation: Export
End of explanation
def animate_cppn(cppn, nb_frames: int, add_val: float, animate_data=False):
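    # (Added note) This helper relies on the module-level `data_config`,
    # `model_config` and `extra_funs` defined earlier in the notebook.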
cppn_snapshot = []
latent = cppn.get_latent()
x, y, z, r, e = cppn.get_data(scale=data_config['scale'],
translation=data_config['translation'],
rotation=data_config['rotation'],
extra_fun=extra_funs[data_config['extra_fun']])
for i in range(nb_frames):
if i%10 == 0:
print(i)
latent_idx = int(i/(int(nb_frames/model_config['latent_dim'])))
#latent[0][latent_idx] += add_val
latent[0] += add_val
if animate_data:
x, y, z, r, e = cppn.get_data(scale=data_config['scale']+i*data_config['scale_speed'],
translation=data_config['translation']+i*data_config['translation_speed'],
rotation=data_config['rotation']+i*data_config['rotation_speed'],
extra_fun=extra_funs[data_config['extra_fun']])
cppn_snapshot.append(cppn.generate_imgs(x, y, z, r, e, latent)[0])
cppn_snapshot = np.array(cppn_snapshot)
return cppn_snapshot
save_anim = False # whether to save animation to file
animate_data = True # whether to retrieve new input data at each animation frame
FRAMES = 100
batch_size = 1
img_width = img_height = img_depth = 50
img_size = (img_width, img_height)
# Init model and data
with open('cppn_config.yaml', 'r') as f:
model_config = yaml.load(f, Loader=yaml.FullLoader)
data_config = model_config['test_config']
model_config = model_config['base_bw']
cppn = CPPN(batch_size=batch_size, img_width=img_width, img_height=img_height, img_depth=img_depth,
**model_config)
cppn_snapshot = []
latent_max_val = 1
#latent = np.zeros((1, model_config['z_dim']))-latent_max_val
add_val = (latent_max_val*1)/FRAMES
#add_val = 1.0
cppn_snapshot = animate_cppn(cppn, FRAMES, add_val)
np.save(res_path / f'test_3D_anim.npy', cppn_snapshot)
# Setup plot
dpi = 100
if save_anim:
fig, ax = plt.subplots(dpi=dpi, figsize=(img_width/dpi, img_height/dpi))
else:
fig, ax = plt.subplots(dpi=dpi, figsize=(5, 5))
plt.axis('off')
def animate(i, ax, cppn_snapshot):
ax.imshow(cppn_snapshot[i], cmap='gray')
# Animate
ani = animation.FuncAnimation(fig, animate, frames=FRAMES, interval=100,
fargs=[ax, cppn_snapshot])
if save_anim:
ani.save(str(res_path / 'tests' / 'anim_{}.mp4'.format(datetime.strftime(datetime.now(), "%Y-%m-%d_%H-%M"))),
animation.FFMpegFileWriter(fps=30))
Explanation: Animation
End of explanation
import mcubes
vertices, triangles = mcubes.marching_cubes(cppn_snapshot[0], 0.2)
#mcubes.export_obj(vertices, triangles, res_path / 'test.obj')
triangles
Explanation: Marching Cubes
End of explanation
from sklearn.model_selection import ParameterGrid
param_grid = {
#'inner_architecture_key': ['base', 'residual', 'softplus'],
'kernel_init_stddev': np.linspace(0.7, 4., num=4),
#'scale': np.linspace(-2., 2., num=5),
#'translation': np.linspace(-4., 4., num=3),
#'rotation': np.linspace(1, 360, num=4),
#'nb_hidden_layers': np.arange(3, 7, 2),
#'z_dim': [8, 16, 32, 64],
'hidden_dim': [8, 16],
'extra_fun': ['base', 'cos_sin', 'cube', 'rand']
}
grid = ParameterGrid(param_grid)
# Init model and data
with open('cppn_config.yaml', 'r') as f:
model_config = yaml.load(f, Loader=yaml.FullLoader)
data_config = model_config['test_config']
model_config = model_config['base_bw']
nb_frames = 100
batch_size = 1
img_height = img_width = 100
img_depth = img_height
out_dir = res_path / 'gs' / f'{img_width}x3_{nb_frames}frames'
out_dir.mkdir(parents=True, exist_ok=False)
sphere_mask = get_sphere_mask((img_height,img_width,img_depth), (img_height//2)-1)
with open(str(out_dir / 'logs.txt'), 'w+') as f:
for run, params in enumerate(grid):
print("Params {}: {}".format(run, params))
current_config = model_config.copy()
current_data_config = data_config.copy()
current_config.update(params)
current_data_config.update(params)
cppn = CPPN(batch_size=batch_size, img_width=img_width, img_height=img_height, img_depth=img_depth,
**current_config)
latent_max_val = 1
#latent = np.zeros((1, model_config['z_dim']))-latent_max_val
add_val = (latent_max_val*1)/nb_frames
#add_val = 1.0
cppn_snapshot = animate_cppn(cppn, nb_frames, add_val)
# write out config
f.write(str(current_config) + str(current_data_config) + '\n')
# write out numpy 4D tensor
np.save(out_dir / f'run_{run}.npy', np.array(cppn_snapshot, dtype=np.float16))
# write out as sliced videos
run_out_path = out_dir / f'vid_run_{run:03}'
run_out_path.mkdir(exist_ok=False, parents=True)
for z_coord in range(img_depth):
generate_video(str(run_out_path / f"{z_coord}.mp4"),
(img_width, img_height),
frame_gen_fun=lambda i: cv2.normalize(cppn_snapshot[i, :, :, z_coord], None, 255, 0, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U),
nb_frames=len(cppn_snapshot), is_color=False, disable_tqdm=True)
#sphere masked
cppn_snapshot = cppn_snapshot * sphere_mask[ np.newaxis, :, :, :]
np.save(out_dir / f'run_{run}_sphere.npy', np.array(cppn_snapshot, dtype=np.float16))
#imgs = cppn.generate_imgs(x, y, r, e, z)
#for j, img in enumerate(imgs):
# plt.imsave(str(out_dir / f'sample_{j}.png'), img, cmap='gray')
Explanation: Parameters Grid Search
End of explanation |
13,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: 1. Load trajectory
Read the heat current from a simple column-formatted file. The desired columns are selected based on their header (e.g. with LAMMPS format).
For other input formats see the corresponding example.
Step2: We need to load the energy flux (flux) and one mass flux (vcm[1], that is the velocity of the center of mass of one species).
Step3: 2. Heat Current
Define a HeatCurrent from the trajectory, with the correct parameters. The difference with respect to the single-component case is only here
Step4: Compute the Reduced periodogram $\bar{\mathcal{S}}^0_k$ and filter it for visualization. You can notice the difference with respect to the energ-flux periodogram $\mathcal{S}^0_k$.
Step5: 3. Resampling
If the Nyquist frequency is very high (i.e. the sampling time is small), such that the log-spectrum goes to low values, you may want resample your time series to obtain a maximum frequency $f^$.
Before performing that operation, the time series is automatically filtered to reduce the amount of aliasing introduced. Ideally you do not want to go too low in $f^$. In an intermediate region the results should not change.
To perform resampling you can choose the resampling frequency $f^$ or the resampling step (TSKIP). If you choose $f^$, the code will try to choose the closest value allowed.
The resulting PSD is visualized to ensure that the low-frequency region is not affected.
Step6: 4. Cepstral Analysis
Perform Cepstral Analysis. The code will
Step7: Plot the thermal conductivity $\kappa$ as a function of the cutoff $P^*$
Step8: Print the results
Step9: You can now visualize the filtered PSD... | Python Code:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
try:
import sportran as st
except ImportError:
from sys import path
path.append('..')
import sportran as st
c = plt.rcParams['axes.prop_cycle'].by_key()['color']
%matplotlib notebook
Explanation: Example 2: Cepstral Analysis of liquid NaCl
This example shows the basic usage of sportran to compute the thermal conductivity of a classical MD simulation of molten NaCl, which requires the multi-component theory.
End of explanation
jfile = st.i_o.TableFile('./data/NaCl.dat', group_vectors=True)
Explanation: 1. Load trajectory
Read the heat current from a simple column-formatted file. The desired columns are selected based on their header (e.g. with LAMMPS format).
For other input formats see the corresponding example.
End of explanation
jfile.read_datalines(start_step=0, NSTEPS=0, select_ckeys=['Temp', 'flux', 'vcm[1]'])
Explanation: We need to load the energy flux (flux) and one mass flux (vcm[1], that is the velocity of the center of mass of one species).
End of explanation
DT_FS = 5.0 # time step [fs]
TEMPERATURE = np.mean(jfile.data['Temp']) # temperature [K]
VOLUME = 40.21**3 # volume [A^3]
print('T = {:f} K'.format(TEMPERATURE))
print('V = {:f} A^3'.format(VOLUME))
j = st.HeatCurrent([jfile.data['flux'], jfile.data['vcm[1]']], UNITS='metal', DT_FS=DT_FS,
TEMPERATURE=TEMPERATURE, VOLUME=VOLUME)
# trajectory
f, ax = plt.subplots(2, sharex=True)
ax[0].plot(j.timeseries()/1000., j.traj);
ax[1].plot(j.timeseries()/1000., j.otherMD[0].traj);
plt.xlim([0, 1.0])
plt.xlabel(r'$t$ [ps]')
ax[0].set_ylabel(r'$J^0$ [eV A/ps]');
ax[1].set_ylabel(r'$J^1$ [A/ps]');
Explanation: 2. Heat Current
Define a HeatCurrent from the trajectory, with the correct parameters. The difference with respect to the single-component case is only here: we load a list of currents instead of a single one.
End of explanation
# Periodogram with given filtering window width
ax = j.plot_periodogram(PSD_FILTER_W=0.4, kappa_units=True, label=r'$\bar{\mathcal{S}}^0_k$')
print(j.Nyquist_f_THz)
# compare with the spectrum of the energy flux
jen = st.HeatCurrent(jfile.data['flux'], UNITS='metal', DT_FS=DT_FS,
TEMPERATURE=TEMPERATURE, VOLUME=VOLUME)
ax = jen.plot_periodogram(axes=ax, PSD_FILTER_W=0.4, kappa_units=True, label=r'$\mathcal{S}^0_k$')
plt.xlim([0, 20])
ax[0].set_ylim([0, 0.8]);
ax[1].set_ylim([7, 18]);
ax[0].legend(); ax[1].legend();
Explanation: Compute the Reduced periodogram $\bar{\mathcal{S}}^0_k$ and filter it for visualization. You can notice the difference with respect to the energy-flux periodogram $\mathcal{S}^0_k$.
End of explanation
FSTAR_THZ = 14.0
jf, ax = j.resample(fstar_THz=FSTAR_THZ, plot=True, freq_units='thz')
plt.xlim([0, 20])
ax[1].set_ylim([7, 18]);
ax = jf.plot_periodogram(PSD_FILTER_W=0.1)
ax[1].set_ylim([7, 18]);
Explanation: 3. Resampling
If the Nyquist frequency is very high (i.e. the sampling time is small), such that the log-spectrum goes to low values, you may want to resample your time series to obtain a maximum frequency $f^*$.
Before performing that operation, the time series is automatically filtered to reduce the amount of aliasing introduced. Ideally you do not want to go too low in $f^*$. In an intermediate region the results should not change.
To perform resampling you can choose the resampling frequency $f^*$ or the resampling step (TSKIP). If you choose $f^*$, the code will try to choose the closest value allowed.
The resulting PSD is visualized to ensure that the low-frequency region is not affected.
End of explanation
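As mentioned above, the resampling step can be specified instead of the target frequency. If I read the sportran API correctly, the equivalent call would look roughly like the sketch below (treat the exact keyword name as an assumption; the value 3 is arbitrary):
# Resample by keeping one sample every TSKIP steps instead of specifying f*.
jf_alt, ax_alt = j.resample(TSKIP=3, plot=True, freq_units='thz')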
jf.cepstral_analysis()
# Cepstral Coefficients
print('c_k = ', jf.dct.logpsdK)
ax = jf.plot_ck()
ax.set_xlim([0, 25])
ax.set_ylim([-0.2, 1.0])
ax.grid();
# AIC function
f = plt.figure()
plt.plot(jf.dct.aic, '.-', c=c[0])
plt.xlim([0, 50])
plt.ylim([1400, 1500]);
print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))
print('AIC_min = {:f}'.format(jf.dct.aic_min))
Explanation: 4. Cepstral Analysis
Perform Cepstral Analysis. The code will:
1. compute the parameters describing the theoretical distribution of the PSD
2. compute the cepstral coefficients by Fourier transforming the log(PSD)
3. apply the Akaike Information Criterion to select the cutoff
4. return the resulting $\kappa$
End of explanation
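To build some intuition for these steps, here is a schematic added illustration only (sportran's real implementation uses the proper statistical model of the periodogram): the log-PSD/cepstrum round trip sketched with plain NumPy.
# Toy example: "cepstral" coefficients of a random series' log-periodogram,
# truncated at P* and transformed back, act as a low-pass filter on log(PSD).
toy = np.random.randn(2000)
logpsd = np.log(np.abs(np.fft.rfft(toy))**2 + 1e-12)
ck_toy = np.fft.irfft(logpsd)                    # "cepstral" coefficients
P_star = 10
ck_cut = np.zeros_like(ck_toy)
ck_cut[:P_star] = ck_toy[:P_star]
ck_cut[-(P_star - 1):] = ck_toy[-(P_star - 1):]  # keep the symmetric counterpart
logpsd_filtered = np.fft.rfft(ck_cut).real       # smoothed log-spectrum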
# L_0 as a function of cutoff K
ax = jf.plot_L0_Pstar()
ax.set_xlim([0, 50])
ax.set_ylim([14.75, 16.5]);
print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))
print('AIC_min = {:f}'.format(jf.dct.aic_min))
# kappa as a function of cutoff K
ax = jf.plot_kappa_Pstar()
ax.set_xlim([0, 50])
ax.set_ylim([0, 1.0]);
print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))
print('AIC_min = {:f}'.format(jf.dct.aic_min))
Explanation: Plot the thermal conductivity $\kappa$ as a function of the cutoff $P^*$
End of explanation
results = jf.cepstral_log
print(results)
Explanation: Print the results :)
End of explanation
# filtered log-PSD
ax = j.plot_periodogram(0.5, kappa_units=True)
ax = jf.plot_periodogram(0.5, axes=ax, kappa_units=True)
ax = jf.plot_cepstral_spectrum(axes=ax, kappa_units=True)
ax[0].axvline(x = jf.Nyquist_f_THz, ls='--', c='r')
ax[1].axvline(x = jf.Nyquist_f_THz, ls='--', c='r')
plt.xlim([0, 20])
ax[0].set_ylim([0, 0.8]);
ax[1].set_ylim([7, 18]);
ax[0].legend(['original', 'resampled', 'cepstrum-filtered'])
ax[1].legend(['original', 'resampled', 'cepstrum-filtered']);
Explanation: You can now visualize the filtered PSD...
End of explanation |
13,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Jupyter Notebooks on Navigating Complexity
<img src="http
Step1: Or how many unique words are there.
Step2: Or something more complicated, such as defining how a graph is produced.
Step3: Or even something interactive, such as | Python Code:
len("in all honesty, counting all the words in a sentence is best done in a computers mind. It doesn't mind counting at all".split())
Explanation: Introduction to Jupyter Notebooks on Navigating Complexity
<img src="http://jupyter.org/assets/jupyterpreview.png" style="height: 300px; float: right;"> </img>
On Navigating Complexity you keep "a lab notebook". It serves as your journal, notebook, sketchbook, diary, and more. Unlike finished products such as publications, notebooks are sketchy and very messy and they document the process as it is taking place. Science in the making. Your notes are your own, and freeform.
For note keeping, we use Jupyter Notebooks. There are five submissions during the course, on weeks 7, 10 and 13 you will submit your lab notebook. Also some of the course content, e.g. Thursday exercises and extra content are in this format.
Jupyter Notebooks are a document format and what makes them special, is that parts of the document are meant for humans, and parts for computers. For the latter, we use the Python programming language.
Notebooks are structured much like a text, running from beginning to the end. The prose can be interleaved with code, and computed output. It can be something very simple such as
End of explanation
len(set("in all honesty, counting all the words in a sentence is best done in a computers mind. It doesn't mind counting at all".split()))
Explanation: Or how many unique words are there.
End of explanation
# Setup by importing some libraries and configuring a few things
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
import IPython
%matplotlib inline
# Create two random datasets
data = np.random.random(15)
smallerdata = np.random.random(15) * 0.3
# Define a plotting function
def drawPlot():
fig, ax = plt.subplots()
ax.plot(range(len(data)), data, label="random data");
ax.plot(range(len(smallerdata)), smallerdata, 'r--', label="smaller random data");
plt.title("Two random dataset compared");
ax.grid(axis='y');
ax.legend(loc='upper right');
return fig, ax
# Draw the plot
fig, ax = drawPlot()
plt.show()
Explanation: Or something more complicated, such as defining how a graph is produced.
End of explanation
# Define an interactive plotting function
def updatePlot(s=0):
print("data {0:.2f}, smallerdata {1:.2f}".format(data[s], smallerdata[s]))
fig, ax = drawPlot()
ax.axvline(s, color="grey", linestyle='dotted')
ax.annotate(s=round(data[s], 2),
xy=(s, data[s]),
xytext=(s + 2, 0.5),
arrowprops={'arrowstyle': '->'});
ax.annotate(s=round(smallerdata[s], 2),
xy=(s, smallerdata[s]),
xytext=(s + 2, 0.3),
arrowprops={'arrowstyle': '->'});
plt.show();
# Define an interactive slider for exploring the data
slider = ipywidgets.interactive(updatePlot, s=(0, len(data) - 1, 1));
IPython.display.display(slider)
Explanation: Or even something interactive, such as
End of explanation |
13,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
Simple Linear Regression
Running a SLR in Python is fairly simple once you know how to use the relevant functions. What might be confusing is that there exist several packages which provide functions for linear regression. We will use functions from the statsmodels (sub-)package. Other packages such as e.g. scikit-learn have linear regression functions too, but what makes statsmodels stand out from other packages is its broad set of auxiliary functions for regression diagnostics. As usual we start by importing the packages needed for our task.
Step1: As toy data we will use the 'Advertising' data set introduced in the chapter 4 of the script. The data is taken from James et al. (2013). A copy is provided on the book's website where we will download it from.
Step2: Next we run a linear regression of TV on sales to calculate the coefficients and print a summary output.
Step3: Side note
Step4: Plotting the Fit
The sm.OLS() function calculates all kind of regression-related results. All this information is attached to the reg object. For example to plot the data we would need the model's fitted values. These can be accessed by combining the regression variable/object with the attribute .fittedvalues as in reg.fittedvalues. In below plot it is shown how this can be of use. We plot the data and fit using the standard plotting functions.
Step5: Let us plot residuals versus fitted values to do some visual regression diagnostics
Step6: The above two plots are just standard matplotlib plots serving the purpose of visual diagnostics. Beyond that, the statsmodel package has separate built-in plotting functions suited for visual regression diagnostics. We will not discuss them here but if interested you'll find plenty of sources on the web (e.g. here, here or here)
Accessing the Output
The reg object contains a ton of information which you can all access. To see what there is, type reg. and press tab. Two examples are shown below. Notice that some are attributes (like x.shape, x.size) and do not need paranthesis to call them. Others are methods (similar to .sum(), .min()) and require parenthesis.
Step7: Confidence Intervals & Hypthesis Testing
The 95%-confidence interval (CI) is printed in the summary above. If one wishes to calculate it for a different significance level (alpha), it is done as follows
Step8: The regression summary provides $t$-statistic and p-value for the null hypothesis $H_0
Step9: If you wish to test a different null hypothesis, e.g. $H_0
Step10: As far as I know, Statsmodels does not provide a function to calculate 'greater than' or 'smaller than' alternative hypothesis. Reason being
Step11: Regression Diagnostics
Test of Homoskedasticity
In general we assume a constant variance of the error term (homoskedasticity; $Var(\epsilon_i) = \sigma^2$ for $i = 1, \ldots, N$). From the residuals vs. fitted plot we have to question this assumption. To test it mathematically, you can run a heteroskedasticity test. The stats package offers several test options; the more common ones are White's or the one from Breusch-Pagan. See here for more details on tests on heteroskedasticity.
Below the White test is applied as an example. The parameter 'reg.model.exog' simply contains the matrix X (here it is a [200 x 2] matrix with constant 1 in first column and values for TV in second). The output should become more clear when you check the function's help page (use ?sm.stats.diagnostic.het_white).
The null hypothesis is that the error variance does not depend on X, thus is homoskedastic. Based on the large f-statistic value we can gently reject the null-hypothesis that the error variance is homoskedastic.
Step12: If you wish to run tests with heteroskedastistic robust standard errors you can either access the reg object's robust standard errors (reg.HC0_se, reg.HC1_se, reg.HC2_se, reg.HC3_se) or, more conveniently, directly define the covariance estimator (e.g. 'HC3' as below) when you generate the object in the first place. Below example shows how you can do this. See also here or here for some further information.
Step13: Other Relevant Checks
The statsmodels package offers many more functions to run regression diagnostics, e.g. checks for autocorrelation, non-linearity, normality of residuals etc. These functions are applicable to both simple as well as multiple linear regression models. There's a short Jupyther notebook detailing some of the options.
Application
Step14: The dataframe df contains monthly closing prices on all SMI-stocks (incl. SMI index) with a date index in descending order. Let us create a separate Pandas dataframe with the returns of the past 60 months (dfRets) for both GEBN as well as the index.
Step15: Having done that, we are already set to run the regression and print the results.
Step16: Based on the regression output we have no reason to reject the null of Geberit's stock beta being equal to zero. The $R^2$ measure, though, shows that only a small portion of the variation in Geberits monthly returns is explained by SMI's monthly returns.
Step17: What we calculated above is often referred to as the raw beta. The beta value of a stock has been found to be on average closer to the mean value of 1.0, the beta of an average-systematic-risk portfolio, than to the value of the raw beta (Pinto et al. (2016)). This is why data providers such as Bloomberg publish the adjusted beta as first introduced by Blume (1971), which is calculated as
$$ \text{Adjusted beta} = 2/3 \cdot \text{raw beta} + 1/3 \cdot 1$$
Now, let us assume we are given the task to investigate whether a beta indeed regresses to 1 over time. For that we could, as a starting point, assess a stock's rolling beta over the past years. Note that this is just an example of use. Computationally it would be much faster to calculate the stock beta via the covariance/variance formula.
Step18: Now we are ready to call the function. This time we use the last two years of monthly returns. Thus we set window=24 to overwrite the default value of 60.
Step19: Though this is far away from a thorough analysis, plotting the results shows that at least in Geberit's case, there is indeed some truth to the assessment, that the beta exhibits some reversion to the market beta value of 1.
Step20: Multiple Linear Regression
Estimating the Regression Coefficients
Simple linear regression serves well to introduce the concept and to build a good understanding. However, in reality we often have to work with more than one predictor. In the advertising data set for example we had not only data on TV advertising spendings but also on radio newspaper. It thus makes sense to extend the simple to a multiple linear regression model.
We again use the Advertising data set to see how this is done in Python. The same functions from the statsmodels package apply to multiple linear regression. We run the following regression
$$\text{sales} = \beta_0 + \beta_1 \text{TV} + \beta_2 \text{radio} + \beta_3 \text{newspaper} + \epsilon$$
Step21: The coefficient for radio (0.1885) tells us, that - holding all other factors fixed - an additional 1'000 dollars in radio advertising spendings will boost the product's sales by 188.5 units.
Again
Step22: The output shows that we fail to reject the null hypothesis that $\beta_{TV} = 0.0475$.
Beyond the element-wise hypothesis tests the regression summary also provides F-statistic (and the corresponding p-value) on the combined hypothesis that
$$\begin{align}
H_0&
Step23: See the documentation page for further examples on how this function can be used.
Coefficient of Determination
The $R^2$ measure for the MLR is the same as for the SLR. However, in the case of MLR it has one drawback
Step24: Application
Step25: Side Note
Step26: Variable data is a dictionary with three entries
Step27: Next we calculate the log-returns of all SMI stocks. The share prices are taken from dataframe df which we loaded above.
Step28: Calculate Fama-French Coefficients
In order to run the regression we need to have matching indices. Fama-French's index is 'yyyy-mm' while our dataframe with returns has format 'yyyy-mm-dd'. Since we know that the length of both dataframes is equal, we can simply overwrite the index of one of the dataframes.
Step29: We are now in a position to run the multiple linear regression. We will again use the past 60 months. From the Fama French set we just need the first three columns. Column 4 is the risk free rate which we do not use.
Step30: And here are our factors
Step31: Hedge a Portfolio
Now that we have all the factors, let's assume we want to build a portfolio with all 20 SMI stocks that maximizes the Sharpe ratio (SR). As a further condition we want to limit our exposure to the SMB factor to, let's say, $\beta_p^{SMB} = 0$. How would we allocate our investment under these conditions? In mathematical terms we have the following optimization problem
Step32: Python's scipy package has a sublibrary for constrained optimization problems. We will use the minimize function and minimize the negative value of the Sharpe ratio (which is obviously equal to maximizing the SR)
Step33: Our constraints are as stated above | Python Code:
# Load relevant packages
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.rcParams['font.size'] = 14
Explanation: Linear Regression
Simple Linear Regression
Running a SLR in Python is fairly simple once you know how to use the relevant functions. What might be confusing is that there exist several packages which provide functions for linear regression. We will use functions from the statsmodels (sub-)package. Other packages such as e.g. scikit-learn have linear regression functions too, but what makes statsmodels stand out from other packages is its broad set of auxiliary functions for regression diagnostics. As usual we start by importing the packages needed for our task.
End of explanation
# From Advertising data set read cols 2:4
url = 'https://www.statlearning.com/s/Advertising.csv'
ad = pd.read_csv(url, sep=',', usecols=(np.arange(1, 5, 1)))
print(ad.head())
Explanation: As toy data we will use the 'Advertising' data set introduced in the chapter 4 of the script. The data is taken from James et al. (2013). A copy is provided on the book's website where we will download it from.
End of explanation
# Run regression and calculate fit
reg = sm.OLS(ad.sales, exog=sm.add_constant(ad.TV)).fit()
# Alternatively: reg = sm.OLS(ad.sales, sm.add_constant(ad.TV)).fit()
print(reg.summary())
Explanation: Next we run a linear regression of TV on sales to calculate the coefficients and print a summary output.
End of explanation
reg.summary().tables[1]
Explanation: Side note: If you are an R guy and prefer their syntax, you could import the statsmodels.formula.api subpackage and run something like:
from statsmodels.formula.api import ols
reg = ols("Sales ~ TV", data=ad).fit()
reg.summary()
Intercept will automatically be calculated in the above setting.
Both p-values for intercept ($\hat{\beta}_0$) and slope ($\hat{\beta}_1$) are smaller than any reasonable significance level and thus we can reject the null hypothesis that either of the coefficients is zero (or irrelevant).
Instead of printing the whole summary, we could also access each of the three summary as follows:
End of explanation
# Plot scatter & lm
plt.figure(figsize=(12, 8))
plt.scatter(ad.TV, ad.sales, marker='.', label='Sample') # Training data
plt.plot(ad.TV, reg.fittedvalues, c='k', label='Fit') # Linear fit
plt.ylabel('Sales')
plt.xlabel('TV')
plt.legend();
Explanation: Plotting the Fit
The sm.OLS() function calculates all kinds of regression-related results. All this information is attached to the reg object. For example, to plot the data we need the model's fitted values. These can be accessed by combining the regression object with the attribute .fittedvalues, as in reg.fittedvalues. The plot below shows how this can be of use. We plot the data and the fit using the standard plotting functions.
End of explanation
plt.figure(figsize=(12, 8))
plt.scatter(ad.TV, reg.resid)
plt.axhline(y=0, c='k') #Black horizontal line at 0
plt.xlabel('TV (fitted)')
plt.ylabel('Residuals');
Explanation: Let us plot residuals versus fitted values to do some visual regression diagnostics:
End of explanation
# Regression coefficients
print(reg.params, '\n')
print(reg.resid.head())
Explanation: The above two plots are just standard matplotlib plots serving the purpose of visual diagnostics. Beyond that, the statsmodels package has separate built-in plotting functions suited for visual regression diagnostics. We will not discuss them here, but if interested you'll find plenty of sources on the web (e.g. here, here or here)
Accessing the Output
The reg object contains a ton of information which you can all access. To see what there is, type reg. and press tab. Two examples are shown below. Notice that some are attributes (like x.shape, x.size) and do not need parentheses to call them. Others are methods (similar to .sum(), .min()) and require parentheses.
End of explanation
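As a small aside added here (not part of the original text): if tab completion is not at hand, Python's built-in dir() gives a rough overview of what the results object exposes.
# Public (non-underscore) attributes and methods of the fitted results object, first 15 alphabetically
[name for name in dir(reg) if not name.startswith('_')][:15]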
# 99% CI (alpha = 1%) based on t-distribution
reg.conf_int(alpha=0.01)
Explanation: Confidence Intervals & Hypothesis Testing
The 95%-confidence interval (CI) is printed in the summary above. If one wishes to calculate it for a different significance level (alpha), it is done as follows:
End of explanation
print(reg.tvalues, '\n')
print(reg.pvalues)
Explanation: The regression summary provides $t$-statistic and p-value for the null hypothesis $H_0: \hat{\beta}_j = 0$, $H_a: \hat{\beta}_j \neq 0$. You can call the resulting $t$-statistic and p-value with its attributes.
End of explanation
reg.t_test('TV=0.054')
Explanation: If you wish to test a different null hypothesis, e.g. $H_0: \hat{\beta}_{TV} = 0.054$ vs. $H_1: \hat{\beta}_{TV} \neq 0.054$, use the following code:
End of explanation
# R squared measure
reg.rsquared
Explanation: As far as I know, Statsmodels does not provide a function to calculate 'greater than' or 'smaller than' alternative hypotheses. The reason is that with symmetric distributions, the one-sided test can be derived from the two-sided test: a one-sided p-value is just half of the two-sided p-value. This means that given p and $t$ values from a two-tailed test, you would reject the null hypothesis of a greater-than test when p/2 < alpha and $t$ > 0, and of a less-than test when p/2 < alpha and $t$ < 0.
Coefficient of Determination
The $R^2$ measure, or "coefficient of determination", displays the proportion of the variability in $y$ that is well explained by the regression fit. It is defined as
$$\begin{equation}
R^2 = \frac{TSS - SSR}{TSS} = 1 - \frac{SSR}{TSS}
\end{equation}$$
where TSS is the total sum of squares, defined as $TSS = \sum (y_i - \bar{y})^2$, and SSR is the sum of squared residuals, given by $SSR = \sum (y_i - \hat{y}_i)^2$.
It is easy to call the $R^2$ value from the regression object reg as the following line shows.
End of explanation
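To make the two remarks above concrete, here is a short sketch added for illustration (it only uses objects defined earlier): the one-sided p-value obtained by halving its two-sided counterpart, and $R^2$ recomputed from its TSS/SSR definition.
# One-sided p-value for H_a: beta_TV > 0 (half the two-sided p-value; valid here since t > 0)
print(reg.pvalues['TV'] / 2, reg.tvalues['TV'] > 0)
# R^2 from the definition 1 - SSR/TSS, compared with the attribute
ssr = np.sum(reg.resid**2)
tss = np.sum((ad.sales - ad.sales.mean())**2)
print(1 - ssr/tss, reg.rsquared)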
# Test for heteroskedasticity with White test
wht = sm.stats.diagnostic.het_white(resid=reg.resid, exog=reg.model.exog)
print('f-statistic: {0:>19.4f} \n'
'p-value for f-statistic: {1:>7.4f}'.format(wht[2], wht[3]))
Explanation: Regression Diagnostics
Test of Homoskedasticity
In general we assume a constant variance of the error term (homoskedasticity; $Var(\epsilon_i) = \sigma^2$ for $i = 1, \ldots, N$). From the residuals vs. fitted plot we have to question this assumption. To test it mathematically, you can run a heteroskedasticity test. The stats package offers several test options; the more common ones are White's or the one from Breusch-Pagan. See here for more details on tests on heteroskedasticity.
Below the White test is applied as an example. The parameter 'reg.model.exog' simply contains the matrix X (here it is a [200 x 2] matrix with constant 1 in first column and values for TV in second). The output should become more clear when you check the function's help page (use ?sm.stats.diagnostic.het_white).
The null hypothesis is that the error variance does not depend on X, i.e. that it is homoskedastic. Based on the large f-statistic value we can safely reject the null hypothesis that the error variance is homoskedastic.
End of explanation
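The Breusch-Pagan alternative mentioned above follows the same calling pattern; a minimal sketch (my addition, not from the original notebook) could look like this.
# Breusch-Pagan test; H0: error variance does not depend on X (homoskedasticity)
bp = sm.stats.diagnostic.het_breuschpagan(reg.resid, reg.model.exog)
print('LM statistic: {0:.4f}, p-value: {1:.4f}'.format(bp[0], bp[1]))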
regRobust = sm.OLS(ad.sales, exog=sm.add_constant(ad.TV)).fit(cov_type='HC3')
print(regRobust.HC3_se, '\n')
print(reg.HC3_se)
Explanation: If you wish to run tests with heteroskedasticity-robust standard errors you can either access the reg object's robust standard errors (reg.HC0_se, reg.HC1_se, reg.HC2_se, reg.HC3_se) or, more conveniently, directly define the covariance estimator (e.g. 'HC3' as below) when you generate the object in the first place. The example below shows how you can do this. See also here or here for some further information.
End of explanation
df = pd.read_csv('Data/SMIDataMonthly.csv', sep=',',
parse_dates=['Date'], dayfirst=True,
index_col=['Date'])
df.head()
Explanation: Other Relevant Checks
The statsmodels package offers many more functions to run regression diagnostics, e.g. checks for autocorrelation, non-linearity, normality of residuals etc. These functions are applicable to both simple as well as multiple linear regression models. There's a short Jupyther notebook detailing some of the options.
Application: Stock Beta
A stock beta measures the systematic risk of a security, the tendency of a security to respond to swings in the broad market. Typically, a large, well diversified index is taken as a proxy for the market portfolio (e.g. S&P500, Euro Stoxx 50, SPI, etc.). There are different ways to calculate a stock beta. We will show the regression approach, where a stock's beta is the slope of the following linear regression:
$$\begin{equation}
r - r_f = \alpha + \beta(r_M - r_f) + e
\end{equation}$$
Let us look into Geberit's stock beta. As a proxy for the market portfolio we use the Swiss market index (SMI). The risk free rate is set to $r_f=0$, which is a fairly reasonable approach in light of the Swiss national bank's (SNB) interest rates for the past eight years (at the time of this writing in 2021). We will work with monthly returns for the past five years (60 months) - though other approaches (e.g. last 24 monthly returns, weekly returns for last 2 years, etc.) are reasonable choices too. The stock and SMI data we will load from a csv that was sourced through a financial data provider.
End of explanation
# Calculate returns and assign to variable dfRets
dfRets = pd.DataFrame()
dfRets['GEBNrets'] = np.log(df['GEBN'] / df['GEBN'].shift(-1))
dfRets['SMIrets'] = np.log(df['SMI'] / df['SMI'].shift(-1))
print(dfRets.head())
Explanation: The dataframe df contains monthly closing prices on all SMI-stocks (incl. SMI index) with a date index in descending order. Let us create a separate Pandas dataframe with the returns of the past 60 months (dfRets) for both GEBN as well as the index.
End of explanation
# Set observation period (last 60 monthly returns)
months = 60
# Create OLS object, run regression and calculate fit
regBeta = sm.OLS(endog=dfRets.iloc[:months, 0],
exog=sm.add_constant(dfRets.iloc[:months, 1])).fit()
# Show table on coefficients
print(regBeta.summary())
Explanation: Having done that, we are already set to run the regression and print the results.
End of explanation
# Get relevant information
beta = regBeta.params['SMIrets']
alpha = regBeta.params["const"]
rsqr = regBeta.rsquared
# Plot scatter & lm; add text with alpha, beta , R2
plt.figure(figsize=(12, 8))
plt.scatter(dfRets.iloc[:months, 1],
dfRets.iloc[:months, 0],
marker='.', label='Monthly Returns')
plt.plot(dfRets.iloc[:months, 1], regBeta.fittedvalues, c='k', label='Fit')
plt.gca().set_aspect('equal')
plt.gca().set_xlim(-0.1, 0.15)
plt.ylabel('Geberit Monthly Returns')
plt.xlabel('SMI Monthly Returns')
plt.legend(loc='lower right')
plt.text(-0.08, 0.13, 'Beta: {0: .2f}'.format(beta))
plt.text(-0.08, 0.12, 'Alpha: {0: .2f}'.format(alpha))
plt.text(-0.08, 0.11, 'R^2: {0: .2f}'.format(rsqr));
Explanation: Based on the regression output we have no reason to reject the null of Geberit's stock beta being equal to zero. The $R^2$ measure, though, shows that only a small portion of the variation in Geberits monthly returns is explained by SMI's monthly returns.
End of explanation
def rollingBeta(df, window=60):
'''Calculates the running beta of a stock.
Parameters
==========
df : [n x 2] pandas dataframe with log-returns for
stock and market portfolio. Index should be
datetime series.
window : rolling window with default value 60 [optional]
Returns
=======
rb : Pandas dataframe with (backward-looking) rolling beta.
'''
# Drop NA rows from df
df = df.dropna()
# Set up empty results array
res = np.empty(len(df) - window + 1)
# Loop through df
for i in range(0, len(df)):
# As long as remaining subset is >= window, we proceed
if (len(df) - i) >= window:
# Subset df
sub = df.iloc[i:window+i, :]
# Run Regression
model = sm.OLS(endog=sub.iloc[:, 0],
exog=sm.add_constant(sub.iloc[:, 1])).fit()
# Read out beta coefficient
res[i] = model.params[1]
# Format output to dataframe
rb = pd.DataFrame(data=res, index=df.index[:(len(df)-window+1)])
rb.columns = ['RollingBeta']
return rb
Explanation: What we calculated above is often referred to as the raw beta. The beta value of a stock has been found to be on average closer to the mean value of 1.0, the beta of an average-systematic-risk portfolio, than to the value of the raw beta (Pinto et al. (2016)). This is why data providers such as Bloomberg publish the adjusted beta as first introduced by Blume (1971), which is calculated as
$$ \text{Adjusted beta} = 2/3 \cdot \text{raw beta} + 1/3 \cdot 1$$
Now, let us assume we are given the task to investigate whether a beta indeed regresses to 1 over time. For that we could, as a starting point, assess a stock's rolling beta over the past years. Note that this is just an example of use. Computationally it would be much faster to calculate the stock beta via the covariance/variance formula.
End of explanation
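Two quick sketches related to the remarks above, added for illustration: the Blume adjustment applied to the raw beta estimated earlier, and the covariance/variance shortcut that should reproduce the same raw beta on the same sample (up to missing-value handling).
# Blume-adjusted beta from the raw regression beta
adjBeta = 2/3 * beta + 1/3 * 1
print('Raw beta: {0:.2f}, adjusted beta: {1:.2f}'.format(beta, adjBeta))
# Raw beta via cov(r_i, r_M) / var(r_M) over the same 60 monthly returns
sub = dfRets.iloc[:months, :].dropna()
print(sub['GEBNrets'].cov(sub['SMIrets']) / sub['SMIrets'].var())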
# Call function and save output to 'rollBeta'
rollBeta = rollingBeta(df=dfRets, window=24)
Explanation: Now we are ready to call the function. This time we use the last two years of monthly returns. Thus we set window=24 to overwrite the default value of 60.
End of explanation
# Import 'mdates' library to format dates in x-axis
import matplotlib.dates as mdates
# Plot rolling beta
fig, ax = plt.subplots(1, figsize=(12, 8))
ax.plot(rollBeta, label='Geberit Rolling Beta')
ax.axhline(y=1, c='gray', linestyle=':') # Horizontal line
ax.legend(fontsize=12)
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=6))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%Y'))
fig.autofmt_xdate(); # Autorotate x-axis for readability
Explanation: Though this is far away from a thorough analysis, plotting the results shows that at least in Geberit's case, there is indeed some truth to the assessment, that the beta exhibits some reversion to the market beta value of 1.
End of explanation
# Assign features and response to X and y
y = ad.sales
X = ad[['TV', 'radio', 'newspaper']]
X = sm.add_constant(X)
# Run regression and print summary
mlReg = sm.OLS(endog=y, exog=X).fit()
print(mlReg.summary())
Explanation: Multiple Linear Regression
Estimating the Regression Coefficients
Simple linear regression serves well to introduce the concept and to build a good understanding. However, in reality we often have to work with more than one predictor. In the advertising data set for example we had not only data on TV advertising spendings but also on radio newspaper. It thus makes sense to extend the simple to a multiple linear regression model.
We again use the Advertising data set to see how this is done in Python. The same functions from the statsmodels package apply to multiple linear regression. We run the following regression
$$\text{sales} = \beta_0 + \beta_1 \text{TV} + \beta_2 \text{radio} + \beta_3 \text{newspaper} + \epsilon$$
End of explanation
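As an illustrative aside (not in the original text), the fitted model can also be used to predict sales for a hypothetical advertising mix; the budget figures below are made up for the example.
# Predicted sales for a hypothetical budget of TV=100, radio=50, newspaper=25 (in 1'000 dollars)
# Column order follows the design matrix X: const, TV, radio, newspaper
mlReg.predict(np.array([[1, 100, 50, 25]]))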
# t-test on H0: beta(TV) = 0.0475
mlReg.t_test('TV=0.0475')
Explanation: The coefficient for radio (0.1885) tells us, that - holding all other factors fixed - an additional 1'000 dollars in radio advertising spendings will boost the product's sales by 188.5 units.
Again: If you are an R guy and prefer their syntax, you could import the statsmodels.formula.api subpackage and run something like:
from statsmodels.formula.api import ols
mlReg = ols("Sales ~ TV + radio + newspaper", data=ad).fit()
mlReg.summary()
Hypothesis Tests
Again the summary above provides $t$-statistic and p-value for each individual regression coefficient. As was the case for the simple linear regression, the underlying null hypothesis is that each parameter is zero ($H_0: \beta_{j,\, H_0} = 0$). For TV and Radio we reject the null even at the 1% significance level. However, given the large p-value for Newspaper we fail to reject the null for $\beta_{\text{Newspaper}} = 0$ at any reasonable level. Thus we can conclude that leaving Newspaper data out might be a reasonable option. If other null hypothesis' ought to be tested, we can use the same command as shown above.
End of explanation
# Test H0: beta(radio) = beta(newspaper) = 0.1
mlReg.f_test('radio = newspaper = 0.1')
Explanation: The output shows that we fail to reject the null hypothesis that $\beta_{TV} = 0.0475$.
Beyond the element-wise hypothesis tests the regression summary also provides F-statistic (and the corresponding p-value) on the combined hypothesis that
$$\begin{align}
H_0&: \quad \beta_1 = \beta_2 = \ldots = \beta_p = 0 \\
H_a&: \quad \beta_j \neq 0 \text{ for at least one $j$}
\end{align}$$
On the basis of the corresponding p-value (i.e. 1.58e-96) we can reject the null at any reasonable significance level. Should we be interested in assessing a particular hypothesis, say
$$\begin{align}
H_0&: \quad \beta_{TV} = \beta_{\text{Radio}} = 0.1 \\
H_a&: \quad \beta_{TV} \neq \beta_{\text{Radio}} \neq 0.1
\end{align}$$
we use the .f_test() method.
End of explanation
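The overall F-statistic and its p-value quoted above can also be read directly off the results object; a one-line sketch added here for reference.
# Overall F-test of H0: all slope coefficients are zero
print('F-statistic: {0:.2f}, p-value: {1:.3g}'.format(mlReg.fvalue, mlReg.f_pvalue))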
mlReg.rsquared_adj
Explanation: See the documentation page for further examples on how this function can be used.
Coefficient of Determination
The $R^2$ measure for the MLR is the same as for the SLR. However, in the case of MLR it has one drawback: the value will always increase when more explanatory variables are added to the model - even if those variables are only weakly associated with the response. To make good on this disadvantage a modificated measure is often used: adjusted $R^2$.
$$\begin{equation}
R^2_{adj} = 1 - (1-R^2) \frac{n-1}{n-p-1}
\end{equation}$$
To get this measure in Python, simply use the OLS object and call the .rsquared_adj attribute.
End of explanation
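To connect the formula above with the attribute, here is a quick hand calculation added as a sketch; n and p are taken from the fitted model.
# Adjusted R^2 by hand: 1 - (1 - R^2) * (n - 1) / (n - p - 1)
n = int(mlReg.nobs)      # number of observations
p = int(mlReg.df_model)  # number of predictors, excluding the constant
print(1 - (1 - mlReg.rsquared) * (n - 1) / (n - p - 1), mlReg.rsquared_adj)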
import pandas_datareader as web
# Define obs. period, start & enddate
months = 60
startdate = '2012-06-01'
enddate = '2017-12-31'
Explanation: Application: Factor Models
Fama-French Three Factor Model
We will apply the concept of multiple linear regression in the context of Fama-French's three factor model (Fama and French (1993)). Their model follows Ross' arbitrage pricinge theory which postulates that excess returns are linearly related to a set of systematic risk factors (Ross et al. (1973)). The factors can be returns on other assets, such as the market portfolio, or any other variable (e.g. interest rates, inflation, consumption growth, market sentiment, hedging demands etc.. Fama-French empirically discovered three factors to capture the systematic risk: (A) firm size, (B) book-to-market ratio (B/M) and (C) market risk. To quantify their findings, Fama-French constructed zero-net-investment factor portfolios capturing the systematic risk on firm size (factor is labeled 'small minus big' (SMB) and is constructed by going long on small and short on big size stocks) and B/M (labeled 'high minus low' (HML), i.e. going long on high B/M, short on low B/M stocks). The sensitivity of individual stocks to the three factors is then given by the estimated coefficients of a multiple linear regression. As a group the three factors combine for the total risk premium.
The expected excess return $R_{it}$ of asset $i$ at time $t$ in the Fama-French three-factor model is described by
$$\begin{equation}
R_{it} = \alpha_i + \beta_i^{M} (r_{M,t} - r_{f,t}) + \beta_i^{SMB} SMB_t + \beta_i^{HML} HML_t + \epsilon_{it}
\end{equation}$$
The above Fama-French factors are calculated on a monthly basis and published on Kenneth R. French's website. There you will also find information on the methodology of the model and lots of other possible factors besides the three we look into here. To run this regression in Python we use a shortcut. The pandas_datareader package is capable of loading the data without having to download a txt or csv file in a separate step.
We will calculate the factor beta for all 20 SMI stocks. For that we use Fama-French's 'European 3 Factors' data. Following our 'Stock Beta' example from above and for the sake of simplicity, the risk free rate for Switzerland is again assumed to be zero (wrt. $R_{it}$).
Prepare Data for Fama-French Model
We start by importing the pandas_datareader.data package and defining some key parameter.
End of explanation
# Load FF factors
data = web.DataReader('Europe_3_Factors', data_source='famafrench',
start=startdate, end=enddate)
Explanation: Side Note: If you want to know what data is available (and their labels), you can run the get_available_datasets() function.
from pandas_datareader.famafrench import get_available_datasets
get_available_datasets()
or simply check Kenneth R. French's website
End of explanation
# Select monthly data
ff = data[0]
# Sort data in descending order
ff = ff.sort_index(ascending=False)
# Convert returns to decimal percentages
ff = ff/100
print(ff.head(3))
Explanation: Variable data is a dictionary with three entries: monthly data, annual data and a description. We select the monthly rates stored under key 0 and format the data.
End of explanation
shsRets = np.log(df / df.shift(-1))
shsRets = shsRets['2012-06-01':'2017-12-31']
shsRets = shsRets.iloc[:, :-1] # We exclude the last column (with SMI data)
shsRets.head(3)
Explanation: Next we calculate the log-returns of all SMI stocks. The share prices are taken from dataframe df which we loaded above.
End of explanation
# Create matching indices
ff.index = shsRets.index
Explanation: Calculate Fama-French Coefficients
In order to run the regression we need to have matching indices. Fama-French's index is 'yyyy-mm' while our dataframe with returns has format 'yyyy-mm-dd'. Since we know that the length of both dataframes is equal, we can simply overwrite the index of one of the dataframes.
End of explanation
# Add constant to matrix for alphas (=intercept)
X = sm.add_constant(ff.iloc[:months, :3])
# Assign ticker to variable
tickers = shsRets.columns
# Create results matrix to paste beta factors
res = np.empty(shape=(5, len(tickers)))
# Run regression for each ticker
for i in range(0, len(tickers)):
# Select returns of share i
sub = shsRets.iloc[:months, i]
# Run regression
model = sm.OLS(endog=sub, exog=X).fit()
# Paste beta factors to 'res' matrix
res[0:4, i] = model.params
res[4, i] = model.rsquared_adj
# Format output to dataframe
ff3f = pd.DataFrame(data=res, index=['Alpha', 'BetaMkt', 'BetaSMB', 'BetaHML', 'R2_adj'])
ff3f.columns = tickers
Explanation: We are now in a position to run the multiple linear regression. We will again use the past 60 months. From the Fama French set we just need the first three columns. Column 4 is the risk free rate which we do not use.
End of explanation
ff3f
# Transpose matrix (.T) and display stats summary
print(ff3f.T.describe())
Explanation: And here are our factors:
End of explanation
# Define rf and (equally spread) start weights
rf = 0
wghts = np.repeat(1. / len(tickers), len(tickers))
# Expected stock returns based on ff3f model
expShsRets = rf + ff3f.T.Alpha + \
ff3f.T.BetaMkt * ff['Mkt-RF'].mean() + \
ff3f.T.BetaSMB * ff.SMB.mean() + \
ff3f.T.BetaHML * ff.HML.mean()
def pfStats(weights):
'''Returns basic measures for a portfolio
Parameters
==========
weights : array-like
weights for different securities in portfolio
Returns
=======
expPfRet : float
weighted, annualized expected portfolio return based on ff3f model
pfVol : float
historical annualized portfolio volatility
SR : float
portfolio Sharpe ratio for given riskfree rate
'''
expPfRet = np.sum(weights * expShsRets) * 12
pfVol = np.sqrt(np.dot(weights.T, np.dot(shsRets.cov() * 12, weights)))
SR = (expPfRet - rf) / pfVol
return np.array([expPfRet, pfVol, SR])
Explanation: Hedge a Portfolio
Now that we have all the factors, let's assume we want to build a portfolio with all 20 SMI stocks that maximizes the Sharpe ratio (SR). As a further condition we want to limit our exposure to the SMB factor to, let's say, $\beta_p^{SMB} = 0$. How would we allocate our investment under these conditions? In mathematical terms we have the following optimization problem:
$$\begin{equation}
\max_{w_i} SR = \frac{\mathbb{E}[r_p] - r_f}{\sigma_p} \qquad s.t. \qquad
\begin{cases}
\sum w_i &= 1 \\
\beta_p^{SMB} &= 0
\end{cases}
\end{equation}$$
Usually, to calculate the expected return $\mathbb{E}[r_p]$, historical returns are taken. For our case here, we will take the expected returns given by our Fama-French 3 Factor model (denoted $\mathbf{R_{ff}}$). The portfolio variance $\sigma_p$ however, we estimate using historical data. Alternatively one could think of taking the SMI volatility index value as proxy. But this is only approximately true because we will not have the same weights per stock as the SMI and this thus might be questionable. With that we have
$$\begin{equation}
\max_{w_i} SR = \frac{\mathbf{w}^T \left(r_f + \mathbf{\alpha} + \mathbf{\beta}^{M} (r_{M} - r_{f}) + \mathbf{\beta}^{SMB} SMB + \mathbf{\beta}^{HML} HML \right) - r_f}{\mathbf{w}^T \mathbf{\Sigma}\mathbf{w}} \qquad s.t. \qquad
\begin{cases}
\sum w_i &= 1 \\
\beta_p^{SMB} &= 0
\end{cases}
\end{equation}$$
Python can solve this problem numerically. We first set the stage by defining an auxiliary function pfStats that returns the expected portfolio return, volatility and Sharpe ratio given a vector of weights. Note that the function also makes use of other data, like monthly returns and the riskfree rate as previously defined (which is set to 0), but only the weights enter as a function input. This is necessary for the optimization function.
End of explanation
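Before handing the problem to the optimizer it can be useful to sanity-check the helper on the naive equal-weight portfolio; this check is my addition and not part of the original notebook.
# Return, volatility, Sharpe ratio and SMB exposure of the equal-weight starting portfolio
print(pfStats(wghts))
print('Equal-weight SMB beta: {0:.3f}'.format(np.sum(wghts * ff3f.T.BetaSMB)))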
import scipy.optimize as sco
def minSR(wghts):
return -pfStats(wghts)[2]
Explanation: Python's scipy package has a sublibrary for constrained optimization problems. We will use the minimize function and minimize the negative value of the Sharpe ratio (which is obviously equal to maximizing the SR)
End of explanation
# Constraints and bounds
constr = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1},
{'type': 'eq', 'fun': lambda x: np.sum(x * ff3f.T.BetaSMB) - 0}]
bnds = tuple((-1,1) for x in range(len(tickers)))
# Minimization function
optPf = sco.minimize(minSR, x0=wghts, method='SLSQP', bounds=bnds, constraints=constr)
# Check if conditions are actually met
print('Sum of weights: ', np.sum(optPf['x']))
print('Beta SMB factor: ', np.sum(optPf['x'] * ff3f.T.BetaSMB))
# Calculate portfolio stats given optimal weights
rsltsOptPf = pfStats(optPf['x'])
# Format weights into dataframe with Tickers as heading
optWghts = pd.DataFrame(data=optPf['x'], index=tickers)
optWghts.columns = ['optimalWghts']
# Print results
print('Portfolio return: ', str(rsltsOptPf[0]))
print('Portfolio volatility: ', str(rsltsOptPf[1]))
print('Portfolio SR: ', str(rsltsOptPf[2]), '\n')
print(str(optWghts))
Explanation: Our constraints are as stated above: $\sum w_i = 1$, $\beta_P^{SMB} = 0$. Additionally we set bounds for the weights such that short/long position are allowed but only up to 100% per share ($w_i \in [-1, 1]\; \forall i \in [1, 2, \ldots, n]$).
End of explanation |
13,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variable pitch solenoid model
A.M.C. Dawes - 2015
A model to design a variable pitch solenoid and calculate the associated on-axis B-field.
Step2: Parameters
Step3: Design discussion and comparison of two methods
Step4: The new way (as arrays)
Step5: The original way
Step6: Comparison | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
mpl.rcParams['legend.fontsize'] = 10
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
I = 10 #amps
mu = 4*np.pi*1e-7 #This gives B in units of Tesla
Explanation: Variable pitch solenoid model
A.M.C. Dawes - 2015
A model to design a variable pitch solenoid and calculate the associated on-axis B-field.
End of explanation
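For orientation, and as my own reading of the code that follows rather than text from the original notebook: the on-axis field is obtained by numerically integrating the Biot-Savart law along the parametrised wire $\vec{r}(p)$,
$$\begin{equation}
\vec{B}(z') = \frac{\mu_0 I}{4\pi} \int \frac{d\vec{r}/dp \times \left(\vec{r}(p) - z'\hat{z}\right)}{\left|\vec{r}(p) - z'\hat{z}\right|^{3}}\, dp
\end{equation}$$
which the functions B and B2 below approximate as a finite sum over steps dp (up to the sign convention chosen for the current direction).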
R = 0.02 #meters
length = 0.15 #meters
c1 = 0.0
c2 = -5.0
c3 = .1
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
p = np.linspace(0, 2 * np.pi, 5000)
theta = c1*p + c2*p**2 + c3*p**3
x = R * np.cos(theta)
y = R * np.sin(theta)
z = p*length/(2*np.pi)
dp = p[1] - p[0]
ax.plot(x, y, z, label='solenoid')
ax.legend()
ax.set_aspect('equal')
plt.show()
plt.plot(theta)
def B(zprime):
    """Returns B field in Tesla at point zprime on the z-axis."""
r = np.vstack((x,y,z-zprime)).transpose()
r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
r_mag = np.vstack((r_mag,r_mag,r_mag)).transpose()
dr = r[1:,:] - r[:-1,:]
drdp = dr/dp
crossterm = np.cross(drdp,r[:-1,:])
return mu*I/(4*np.pi) * np.nansum(crossterm / r_mag[:-1,:]**3 * dp,axis=0)
zpoints = np.arange(0,0.15,0.001)
#Bdata = np.zeros((len(zpoints),3))
Bdata = [1e4*B(zpoint) for zpoint in zpoints]
plt.plot(zpoints,Bdata)
ax = plt.gca()
ax.axvspan(0.03,0.12,alpha=0.2,color="green")
plt.ylabel("B-field (G)")
plt.xlabel("z (m)")
Explanation: Parameters:
End of explanation
#Calculate r vector:
r = np.vstack((x,y,z)).transpose()
plt.plot(r)
# Calculate dr vector:
dr = r[1:,:] - r[:-1,:]
plt.plot(dr)
# Calculate dp vector:
dp = p[1:] - p[:-1]
plt.plot(dp)
# or the smart way since p is linear:
dp = p[1] - p[0]
dp
r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
plt.plot(r_mag)
Explanation: Design discussion and comparison of two methods:
The following are remnants of the design of this notebook but may be useful for verification and testing of the method.
End of explanation
def B2(zprime):
r = np.vstack((x,y,z-zprime)).transpose()
r_mag = np.sqrt(r[:,0]**2 + r[:,1]**2 + r[:,2]**2)
r_mag = np.vstack((r_mag,r_mag,r_mag)).transpose()
dr = r[1:,:] - r[:-1,:]
drdp = dr/dp
crossterm = np.cross(drdp,r[:-1,:])
return mu*I/(4*np.pi) * np.nansum(crossterm / r_mag[:-1,:]**3 * dp,axis=0)
B2list = []
for i in np.arange(0,0.15,0.001):
B2list.append(B2(i))
plt.plot(B2list)
Explanation: The new way (as arrays):
Converted the for loops to numpy array-based operations. Usually this just means taking two shifted arrays and subtracting them (for the delta quantities). But we also do some stacking to make the arrays easier to handle. For example, we stack x y and z into the r array. Note, this uses dp, and x,y,z as defined above, all other quantities are calculated in the loop because r is always relative to the point of interest.
End of explanation
def B(zprime):
B = 0
for i in range(len(x)-1):
dx = x[i+1] - x[i]
dy = y[i+1] - y[i]
dz = z[i+1] - z[i]
dp = p[i+1] - p[i]
drdp = [dx/dp, dy/dp, dz/dp]
r = [x[i],y[i],z[i]-zprime]
r_mag = np.sqrt(x[i]**2 + y[i]**2 + (z[i]-zprime)**2)
B += mu*I/(4*np.pi) * np.cross(drdp,r) / r_mag**3 * dp
return B
Blist = []
for i in np.arange(0,0.15,0.001):
Blist.append(B(i))
plt.plot(Blist)
Explanation: The original way:
Warning, this is slow!
End of explanation
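If you want to quantify the speed difference between the loop version and the array version, a quick check with IPython's timeit magic might look like this; the test point 0.075 m is an arbitrary choice and the numbers depend on your machine.
# Compare the loop-based B against the vectorised B2 at a single on-axis point
%timeit B(0.075)
%timeit B2(0.075)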
Blist_arr = np.asarray(Blist)
B2list_arr = np.asarray(B2list)
plt.plot(Blist_arr - B2list_arr)
Explanation: Comparison:
Convert lists to arrays, then plot the difference:
End of explanation |
13,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Stats Quality for 2016 College Nationals
As one of the biggest tournaments hosted by USAU, the Club Nationals is one of the few tournaments where player statistics are relatively reliably tracked. For each tournament game, each player's aggregate scores, assists, Ds, and turns are counted, although its quite possible the definition of a "D" or a "Turn" could differ across stat-keepers.
Data below was scraped from the USAU website. First we'll set up some imports to be able to load this data.
Step2: Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).
Step3: Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game
Step4: There are a total of 69 unreported scorers and 86 unreported assisters (although its possible some of those 17 scores were callahans). At a quick glance a lot of these missing results are from less important games, such as the Machine-Madison Club placement game.
Step5: All games had reported turnovers | Python Code:
import usau.reports
import usau.fantasy
from IPython.display import display, HTML
import pandas as pd
pd.options.display.width = 200
pd.options.display.max_colwidth = 200
pd.options.display.max_columns = 200
def display_url_column(df):
Helper for formatting url links
df.url = df.url.apply(lambda url: "<a href='{base}{url}'>Match Report Link</a>"
.format(base=usau.reports.USAUResults.BASE_URL, url=url))
display(HTML(df.to_html(escape=False)))
Explanation: Stats Quality for 2016 College Nationals
As one of the biggest tournaments hosted by USAU, the Club Nationals is one of the few tournaments where player statistics are relatively reliably tracked. For each tournament game, each player's aggregate scores, assists, Ds, and turns are counted, although its quite possible the definition of a "D" or a "Turn" could differ across stat-keepers.
Data below was scraped from the USAU website. First we'll set up some imports to be able to load this data.
End of explanation
# Read data from csv files
usau.reports.club_nats_men_2016.load_from_csvs()
usau.reports.club_nats_mixed_2016.load_from_csvs()
usau.reports.club_nats_women_2016.load_from_csvs()
Explanation: Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).
End of explanation
missing_tallies = pd.concat([usau.reports.club_nats_men_2016.missing_tallies,
usau.reports.club_nats_mixed_2016.missing_tallies,
usau.reports.club_nats_women_2016.missing_tallies,
])
display_url_column(missing_tallies[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
Explanation: Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game:
End of explanation
(missing_tallies["Score"] - missing_tallies["Gs"]).sum(), (missing_tallies["Score"] - missing_tallies["As"]).sum()
Explanation: There are a total of 69 unreported scorers and 86 unreported assisters (although its possible some of those 17 scores were callahans). At a quick glance a lot of these missing results are from less important games, such as the Machine-Madison Club placement game.
End of explanation
men_matches = usau.reports.club_nats_men_2016.match_results
mixed_matches = usau.reports.club_nats_mixed_2016.match_results
women_matches = usau.reports.club_nats_women_2016.match_results
display_url_column(pd.concat([men_matches[(men_matches.Ts == 0) & (men_matches.Gs > 0)],
mixed_matches[(mixed_matches.Ts == 0) & (mixed_matches.Gs > 0)],
women_matches[(women_matches.Ts == 0) & (women_matches.Gs > 0)]])
[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
Explanation: All games had reported turnovers:
End of explanation |
13,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
下載 ETC M06A 資料
<a href="http
Step1: 基本的資料
Step2: 反過來由檔名找日期,可以用 regexp 或者 datetime
Step3: 抓所有的壓縮檔案
Step4: 將 .tar.gz 重新打包成 .tar.xz | Python Code:
from urllib.request import urlopen, urlretrieve
import tqdm
Explanation: 下載 ETC M06A 資料
<a href="http://www.freeway.gov.tw/UserFiles/File/TIMCCC/TDCS%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8A(tanfb)v3.0-1.pdf">國道高速公路電子收費交通資料蒐集支援系統(Traffic Data Collection System,TDCS)使用手冊</a>
End of explanation
# 歷史資料網址
data_baseurl="http://tisvcloud.freeway.gov.tw/history/TDCS/M06A/"
# 壓縮檔的檔名格式
filename_format="M06A_{year:04d}{month:02d}{day:02d}.tar.gz".format
# csv 檔的路徑格式
csv_format = "M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv".format
# 下載檔案的程式
# 如果有 ipywidgets, 可以將 tqdm.tqdm 換成 tqdm.tqdm_notebook 比較 notebook 一點的界面
# 將 req 下載到檔案
def download_req(req, filename):
# 取得檔案長度
total = int(req.getheader("Content-Length"))
# tqdm 的設定
tqdm_conf = dict(total=total, desc=filename, unit='B', unit_scale=True)
# 開啟 tqdm 進度條及寫入檔案
with tqdm.tqdm(**tqdm_conf) as pbar:
with open(filename,'wb') as f:
# 從 req 每次讀入 8192 byte 的資料
for data in iter(lambda: req.read(8192), b""):
# 寫入檔案,並且更新進度條
pbar.update(f.write(data))
def download_M06A(year, month, day):
# 依照年月日來設定檔名
filename = filename_format(year=year, month=month, day=day)
# 用 urlopen 開啟連結
with urlopen(data_baseurl + filename) as req:
download_req(req, filename)
download_M06A(2016,12,18)
# 其實也可以用 urlretrieve
# 下面的寫法改自 tqdm 範例
filename = filename_format(year=2015, month=6, day=26)
with tqdm.tqdm(desc=filename, unit='B', unit_scale=True) as pbar:
last_b = 0
def tqdmhook(b, bsize, tsize):
nonlocal last_b
if tsize != -1:
pbar.total = tsize
pbar.update((b-last_b)*bsize)
last_b = b
urlretrieve(data_baseurl+filename, filename=filename, reporthook=tqdmhook)
Explanation: 基本的資料
End of explanation
import re
m=re.match("M06A_(\d{4})(\d\d)(\d\d).tar.gz" ,"M06A_20170103.tar.gz")
m.groups()
import datetime
datetime.datetime.strptime("M06A_20170103.tar.gz", "M06A_%Y%m%d.tar.gz")
Explanation: 反過來由檔名找日期,可以用 regexp 或者 datetime
End of explanation
# 使用 BeautifulSoup4 來解析
from bs4 import BeautifulSoup
# 抓下目錄頁
with urlopen(data_baseurl) as req:
data = req.read()
# 用 BeautifulSoup 解析目錄頁
soup = BeautifulSoup(data, "html.parser")
# 找到所有 <a href=... 的 tag
files = set(x.attrs['href'] for x in soup.find_all('a') if 'href' in x.attrs)
#files = set(x for x in files if x and x.endswith(".tar.gz") and x.startswith("M06A_"))
# 過濾剩下 href 開頭為 M06A_,結尾是.tar.gz 並且解出年月日
re_M06A_tgz=re.compile("M06A_(\d{4})(\d\d)(\d\d).tar.gz")
files = (re_M06A_tgz.match(x) for x in files)
files = [x.groups() for x in files if x]
files[:10]
# 結合上面來抓所有的資料
for y,m,d in files:
download_M06A(int(y), int(m), int(d))
Explanation: 抓所有的壓縮檔案
End of explanation
import glob
import lzma
import gzip
import os
import os.path
# 建立輸出目錄 xz
os.makedirs("xz", exist_ok=True)
def repack(filename):
# 原來的檔名需要是 gz 結尾
assert filename.endswith("gz")
# 檔案大小,用來顯示進度條
length = os.path.getsize(filename)
# 輸出檔名
xzfn = os.path.join("xz/", os.path.split(f)[-1][:-2]+"xz")
# 不要覆蓋已經有的檔案
if os.path.isfile(xzfn):
print("skip", filename)
return
# 開啟檔案和進度條, lzma 的 preset 可設定 0~9
with gzip.open(filename, 'r') as gzfile, \
lzma.open(xzfn, "w", preset=1) as xzfile, \
tqdm.tqdm(total=length, desc=filename, unit='B', unit_scale=True) as pbar:
# 從 .gz 解壓縮 data
for data in iter(lambda: gzfile.read(1024*1024), b""):
# 將 data 寫入 .xz
xzfile.write(data)
# 更新 pbar
pbar.update(gzfile.fileobj.tell() - pbar.n)
# 找出檔案,依序重新壓縮
for f in glob.glob("M06A_201612*.gz"):
repack(f)
Explanation: 將 .tar.gz 重新打包成 .tar.xz
End of explanation |
13,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test DFA-2-RegExp
Step1: As this regular expression is nearly unreadable, The notebook Rewrite.ipynb contains the definition of the function simplify that can be used to simplify this expression.
Step2: The function regexp_2_string takes a regular expression that is represented as a nested tuple and transforms it into a string. | Python Code:
%run DFA-2-RegExp.ipynb
%run FSM-2-Dot.ipynb
delta = { (0, 'a'): 0,
(0, 'b'): 1,
(1, 'a'): 1
}
A = {0, 1}, {'a', 'b'}, delta, 0, {1}
g, _ = dfa2dot(A)
g
r = dfa_2_regexp(A)
r
Explanation: Test DFA-2-RegExp
End of explanation
%run Rewrite.ipynb
s = simplify(r, Rules)
s
Explanation: As this regular expression is nearly unreadable, The notebook Rewrite.ipynb contains the definition of the function simplify that can be used to simplify this expression.
End of explanation
def regexp_2_string(r):
if r == 0:
return '0'
if r == '': # epsilon
return '""'
if isinstance(r, str): # single characters
return r
if r[0] == '&': # concatenation
r1, r2 = r[1:]
return regexp_2_string(r1) + regexp_2_string(r2)
if r[0] == '+':
r1, r2 = r[1:]
return '(' + regexp_2_string(r1) + '+' + regexp_2_string(r2) + ')'
if r[0] == '*':
r1 = r[1]
if isinstance(r1, str):
return regexp_2_string(r1) +'*'
else:
return '(' + regexp_2_string(r1) + ')*'
raise Exception(f'{r} is not a suitable regular expression')
print(regexp_2_string(s))
Explanation: The function regexp_2_string takes a regular expression that is represented as a nested tuple and transforms it into a string.
End of explanation |
13,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3 STYLE="background
Step1: <h3 STYLE="background
Step2: <h3 STYLE="background
Step3: <h4 style="border-bottom
Step4: 上図のように、qualityが6未満のワインと6以上のワインは volatile acidity の分布が異なるように見えます。その差が有意かどうか t検定 で確認してみましょう。
Step5: 同様に、qualityが6未満のワインと6以上のワインでは pH の分布が異なるか調べてみましょう。
Step6: <h4 style="border-bottom
Step7: class 列が 0 なら青色、1 なら赤色に彩色します。
Step8: その彩色で散布図行列を描きましょう。
Step9: 上図から、各変数と、quality の良し悪しとの関係がボンヤリとつかめてきたのではないでしょうか。続いて主成分分析をしてみます。
Step10: 分かったような分からないような結果ですね。quality の良し悪しを分類・予測するのは簡単ではなさそうです。
<h3 STYLE="background
Step11: <h3 STYLE="background
Step12: <h3 STYLE="background
Step13: 正解率の数字を出すだけなら以上でおしまいですが、具体的な予測結果を確認したい場合は次のようにします。
Step14: <h3 STYLE="background
Step15: さきほど作成した教師データを使って、これらの分類器で順番に予測して、正解率(train)と正解率(test)を計算してみましょう。
Step16: 訓練データの作成はランダムに行なうので、作成のたびに正解率の数字は変わります。場合によっては、分類器の順序が前後することもあります。それでは適切な性能評価がしにくいので、教師データを何度も作り直して正解率を計算してみましょう。
Step17: 以上、様々な分類器を用いて、ワインの品質の善し悪しを予測しました。それぞれの分類器にはそれぞれのパラメーターがありますが、上の例では全てデフォルト値を使っています。上手にパラメーターをチューニングすれば、もっと良い予測性能が出せるかもしれません。ですが今回はここまでとさせていただきます。興味があったらぜひ調べてみてください。
<h4 style="padding | Python Code:
# 数値計算やデータフレーム操作に関するライブラリをインポートする
import numpy as np
import pandas as pd
import scipy as sp
from scipy import stats
# URL によるリソースへのアクセスを提供するライブラリをインポートする。
# import urllib # Python 2 の場合
import urllib.request # Python 3 の場合
# 図やグラフを図示するためのライブラリをインポートする。
%matplotlib inline
import matplotlib.pyplot as plt
# 機械学習関連のライブラリ群
from sklearn.model_selection import train_test_split # 訓練データとテストデータに分割
from sklearn.metrics import confusion_matrix # 混合行列
from sklearn.decomposition import PCA #主成分分析
from sklearn.linear_model import LogisticRegression # ロジスティック回帰
from sklearn.neighbors import KNeighborsClassifier # K近傍法
from sklearn.svm import SVC # サポートベクターマシン
from sklearn.tree import DecisionTreeClassifier # 決定木
from sklearn.ensemble import RandomForestClassifier # ランダムフォレスト
from sklearn.ensemble import AdaBoostClassifier # AdaBoost
from sklearn.naive_bayes import GaussianNB # ナイーブ・ベイズ
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis # 線形判別分析
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis # 二次判別分析
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;">Step 5. 機械学習で二値分類</h3>
<ol>
<li><a href="#1">「ワインの品質」データ読み込み</a>
<li><a href="#2">2群に分ける</a>
<li><a href="#3">説明変数と目的変数に分ける</a>
<li><a href="#4">訓練データとテストデータに分ける</a>
<li><a href="#5">ロジスティク回帰</a>
<li><a href="#6">いろんな機械学習手法を比較する</a>
</ol>
<h4 style="border-bottom: solid 1px black;">Step 5 の目標</h4>
様々な機械学習法で二分類を行って性能評価する。
<img src="fig/cv.png">
End of explanation
# ウェブ上のリソースを指定する
url = 'https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt'
# 指定したURLからリソースをダウンロードし、名前をつける。
# urllib.urlretrieve(url, 'winequality-red.csv') # Python 2 の場合
urllib.request.urlretrieve(url, 'winequality-red.txt') # Python 3 の場合
# データの読み込み
df1 = pd.read_csv('winequality-red.txt', sep='\t', index_col=0)
df1.head() # 先頭5行だけ表示
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="1">1. 「ワインの品質」データ読み込み</a></h3>
データは <a href="http://archive.ics.uci.edu/ml/index.php" target="_blank">UC Irvine Machine Learning Repository</a> から取得したものを少し改変しました。
赤ワイン https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt
白ワイン https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt
<h4 style="border-bottom: solid 1px black;"> <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality.names">詳細</a></h4>
<ol>
<li>fixed acidity : 不揮発酸濃度(ほぼ酒石酸濃度)
<li>volatile acidity : 揮発酸濃度(ほぼ酢酸濃度)
<li>citric acid : クエン酸濃度
<li>residual sugar : 残存糖濃度
<li>chlorides : 塩化物濃度
<li>free sulfur dioxide : 遊離亜硫酸濃度
<li>total sulfur dioxide : 亜硫酸濃度
<li>density : 密度
<li>pH : pH
<li>sulphates : 硫酸塩濃度
<li>alcohol : アルコール度数
<li>quality (score between 0 and 10) : 0-10 の値で示される品質のスコア
</ol>
End of explanation
# 簡単な例
toy_data = pd.DataFrame([[1, 4, 7, 10, 13, 16], [2, 5, 8, 11, 14, 27], [3, 6, 9, 12, 15, 17], [21, 24, 27, 20, 23, 26]],
index = ['i1', 'i2', 'i3', 'i4'],
columns = list("abcdef"))
toy_data # 中身の確認
# F列の値が 20 未満の列だけを抜き出す
toy_data[toy_data['f'] < 20]
# F列の値が 20 以上の列だけを抜き出す
toy_data[toy_data['f'] >= 20]
# F列の値が 20 以上の列だけを抜き出して、そのB列を得る
pd.DataFrame(toy_data[toy_data['f'] >= 20]['b'])
# classという名の列を作り、F列の値が 20 未満なら 0 を、そうでなければ 1 を入れる
toy_data['class'] = [0 if i < 20 else 1 for i in toy_data['f'].tolist()]
toy_data # 中身を確認
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="2">2. 2群に分ける</a></h3>
ここでは、ワインの品質を「6未満(よくない)」と「6以上(よい)」の2群に分けてから、機械学習を用いて、pH や volatile acidity などの変数から品質を予測してみましょう。まずは、2群に分けることから始めます。
<h4 style="border-bottom: solid 1px black;">簡単な例で説明</h4>
データを2群に分けるにあたって、pandasの操作が少し分かりにくいので、簡単な例を用いて説明します。
End of explanation
# quality が 6 未満の行を抜き出して、先頭5行を表示する
df1[df1['quality'] < 6].head()
# quality が 6 以上の行を抜き出して、先頭5行を表示する
df1[df1['quality'] >= 6].head()
fig, ax = plt.subplots(1, 1)
# quality が 6 未満の行を抜き出して、x軸を volatile acidity 、 y軸を alcohol として青色の丸を散布する
df1[df1['quality'] < 6].plot(kind='scatter', x=u'volatile acidity', y=u'alcohol', ax=ax,
c='blue', alpha=0.5)
# quality が 6 以上の行を抜き出して、x軸を volatile acidity 、 y軸を alcohol として赤色の丸を散布する
df1[df1['quality'] >= 6].plot(kind='scatter', x=u'volatile acidity', y=u'alcohol', ax=ax,
c='red', alpha=0.5, grid=True, figsize=(5,5))
plt.show()
# quality が 6 未満のものを青色、6以上のものを赤色に彩色して volatile acidity の分布を描画
df1[df1['quality'] < 6]['volatile acidity'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='blue')
df1[df1['quality'] >= 6]['volatile acidity'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='red')
Explanation: <h4 style="border-bottom: solid 1px black;">実データに戻ります</h4>
以下、quality が6未満のワインと6以上のワインに分け、どのように違うのか調べてみましょう。
End of explanation
# 対応のないt検定
significance = 0.05
X = df1[df1['quality'] < 6]['volatile acidity'].tolist()
Y = df1[df1['quality'] >= 6]['volatile acidity'].tolist()
t, p = stats.ttest_ind(X, Y)
print( "t 値は %(t)s" %locals() )
print( "確率は %(p)s" %locals() )
if p < significance:
print("有意水準 %(significance)s で、有意な差があります" %locals())
else:
print("有意水準 %(significance)s で、有意な差がありません" %locals())
Explanation: 上図のように、qualityが6未満のワインと6以上のワインは volatile acidity の分布が異なるように見えます。その差が有意かどうか t検定 で確認してみましょう。
End of explanation
# quality が 6 未満のものを青色、6以上のものを赤色に彩色して pH の分布を描画
df1[df1['quality'] < 6]['pH'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='blue')
df1[df1['quality'] >= 6]['pH'].hist(figsize=(3, 3), bins=20, alpha=0.5, color='red')
# 対応のないt検定
significance = 0.05
X = df1[df1['quality'] <= 5]['pH'].tolist()
Y = df1[df1['quality'] > 5]['pH'].tolist()
t, p = stats.ttest_ind(X, Y)
print( "t 値は %(t)s" %locals() )
print( "確率は %(p)s" %locals() )
if p < significance:
print("有意水準 %(significance)s で、有意な差があります" %locals())
else:
print("有意水準 %(significance)s で、有意な差がありません" %locals())
Explanation: 同様に、qualityが6未満のワインと6以上のワインでは pH の分布が異なるか調べてみましょう。
End of explanation
df1['class'] = [0 if i <= 5 else 1 for i in df1['quality'].tolist()]
df1.head() # 先頭5行を表示
Explanation: <h4 style="border-bottom: solid 1px black;">分類を表す列を追加する</h4>
quality が 6 未満のワインを「0」、6以上のワインを「1」とした class 列を追加しましょう。
End of explanation
# それぞれに与える色を決める。
color_codes = {0:'#0000FF', 1:'#FF0000'}
colors = [color_codes[x] for x in df1['class'].tolist()]
Explanation: class 列が 0 なら青色、1 なら赤色に彩色します。
End of explanation
pd.plotting.scatter_matrix(df1.dropna(axis=1)[df1.columns[:10]], figsize=(20, 20), color=colors, alpha=0.5)
plt.show()
Explanation: その彩色で散布図行列を描きましょう。
End of explanation
dfs = df1.apply(lambda x: (x-x.mean())/x.std(), axis=0).fillna(0) # データの正規化
pca = PCA()
pca.fit(dfs.iloc[:, :10])
# データを主成分空間に写像 = 次元圧縮
feature = pca.transform(dfs.iloc[:, :10])
#plt.figure(figsize=(6, 6))
plt.scatter(feature[:, 0], feature[:, 1], alpha=0.5, color=colors)
plt.title("Principal Component Analysis")
plt.xlabel("The first principal component")
plt.ylabel("The second principal component")
plt.grid()
plt.show()
Explanation: 上図から、各変数と、quality の良し悪しとの関係がボンヤリとつかめてきたのではないでしょうか。続いて主成分分析をしてみます。
End of explanation
X = dfs.iloc[:, :10] # explanatory variables
y = df1.iloc[:, 12] # target variable
X.head() # show the first 5 rows to check
pd.DataFrame(y).T # check the target variable; transposed because it is hard to read when displayed vertically
Explanation: The result is somewhat inconclusive. Classifying and predicting good versus bad quality does not look like an easy task.
<h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="3">3. Splitting into explanatory and target variables</a></h3>
So far we have split the wines into two quality groups. Next, let's separate the target variable (here, quality) from the explanatory variables (all the other variables).
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4) # random split into training and test data
X_train.head() # show the first 5 rows to check
pd.DataFrame(y_train).T # transposed because it is hard to read when displayed vertically
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="4">4. Splitting into training data and test data</a></h3>
To evaluate the performance of a machine-learning model, the known data is split into training data (also called the training set) and test data (the test set). A prediction model is built by training on the training data, and its performance is judged by how well it predicts the test data that was not used to build it. This kind of evaluation is called "cross-validation" (a compact sketch of k-fold cross-validation follows this explanation). Here we use:
Training data (60% of all data)
X_train : explanatory variables of the training data
y_train : target variable of the training data
Test data (40% of all data)
X_test : explanatory variables of the test data
y_test : target variable of the test data
The goal is to learn the relationship between X_train and y_train and then predict y_test from X_test.
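As a side note, a compact way to do k-fold cross-validation with scikit-learn is sketched below (the import path of cross_val_score is assumed to match this environment, and the 5-fold setting is only an example):
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
scores = cross_val_score(LogisticRegression(), X, y, cv=5) # accuracy for each of the 5 folds
scores.mean() # average accuracy over the folds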
End of explanation
clf = LogisticRegression() # create the model
clf.fit(X_train, y_train) # train the model
# Accuracy (train): how well the model predicts the data it was trained on
clf.score(X_train, y_train)
# Accuracy (test): how well the model predicts data it was not trained on
clf.score(X_test, y_test)
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="5">5. Logistic regression</a></h3>
Logistic regression is one of the best-known machine-learning models. While linear regression predicts a quantitative value, logistic regression predicts a probability of occurrence. The basic idea is the same as in linear regression, but the formula and its assumptions are adapted so that the prediction always falls between 0 and 1.
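Since logistic regression outputs a probability, you can also inspect the predicted probabilities directly; for example (an optional check):
clf.predict_proba(X_test)[:5] # predicted probabilities of class 0 and class 1 for the first five test samples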
End of explanation
y_predict = clf.predict(X_test)
pd.DataFrame(y_predict).T
# Confusion matrix showing how well the predictions match the true answers
pd.DataFrame(confusion_matrix(y_predict, y_test), index=['predicted 0', 'predicted 1'], columns=['real 0', 'real 1'])
Explanation: If all you want is the accuracy score, you can stop here; if you want to inspect the concrete predictions, you can do the following.
End of explanation
names = ["Logistic Regression", "Nearest Neighbors",
"Linear SVM", "Polynomial SVM", "RBF SVM", "Sigmoid SVM",
"Decision Tree","Random Forest", "AdaBoost", "Naive Bayes",
"Linear Discriminant Analysis","Quadratic Discriminant Analysis"]
classifiers = [
LogisticRegression(),
KNeighborsClassifier(),
SVC(kernel="linear"),
SVC(kernel="poly"),
SVC(kernel="rbf"),
SVC(kernel="sigmoid"),
DecisionTreeClassifier(),
RandomForestClassifier(),
AdaBoostClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis()]
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="6">6. Comparing several machine-learning methods</a></h3>
Logistic regression is not the only classifier that scikit-learn provides. Other well-known methods include SVMs (support vector machines). Let's try several of them and pick the best one.
First, we store the various classifiers in a list named classifiers.
End of explanation
result = []
for name, clf in zip(names, classifiers): # call each of the specified classifiers in turn
    clf.fit(X_train, y_train) # train
    score1 = clf.score(X_train, y_train) # accuracy on the training data
    score2 = clf.score(X_test, y_test) # accuracy on the test data
    result.append([score1, score2]) # store the result
# sort by test accuracy in descending order
df_result = pd.DataFrame(result, columns=['train', 'test'], index=names).sort_values('test', ascending=False)
df_result # check the result
# draw a bar chart
df_result.plot(kind='bar', alpha=0.5, grid=True)
Explanation: Using the training data we just created, let's run each of these classifiers in turn and compute the training accuracy and the test accuracy.
End of explanation
result = []
for trial in range(20): # repeat 20 times
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4) # generate training and test data
    for name, clf in zip(names, classifiers): # call each of the specified classifiers in turn
        clf.fit(X_train, y_train) # train
        score1 = clf.score(X_train, y_train) # accuracy on the training data
        score2 = clf.score(X_test, y_test) # accuracy on the test data
        result.append([name, score1, score2]) # store the result
df_result = pd.DataFrame(result, columns=['classifier', 'train', 'test']) # do not sort the results yet this time
df_result # check the result; note that each classifier now appears multiple times
# group by classifier, compute the mean accuracies, and sort by mean test accuracy in descending order
df_result_mean = df_result.groupby('classifier').mean().sort_values('test', ascending=False)
df_result_mean # check the result
# compute the standard deviation to use for error bars
errors = df_result.groupby('classifier').std()
errors # check the result
# draw a bar chart using the means and standard deviations
df_result_mean.plot(kind='bar', alpha=0.5, grid=True, yerr=errors)
Explanation: Because the training data is created randomly, the accuracy figures change every time it is regenerated, and the ranking of the classifiers can occasionally change as well. A single split is therefore a poor basis for evaluation, so let's regenerate the training data many times and compute the accuracies repeatedly.
End of explanation
# Exercise 5.1
Explanation: We have now used a variety of classifiers to predict whether a wine's quality is good or bad. Each classifier has its own parameters, but in the examples above we used the default values for all of them. With careful parameter tuning you might obtain even better predictive performance. We stop here for now, but it is well worth exploring if you are interested.
<h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;"><a name="4">Exercise 5.1</a></h4>
Carry out the same kind of binary classification with machine learning on the white wine data (https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt).
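As a starting point for the exercise, the white wine file can presumably be read the same way as the red wine data; note that the tab delimiter below is an assumption, so adjust sep if the file uses a different separator:
df2 = pd.read_csv('https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt', sep='\t')
df2.head()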
End of explanation |
13,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tuning an XGBoost model
Load the libraries we will use
Step1: Load the data
The previous ipython notebook already performed the following feature preprocessing:
1. City is dropped because it has too many categories
2. DOB is used to generate an Age field, and the original field is dropped
3. EMI_Loan_Submitted_Missing is 1 when EMI_Loan_Submitted has a value and 0 when it is missing; EMI_Loan_Submitted itself is dropped
4. EmployerName is dropped
5. Missing values of Existing_EMI are filled with the mean
6. Interest_Rate_Missing is handled the same way as EMI_Loan_Submitted
7. Lead_Creation_Date is dropped
8. Loan_Amount_Applied and Loan_Tenure_Applied are filled with the mean
9. Loan_Amount_Submitted_Missing is handled the same way as EMI_Loan_Submitted
10. Loan_Tenure_Submitted_Missing is handled the same way as EMI_Loan_Submitted
11. LoggedIn and Salary_Account are dropped
12. Processing_Fee_Missing is handled the same way as EMI_Loan_Submitted
13. Source - top 2 kept as is and all others combined into different category
14. Numerical transformations and one-hot encoding
Step2: Modeling and cross-validation
Write one large function that does the following
1. Build the model on the data
2. Compute the training accuracy
3. Compute the AUC on the training set
4. Update n_estimators using xgboost cross-validation
5. Plot the feature importances
Step3: Step 1 - find the most suitable number of estimators for a high learning rate
Step4: Tune subsample and colsample_bytree
Step5: tune subsample
Step6: Cross-validate the regularization parameters
import pandas as pd
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 4
Explanation: Tuning an XGBoost model
Load the libraries we will use
End of explanation
train = pd.read_csv('train_modified.csv')
test = pd.read_csv('test_modified.csv')
train.shape, test.shape
target='Disbursed'
IDcol = 'ID'
train['Disbursed'].value_counts()
Explanation: Load the data
The previous ipython notebook already performed the following feature preprocessing:
1. City is dropped because it has too many categories
2. DOB is used to generate an Age field, and the original field is dropped
3. EMI_Loan_Submitted_Missing is 1 when EMI_Loan_Submitted has a value and 0 when it is missing; EMI_Loan_Submitted itself is dropped
4. EmployerName is dropped
5. Missing values of Existing_EMI are filled with the mean
6. Interest_Rate_Missing is handled the same way as EMI_Loan_Submitted
7. Lead_Creation_Date is dropped
8. Loan_Amount_Applied and Loan_Tenure_Applied are filled with the mean
9. Loan_Amount_Submitted_Missing is handled the same way as EMI_Loan_Submitted
10. Loan_Tenure_Submitted_Missing is handled the same way as EMI_Loan_Submitted
11. LoggedIn and Salary_Account are dropped
12. Processing_Fee_Missing is handled the same way as EMI_Loan_Submitted
13. Source - top 2 kept as is and all others combined into different category
14. Numerical transformations and one-hot encoding (a generic sketch of this step is shown below)
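A generic illustration of how the one-hot encoding in step 14 is typically done with pandas; df and the column name 'Source' are placeholders here, not the actual preprocessing code:
dummies = pd.get_dummies(df['Source'], prefix='Source')
df = pd.concat([df.drop('Source', axis=1), dummies], axis=1)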
End of explanation
#test_results = pd.read_csv('test_results.csv')
def modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)
xgtest = xgb.DMatrix(dtest[predictors].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
early_stopping_rounds=early_stopping_rounds, show_progress=False)
alg.set_params(n_estimators=cvresult.shape[0])
    # build the model
alg.fit(dtrain[predictors], dtrain['Disbursed'],eval_metric='auc')
    # predict on the training set
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
    # print some results of the model
print "\n关于现在这个模型"
print "准确率 : %.4g" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions)
print "AUC 得分 (训练集): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob)
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
Explanation: Modeling and cross-validation
Write one large function that does the following
1. Build the model on the data
2. Compute the training accuracy
3. Compute the AUC on the training set
4. Update n_estimators using xgboost cross-validation
5. Plot the feature importances
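For reference, the core of step 4 above can also be run on its own. A minimal sketch (predictors and target as defined around this notebook; xgb_model stands for any XGBClassifier instance such as the one built in the next cell):
xgtrain = xgb.DMatrix(train[predictors].values, label=train[target].values)
cvresult = xgb.cv(xgb_model.get_xgb_params(), xgtrain, num_boost_round=1000, nfold=5, metrics='auc', early_stopping_rounds=50)
print cvresult.shape[0]  # number of boosting rounds kept, used to update n_estimators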
End of explanation
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb1, train, test, predictors)
# use grid search to find the best max_depth and min_child_weight
param_test1 = {
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch1.fit(train[predictors],train[target])
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
# search for the best max_depth and min_child_weight on a finer grid
param_test2 = {
'max_depth':[4,5,6],
'min_child_weight':[4,5,6]
}
gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2.fit(train[predictors],train[target])
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
# use cross-validation to find the most suitable min_child_weight
param_test2b = {
'min_child_weight':[6,8,10,12]
}
gsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2b.fit(train[predictors],train[target])
gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_
# Grid search to choose a suitable gamma
param_test3 = {
'gamma':[i/10.0 for i in range(0,5)]
}
gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch3.fit(train[predictors],train[target])
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb2 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb2, train, test, predictors)
Explanation: Step 1 - find the most suitable number of estimators for a high learning rate
End of explanation
# use grid search to find the most suitable subsample and colsample_bytree
param_test4 = {
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gsearch4 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch4.fit(train[predictors],train[target])
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
Explanation: Tune subsample and colsample_bytree
End of explanation
# same as above, with a finer grid
param_test5 = {
'subsample':[i/100.0 for i in range(75,90,5)],
'colsample_bytree':[i/100.0 for i in range(75,90,5)]
}
gsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch5.fit(train[predictors],train[target])
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
Explanation: tune subsample:
End of explanation
# use grid search to find the most suitable reg_alpha
param_test6 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch6.fit(train[predictors],train[target])
gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
# grid search over a different set of reg_alpha values
param_test7 = {
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gsearch7 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test7, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch7.fit(train[predictors],train[target])
gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_
xgb3 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb3, train, test, predictors)
xgb4 = XGBClassifier(
learning_rate =0.01,
n_estimators=5000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb4, train, test, predictors)
Explanation: Cross-validate the regularization parameters
End of explanation |
13,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs
)# signals into final output layer
        final_outputs = final_inputs # signals from final output layer (the output activation is f(x) = x)
#### Implement the backward pass here ####
### Backward pass ###
# Output error
        output_errors = (targets - final_outputs) # Output layer error is the difference between desired target and actual output.
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1.0 - hidden_outputs) # hidden layer gradients
        # Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += np.dot(hidden_grad * hidden_errors, inputs.T) * self.lr# update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T
)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T
)# signals into final output layer
final_outputs = final_inputs# signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
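A small reminder for the backward pass, following the hint above: since the output activation is $f(x) = x$, its derivative is $f'(x) = 1$, so the error term at the output layer is simply the difference between the target and the network's prediction.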
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 600
learning_rate = 0.001
hidden_nodes = 24
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
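One simple way to compare settings is to rerun the training loop below for a few hidden-layer sizes and record the final validation loss; a rough sketch (not part of the provided template, and the candidate values and learning rate are arbitrary):
for n_hidden in [8, 16, 24, 32]:
    network = NeuralNetwork(train_features.shape[1], n_hidden, 1, 0.005)
    # ... rerun the training loop below with this network and note losses['validation'][-1]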
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions, label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model predicts the first half of the test set fairly well (Dec 11 - 19). There is some underfitting with the current hyperparameters. The second half of the month isn't predicted as well, particularly Dec 24 - 25. This could be due to holidays observed on these days. More years of data or a holiday feature could help with predicting days like this.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
13,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Model fitting in GLS
In this exercise we will fit a linear model using a spatial structure as the covariance matrix.
We will use GLS to obtain better estimators.
As always we will need to load the necessary libraries.
Step1: Use this to automate the process. Be carefull it can overwrite current results
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset and for exemplary purposes we will take a subsample of this data.
Also important.
The empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
Step2: Now we will obtain the data from the calculated empirical variogram.
Step3: Instantiating the variogram object
Step4: Instantiating theoretical variogram model | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
sys.path.append('../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
import tools
Explanation: Spatial Model fitting in GLS
In this exercise we will fit a linear model using a spatial structure as the covariance matrix.
We will use GLS to obtain better estimators.
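For reference, the GLS estimator computed in this exercise has the standard closed form
$$\hat{\beta}_{GLS} = (X^{T}\Sigma^{-1}X)^{-1}X^{T}\Sigma^{-1}y,$$
where $\Sigma$ is the spatial covariance matrix built from the fitted variogram model.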
As always we will need to load the necessary libraries.
End of explanation
from HEC_runs.fit_fia_logbiomass_logspp_GLS import initAnalysis
from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust
section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv",
"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
-130,-60,30,40)
#section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv",
# "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
# -85,-80,30,35)
# IN HEC
#section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35)
section.shape
Explanation: Use this to automate the process. Be careful: it can overwrite current results
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset, and for illustration purposes we will take a subsample of it.
Also important: the empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
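For instance, running either of these in a notebook cell displays the function's source (IPython syntax):
initAnalysis??
prepareDataFrame??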
End of explanation
gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section)
gvg.plot(refresh=False,with_envelope=True)
resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=10,distance_threshold=1000000)
resum.as_text
plt.plot(resultspd.rsq)
plt.title("GLS feedback algorithm")
plt.xlabel("Number of iterations")
plt.ylabel("R-sq fitness estimator")
resultspd.columns
a = map(lambda x : x.to_dict(), resultspd['params'])
paramsd = pd.DataFrame(a)
paramsd
plt.plot(paramsd.Intercept.loc[1:])
plt.get_yaxis().get_major_formatter().set_useOffset(False)
fig = plt.figure(figsize=(10,10))
plt.plot(paramsd.logSppN.iloc[1:])
variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv"
thrs_dist = 100000
emp_var_log_log = pd.read_csv(variogram_data_path)
Explanation: Now we will obtain the data from the calculated empirical variogram.
End of explanation
gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = emp_var_log_log
gvg.empirical = emp_var_log_log.variogram
gvg.lags = emp_var_log_log.lags
#emp_var_log_log = emp_var_log_log.dropna()
#vdata = gvg.envelope.dropna()
Explanation: Instantiating the variogram object
End of explanation
matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.340246718396,range_a=41188.0234423,nugget=0.329937603763,alpha=1.12143687914)
exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
gvg.model = whittle_model
#gvg.model = matern_model
#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])
gvg.fitVariogramModel(whittle_model)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False,with_envelope=True)
plt.plot(xx,gvg.model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
expdat = pd.DataFrame({'x':xx,'tvar':gvg.model.f(xx)})
expdat.to_csv('/outputs/theoretical_var.csv')
def randomSelection(n,p):
idxs = np.random.choice(n,p,replace=False)
    random_sample = section.iloc[idxs]  # sample rows from the section data frame
return random_sample
#################
n = len(section)
p = 3000 # The amount of samples taken (let's do it without replacement)
random_sample = randomSelection(n,100)
Explanation: Instantiating theoretical variogram model
End of explanation |
13,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
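As a quick sanity check of the formula (plain arithmetic): a pixel value of 0 maps to 0.1, 255 maps to 0.9, and, for example, 128 maps to $0.1 + \frac{128 \times 0.8}{255} \approx 0.502$.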
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
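One common variant (optional — shapeless placeholders, as in the cell above, also work here) is to give the placeholders explicit shapes so TensorFlow can catch shape mistakes earlier:
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])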
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
13,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
cs231n case study toy NN example
http://cs231n.github.io/neural-networks-case-study/
Step1: Training a softmax linear classifier
Step2: Softmax loss using cross-entropy
keepdims variable forces the matrix shape !!
- else np.sum results in a 1D vector that gives shape error in / operation with 300x3 matrix exp_scores !!!
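A quick illustrative check of that behaviour (shapes only, not part of the original notes):
import numpy as np
a = np.random.rand(300, 3)
print(np.sum(a, axis=1).shape)                 # (300,)  -> broadcasting error in a / row_sums
print(np.sum(a, axis=1, keepdims=True).shape)  # (300, 1) -> divides each row cleanly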
Step3: We now have an array probs of size [300 x 3], where each row now contains the class probabilities. In particular, since we’ve normalized them every row now sums to one. We can now query for the log probabilities assigned to the correct classes in each example
Step4: Evaluating this in the beginning (with random parameters) might give us loss = 1.1, which is -np.log(1.0/3), since with small initial random weights all probabilities assigned to all classes are about one third. We now want to make the loss as low as possible, with loss = 0 as the absolute lower bound. But the lower the loss is, the higher are the probabilities assigned to the correct classes for all examples.
Let's optimize the loss by computing the gradient
$\frac{\partial L_i }{ \partial f_k } = p_k - \mathbb{1}(y_i = k)$
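As a small worked example of this formula (numbers are illustrative only): for one example with class probabilities [0.2, 0.5, 0.3] and correct class 1, the gradient on the scores is [0.2, -0.5, 0.3].
p = np.array([0.2, 0.5, 0.3])   # probabilities for one example
y_i = 1                         # correct class
dfk = p.copy()
dfk[y_i] -= 1                   # p_k - 1(y_i = k)
print(dfk)                      # [ 0.2 -0.5  0.3]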
Step5: Let's update parameters
Step6: Full code
Step7: Training set accuracy
Step8: Training a 1-layer neural network
Step10: Full code
Step11: plot the decision boundaries | Python Code:
import numpy as np
# for quick visualization in notebook
import matplotlib.pyplot as plt
%matplotlib inline
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in xrange(K):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
# lets visualize the data:
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1,1])
plt.ylim([-1,1])
print np.c_.__doc__
Explanation: cs231n case study toy NN example
http://cs231n.github.io/neural-networks-case-study/
End of explanation
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
num_examples = N*K
# compute class scores for a linear classifier
scores = np.dot(X, W) + b
print(scores.shape)
print(scores[50])
Explanation: Training a softmax linear classifier
End of explanation
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# get unnormalized probabilities
exp_scores = np.exp(scores)
# normalize them for each example
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
print(probs.shape)
print(probs[50])
print(range(4))
Explanation: Softmax loss using cross-entropy
keepdims variable forces the matrix shape !!
- else np.sum results in a 1D vector that gives shape error in / operation with 300x3 matrix exp_scores !!!
End of explanation
correct_logprobs = -np.log(probs[range(N*K),y])
print(correct_logprobs.shape)
# compute the loss: average cross-entropy loss and regularization
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
print(loss)
Explanation: We now have an array probs of size [300 x 3], where each row now contains the class probabilities. In particular, since we’ve normalized them every row now sums to one. We can now query for the log probabilities assigned to the correct classes in each example:
$L_i = -\log\left(\frac{e^{f_{y_i}}}{ \sum_j e^{f_j} }\right)$
End of explanation
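# small illustrative check (not from the original notes): the cross-entropy loss of a
# single row is just the negative log probability assigned to the correct class
toy_probs = np.array([[0.2, 0.5, 0.3],
                      [0.7, 0.2, 0.1]])
toy_y = np.array([1, 0])
print(-np.log(toy_probs[range(2), toy_y]))   # approx [0.69, 0.36]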
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
print(dscores.shape)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # don't forget the regularization gradient
Explanation: Evaluating this in the beginning (with random parameters) might give us loss = 1.1, which is -np.log(1.0/3), since with small initial random weights all probabilities assigned to all classes are about one third. We now want to make the loss as low as possible, with loss = 0 as the absolute lower bound. But the lower the loss is, the higher are the probabilities assigned to the correct classes for all examples.
Let's optimize the loss by computing the gradient
$\frac{\partial L_i }{ \partial f_k } = p_k - \mathbb{1}(y_i = k)$
End of explanation
# perform a parameter update
W += -step_size * dW
b += -step_size * db
Explanation: Let's update parameters
End of explanation
#Train a Linear Classifier
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
for i in xrange(200):
# evaluate class scores, [N x K]
scores = np.dot(X, W) + b
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
if i % 10 == 0:
print "iteration %d: loss %f" % (i, loss)
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # regularization gradient
# perform a parameter update
W += -step_size * dW
b += -step_size * db
Explanation: Full code
End of explanation
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#fig.savefig('spiral_linear.png')
Explanation: Training set accuracy
End of explanation
# initialize parameters randomly
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
# evaluate class scores with a 2-layer Neural Network
hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
scores = np.dot(hidden_layer, W2) + b2
# backpropate the gradient to the parameters
# first backprop into parameters W2 and b2
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
dhidden = np.dot(dscores, W2.T)
# finally into W,b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
Explanation: Training a 1-layer neural network
End of explanation
# initialize parameters randomly
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
for i in xrange(10000):
# evaluate class scores, [N x K]
hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
scores = np.dot(hidden_layer, W2) + b2
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
loss = data_loss + reg_loss
if i % 1000 == 0:
print "iteration %d: loss %f" % (i, loss)
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters
# first backprop into parameters W2 and b2
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# next backprop into hidden layer
dhidden = np.dot(dscores, W2.T)
# backprop the ReLU non-linearity
dhidden[hidden_layer <= 0] = 0
# finally into W,b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
# add regularization gradient contribution
dW2 += reg * W2
dW += reg * W
# perform a parameter update
W += -step_size * dW
b += -step_size * db
W2 += -step_size * dW2
b2 += -step_size * db2
print(predicted_class.shape)
# evaluate training set accuracy
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
# write forward pass into predict function
def predict(X):
    """
    Input: X is matrix of NxD with N samples each of dimension D
    Output: predicted_class is vector of length N (1 prediction per sample)
    """
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
return predicted_class
Explanation: Full code
End of explanation
# find arg across k where scores is max
Z = np.argmax(scores, axis=1) # class predictions 0,1,2
print(Z.shape, Z.size)
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in xrange(K):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
# lets visualize the data:
fig = plt.figure()
fig.set_size_inches(10,7)
# Put the probability scores into a color plot with training samples on it
# Plotting decision regions
h=0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
Z = predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.8, cmap=plt.cm.Spectral)
# plot training samples
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#fig.savefig('spiral_net.png')
Explanation: plot the decision boundaries
End of explanation |
13,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Transformation
This exploratory analysis looks at how long it takes for a given open case to be closed.
Step1: Transforming Column Names
Step2: Transforming Open and Close Dates
Changing Granularity
Step3: Adding new features
delta
Step4: Transforming Localization
Latitude and Longitude
Step5: Localization Grid (in Km)
Useful for applying unsupervised learning (clustering).
loc_x
Step6: Neighborhood
The neighborhood might be important because, even if a given case $C_1$ is closer to a case $C_2$, it might be more affected by a case $C_3$ that is farther away but in the same neighborhood (due to policies, strategic location, ...).
Interesting metrics that could be added in the future
Step7: To avoid the dummy-variable trap (and the curse of dimensionality), we should drop one of the columns when applying one-hot encoding, depending on the classification method to be used. However, since we are not using a LogisticRegression-like classifier, we'll add all features.
Step8: Category
Step9: Request Type
The Request Type is strongly associated with the category, meaning that the category 'Abandoned Vehicle' is always correlated with request types 'Abandoned Vehicle - Car2door' and 'Car4door' ...
A minor exception is the difference between 'routine' and 'emergency'.
For the sake of simplicity, we decided to remove request_type
Step10: Source | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import bokeh
from bokeh.io import output_notebook
output_notebook()
import os
DATA_STREETLIGHT_CASES_URL = 'https://data.sfgov.org/api/views/c53t-rr3f/rows.json?accessType=DOWNLOAD'
DATA_STREETLIGHT_CASES_LOCAL = 'DATA_STREETLIGHT_CASES.json'
data_path = DATA_STREETLIGHT_CASES_URL
if os.path.isfile(DATA_STREETLIGHT_CASES_LOCAL):
data_path = DATA_STREETLIGHT_CASES_LOCAL
import urllib, json
def _load_data(url):
response = urllib.urlopen(url)
raw_data = json.loads(response.read())
columns = [col['name'] for col in raw_data['meta']['view']['columns']]
rows = raw_data['data']
return pd.DataFrame(data=rows, columns=columns)
df = _load_data(data_path)
Explanation: Data Transformation
This exploratory analysis looks at how long it takes for a given open case to be closed.
End of explanation
df.columns = [col.lower().replace(' ', '_') for col in df.columns]
df.columns
Explanation: Transforming Column Names
End of explanation
df['opened'] = pd.to_datetime(df.opened)
df['opened_dayofweek'] = df.opened.dt.dayofweek
df['opened_month'] = df.opened.dt.month
df['opened_year'] = df.opened.dt.year
df['opened_dayofmonth'] = df.opened.dt.day
df['closed'] = pd.to_datetime(df.closed)
df['closed_dayofweek'] = df.closed.dt.dayofweek
df['closed_month'] = df.closed.dt.month
df['closed_year'] = df.closed.dt.year
df['closed_dayofmonth'] = df.closed.dt.day
Explanation: Transforming Open and Close Dates
Changing Granularity
End of explanation
df['delta'] = (df.closed - df.opened).dt.days
df['is_open'] = pd.isnull(df.closed)
df['opened_weekend'] = df.opened_dayofweek >= 5
df['closed_weekend'] = df.closed_dayofweek >= 5
df['target'] = df.delta <= 2
Explanation: Adding new features
delta: int, time (in days) taken for a given case to be closed
is_open: boolean, defines if a given case is still opened
opened_weekend: boolean, whether a given case was opened on a weekend
closed_weekend: boolean, whether a given case was closed on a weekend
End of explanation
from geopy.distance import vincenty
df['latitude'] = df.point.apply(lambda e: float(e[1]))
df['longitude'] = df.point.apply(lambda e: float(e[2]))
Explanation: Transforming Localization
Latitude and Longitude
End of explanation
min_lat, max_lat = min(df.latitude), max(df.latitude)
min_lng, max_lng = min(df.longitude), max(df.longitude)
def grid(lat, lng):
x = vincenty((lat, min_lng), (lat, lng)).miles
y = vincenty((min_lat, lng), (lat, lng)).miles
return x, y
xy = [grid(lat, lng) for lat, lng in zip(df.latitude.values, df.longitude.values)]
df['loc_x'] = np.array(xy)[:,0]
df['loc_y'] = np.array(xy)[:,1]
Explanation: Localization Grid (in Km)
Useful for applying unsupervised learning (clustering).
loc_x: horizontal distance to the leftmost case
loc_y: vertical distance to the lowermost case
WARNING: To make this transformation more accurate, we should use the grid of maximum and minimum coordinates of San Francisco.
For the sake of simplicity, we'll use the leftmost and lowermost case.
End of explanation
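# Illustrative sketch only (not part of the original notebook): the loc_x/loc_y grid
# features could feed a clustering step, e.g. k-means with scikit-learn. The number of
# clusters below is an arbitrary assumption.
from sklearn.cluster import KMeans

km = KMeans(n_clusters=8, random_state=0)
df['loc_cluster'] = km.fit_predict(df[['loc_x', 'loc_y']])
df.loc_cluster.value_counts().head()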
df.neighborhood.unique()
Explanation: Neighborhood
The neighborhood might be important because, even if a given case $C_1$ is closer to a case $C_2$, it might be more affected by a case $C_3$ that is farther away but in the same neighborhood (due to policies, strategic location, ...).
Interesting metrics that could be added in the future:
- distance to center: how distant is the case from the center of the neighborhood in which it is located.
End of explanation
dummies = pd.get_dummies(df.neighborhood.str.replace(' ', '_').str.lower(), prefix='neigh_', drop_first=False)
dummies.head()
#del df['neighborhood']
df[dummies.columns] = dummies
Explanation: To avoid the dummy-variable trap (and the curse of dimensionality), we should drop one of the columns when applying one-hot encoding, depending on the classification method to be used. However, since we are not using a LogisticRegression-like classifier, we'll add all features.
End of explanation
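# Illustrative only (not part of the original analysis): for a linear / logistic-style
# classifier we would instead drop one dummy column to avoid the dummy-variable trap
toy = pd.DataFrame({'neighborhood': ['Mission', 'Sunset', 'Mission', 'Marina']})
pd.get_dummies(toy.neighborhood, prefix='neigh_', drop_first=True)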
df.category.unique()
dummies = pd.get_dummies(df.category.str.replace(' ', '_').str.lower(), prefix='cat_', drop_first=False)
dummies.head()
#del df['category']
df[dummies.columns] = dummies
Explanation: Category
End of explanation
df.request_type.unique()
tmp = df[['request_type', 'category', 'delta', 'target']]
tmp = tmp.dropna()
vc = tmp.request_type.value_counts()
tmp.loc[vc[tmp.request_type].values < 50, 'request_type'] = 'Others'
pivot = tmp.pivot_table(index='request_type', columns='category', values='target',
aggfunc=sum, fill_value=0)
plt.figure(figsize=(10,6))
sns.heatmap(pivot.astype(int), annot=True, fmt="d", linewidths=.5)
Explanation: Request Type
The Request Type is strongly associated with the category, meaning that the category 'Abandoned Vehicle' is always correlated with request types 'Abandoned Vehicle - Car2door' and 'Car4door' ...
A minor exception is the difference between 'routine' and 'emergency'.
For the sake of simplicity, we decided to remove request_type
End of explanation
dummies = pd.get_dummies(df.source.str.replace(' ', '_').str.lower(), prefix='source_', drop_first=False)
df[dummies.columns] = dummies
df['status'] = df.status == 'Closed'
original_columns = [u'sid', u'id', u'position', u'created_at', u'created_meta',
u'updated_at', u'updated_meta', u'meta', u'caseid', u'opened',
u'closed', u'status', u'responsible_agency', u'address', u'category',
u'request_type', u'request_details', u'source', u'supervisor_district',
u'neighborhood', u'updated', u'point']
del df['sid']
del df['id']
del df['position']
del df['created_at']
del df['created_meta']
del df['updated_at']
del df['meta']
del df['caseid']
del df['address']
del df['responsible_agency']
del df['request_details']
del df['request_type']
del df['status']
del df['updated']
del df['supervisor_district']
del df['point']
Explanation: Source
End of explanation |
13,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of large set of Abl simulations on Folding@home (project 10468), one starting configuration
May 1, 2015
This is some initial MSM building for the Abl simulations.
Section 0
Step1: The timestep for these simulations is 2 fs (can be found in /data/choderalab/fah/initial-models/projects/ABL1_HUMAN_D0_V1/RUN0/integrator.xml [stepSize=".002"]).
Assuming the write frequency is every 125000 steps (can't find project.xml, assuming same as for MEK etc. projects). This means that each frame is 250 ps.
Step2: Load all trajectories > 1 us.
How many frames is 1us? 1000/.25 = 4000 frames!
Step3: Section 1
Step4: Section 2 | Python Code:
#Import libraries
import matplotlib.pyplot as plt
import mdtraj as md
import glob
import numpy as np
from msmbuilder.dataset import dataset
%pylab inline
#Import longest trajectory.
t = md.load("run0-clone35.h5")
Explanation: Analysis of large set of Abl simulations on Folding@home (project 10468), one starting configuration
May 1, 2015
This is some initial MSM building for the Abl simulations.
Section 0: Longest Sim
End of explanation
frame = np.arange(len(t))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of this longest simulation of Abl is %s us." % ''.join(map(str, sim_time))
rmsd = md.rmsd(t,t,frame=0)
plt.plot(time, rmsd)
plt.xlabel('time (ns)')
plt.ylabel('RMSD(nm)')
plt.title('RMSD')
Explanation: The timestep for these simulations is 2 fs (can be found in /data/choderalab/fah/initial-models/projects/ABL1_HUMAN_D0_V1/RUN0/integrator.xml [stepSize=".002"]).
Assuming the write frequency is every 125000 steps (can't find project.xml, assuming same as for MEK etc. projects). This means that each frame is 250 ps.
End of explanation
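# Quick unit check of the assumptions above (2 fs timestep, one frame written every
# 125000 steps): spacing per frame and frames per microsecond.
timestep_fs = 2.0
steps_per_frame = 125000
frame_spacing_ns = timestep_fs * steps_per_frame * 1e-6   # 0.25 ns = 250 ps per frame
frames_per_us = 1000.0 / frame_spacing_ns                 # 4000 frames per microsecond
print frame_spacing_ns, frames_per_us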
# For now making dir long_sims in bash using:
# > for file in $(find * -type f -size +300000); do cp $file long_sims/$file; done
filenames = glob.glob("run0*.h5")
trajectories = [md.load(filename) for filename in filenames]
len(trajectories)
No_sims = len(trajectories)
print "There are %s sims in this. The shortest one is run0-clone338.h5." % No_sims
t_long_min = md.load("run0-clone338.h5")
frame = np.arange(len(t_long_min))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of run0-clone338.h5 %s us." % ''.join(map(str, sim_time))
#NOT DOING THIS FOR NOW
#frame = np.arange(len(trajectories))[:, np.newaxis]
# Using 0.25 so that units are in ns.
#time = frame * .250
#sim_time = time[-1] * 1e-3
#print "The total length of all these long sims is %s us." % ''.join(map(str, sim_time))
Explanation: Load all trajectories > 1 us.
How many frames is 1us? 1000/.25 = 4000 frames!
End of explanation
from msmbuilder import msm, featurizer, utils, decomposition
# Make dihedral_features
dihedrals = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform(trajectories)
# Make tICA features
tica = decomposition.tICA(n_components = 4)
X = tica.fit_transform(dihedrals)
#Note the default lagtime here is 1 (=250ps),
#which is super short according to lit for building reasonable protein MSM.
Xf = np.concatenate(X)
hexbin(Xf[:,0], Xf[:, 1], bins='log')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
savefig("abl_10467_msm.png", bbox_inches="tight")
Explanation: Section 1: Building an MSM.
End of explanation
#Load trajectory with ensembler models
t_models = md.load("../../ensembler-models/traj-refine_implicit_md.xtc", top = "../../ensembler-models/topol-renumbered-implicit.pdb")
#Now make dihedrals of this.
dihedrals_models = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform([t_models])
x_models = tica.transform(dihedrals_models)
#do not use fit here because don't want to change tica object, want to use one generated from sims.
#Now plot on the slow MSM features found above.
hexbin(Xf[:,0], Xf[:, 1], bins='log')
plot(x_models[0][:, 0], x_models[0][:, 1], 'o', markersize=5, label="ensembler models", color='white')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
legend(loc=0)
savefig("abl_10467_msm_wmodels.png", bbox_inches="tight")
Explanation: Section 2: Comparing MSM to Danny's ensembler outputs.
End of explanation |
13,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recovering rotation periods in simulated LSST data
Step1: Randomly select targets from a TRILEGAL output.
Step2: Calculate periods from ages and colours for cool stars
Step3: Draw from a sum of two Gaussians (modelled in another notebook) that describes the period distribution for hot stars. Approximations
Step4: Make histograms of the ages and periods
Step5: Use Derek's results to calculate amplitudes
Step10: Assign amplitudes
Step11: Simulate light curves
Step12: Load and plot an example light curve
Step13: Compute a periodogram
Step14: Now compute LS pgrams for a set of LSST light curves and save the highest peak .
Step15: Save the data
Step16: Plot the recovered periods vs the true periods.
Step17: Decide whether the recovery was successful or not | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from gatspy.periodic import LombScargle
import sys
%matplotlib inline
from toy_simulator import simulate_LSST
from trilegal_models import random_stars
import simple_gyro as sg
import pandas as pd
Explanation: Recovering rotation periods in simulated LSST data
End of explanation
fname = "output574523944248.dat"
N = 100
logAges, bvs, logTeff, rmag = random_stars(fname, N)
teff = 10**logTeff
Explanation: Randomly select targets from a TRILEGAL output.
End of explanation
m = bvs > .4 # select only cool stars
cool_ages = 10**logAges[m] * 1e-9
cool_ps = sg.period(cool_ages, bvs[m])
cool_teffs = teff[m]
cool_rmags = rmag[m]
Explanation: Calculate periods from ages and colours for cool stars
End of explanation
hot_ages = 10**logAges[~m] * 1e-9 # select hot stars
hot_teffs = teff[~m]
hot_rmags = rmag[~m]
# copy parameters for two Gaussians from hot_stars ipython notebook
A1, A2, mu1, mu2, sig1, sig2 = 254.11651209, 49.8149765, 3.00751724, 3.73399554, 2.26525979, 8.31739725
hot_ps = np.zeros_like(hot_ages)
hot_ps1 = np.random.randn(int(len(hot_ages)*(1 - A2/A1)))*sig1 + mu1 # mode 1
hot_ps2 = np.random.randn(int(len(hot_ages)*(A2/A1)))*sig2 + mu2 # mode 2
hot_ps[:len(hot_ps1)] = hot_ps1
hot_ps[len(hot_ps1):len(hot_ps1) + len(hot_ps2)] = hot_ps2  # slot the mode 2 draws in after the mode 1 draws
tot = len(hot_ps1) + len(hot_ps2)
hot_ps[tot:] = np.random.randn(len(hot_ps)-tot)*sig2 + mu2 # make up the total number of Ps
# combine the modes
age = np.concatenate((cool_ages, hot_ages))
ps = np.concatenate((cool_ps, hot_ps))
teff = np.concatenate((cool_teffs, hot_teffs))
rmag = np.concatenate((cool_rmags, hot_rmags))
Explanation: Draw from a sum of two Gaussians (modelled in another notebook) that describes the period distribution for hot stars. Approximations: I have lumped all stars with colour < 0.4 in together AND I actually used teff = 6250, not B-V = 0.4 in the other notebook.
End of explanation
plt.hist(age)
plt.xlabel("Age (Gyr)")
plt.hist(ps)
plt.xlabel("Period (days)")
plt.hist(rmag)
plt.xlabel("r mag")
## Arrays of random (log-normal) periods and (uniform) amplitudes.
#min_period, max_period = 1, 100 # days
#ps = np.exp(np.random.uniform(np.log(min_period), np.log(max_period), N)) # periods
#amps = np.random.uniform(10, 300, N) # ppm
Explanation: Make histograms of the ages and periods
End of explanation
# Column headings: log10P, log10R, stdR, Nbin
teff_bins = [3500, 4000, 4500, 5000, 5500, 6000]
d35 = pd.read_csv("data/rot_v_act3500.txt")
d40 = pd.read_csv("data/rot_v_act4000.txt")
d45 = pd.read_csv("data/rot_v_act4500.txt")
d50 = pd.read_csv("data/rot_v_act5000.txt")
d55 = pd.read_csv("data/rot_v_act5500.txt")
d60 = pd.read_csv("data/rot_v_act6000.txt")
plt.step(d35["log10P"], d35["log10R"], label="T=3500")
plt.step(d40["log10P"], d40["log10R"], label="T=4000")
plt.step(d45["log10P"], d45["log10R"], label="T=4500")
plt.step(d50["log10P"], d50["log10R"], label="T=5000")
plt.step(d55["log10P"], d55["log10R"], label="T=5500")
plt.step(d60["log10P"], d60["log10R"], label="T=6000")
plt.legend()
plt.xlabel("log Period")
plt.ylabel("log Range")
Explanation: Use Derek's results to calculate amplitudes
End of explanation
def find_nearest(array, value):
    """
    Match a period to a bin.
    array: array of bin heights.
    value: the period of the star.
    Returns the value and index of the bin.
    """
m = np.abs(array-value) == np.abs(array-value).min()
return array[m], m
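# quick illustrative check of find_nearest: it returns the matched bin value and the
# boolean mask selecting that bin (here 12.0 is matched to the 10.0 bin)
toy_bins = np.array([1.0, 10.0, 100.0])
print(find_nearest(toy_bins, 12.0))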
def assign_amps(ps, log10P, log10R, stdR):
    """Take periods and bin values and return an array of amplitudes."""
npi = np.array([find_nearest(10**log10P, p) for p in ps]) # match periods to bins
nearest_ps, inds = npi[:, 0], npi[:, 1]
log_ranges = np.array([log10R[i] for i in inds])[:, 0] # array of ranges for each *
std_ranges = np.array([stdR[i] for i in inds])[:, 0] # array of stdevs in range for each *
return np.random.randn(len(ps))*std_ranges + log_ranges # draw amps from Gaussians
def make_arrays(data, temp_bin):
    """Amplitude arrays for each temperature bin."""
P, R, std = np.array(data["log10P"]), np.array(data["log10R"]), np.array(data["stdR"])
if temp_bin == 3500:
m = teff < 3750
elif temp_bin == 6000:
m = teff > 6000
else:
m = (temp_bin - 250 < teff) * (teff < temp_bin + 250)
periods, teffs, rmags = ps[m], teff[m], rmag[m]
amplitudes = assign_amps(periods, P, R, std)
return periods, amplitudes, teffs, rmags
def LSST_sig(m):
    """
    Approximate the noise in figure 2 of arxiv:1603.06638 from the apparent r-mag.
    Returns the noise in magnitudes and ppm.
    """
if m < 19:
return .005
mags = np.array([19, 20, 21, 22, 23, 24, 25])
sigs = np.array([.005, .007, .01, .02, .03, .1, .2])
return sigs[np.abs(mags - m) == np.abs(mags-m).min()][0]
pers, logamps, teffs, rmags = np.concatenate((make_arrays(d35, 3500), make_arrays(d40, 4000),
make_arrays(d45, 4500), make_arrays(d50, 5000),
make_arrays(d55, 5500), make_arrays(d60, 6000)), axis=1)
amps = 10**logamps # parts per million
noise = LSST_sig(rmag[0])
noises_mag = np.array([LSST_sig(mag) for mag in rmags])
noises_ppm = (1 - 10**(-noises_mag/2.5)) * 1e6
Explanation: Assign amplitudes
End of explanation
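# sanity check of the magnitude -> ppm conversion used above: a 0.005 mag noise level
# corresponds to roughly 4.6e3 ppm
print((1 - 10 ** (-0.005 / 2.5)) * 1e6)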
%%capture
# amps = np.random.uniform(10, 300, N) # ppm
path = "simulations" # where to save the lcs
[simulate_LSST(i, pers[i], amps[i], path, noises_ppm[i]) for i in range(len(pers))] # simulations
# save the true values
ids = np.arange(len(pers))
data = np.vstack((ids, pers, amps))
np.savetxt("{0}/truth.txt".format(path), data.T)
Explanation: Simulate light curves
End of explanation
id = 0
sid = str(int(id)).zfill(4)
path = "results" # where to save results
x, y, yerr = np.genfromtxt("simulations/{0}.txt".format(sid)).T # load a fake light curve
plt.errorbar(x, y, yerr=yerr, fmt="k.", capsize=0)
Explanation: Load and plot an example light curve
End of explanation
ps = np.linspace(2, 100, 1000) # the period array (in days)
model = LombScargle().fit(x, y, yerr)
pgram = model.periodogram(ps)
# find peaks
peaks = np.array([i for i in range(1, len(ps)-1) if pgram[i-1] < pgram[i] and pgram[i+1] < pgram[i]])
if len(peaks):
period = ps[pgram==max(pgram[peaks])][0]
else: period = 0
plt.plot(ps, pgram) # plot the pgram
plt.axvline(period, color="r") # plot the position of the highest peak
# load and plot the truth
ids, true_ps, true_as = np.genfromtxt("simulations/truth.txt").T
plt.axvline(true_ps[id], color="g") # plot the position of the highest peak
print(period, true_ps[id])
Explanation: Compute a periodogram
End of explanation
ids = np.arange(len(pers))
periods = np.zeros_like(ids)
for i, id in enumerate(ids):
sid = str(int(id)).zfill(4)
x, y, yerr = np.genfromtxt("simulations/{0}.txt".format(sid)).T # load a fake light curve
model = LombScargle().fit(x, y, yerr) # compute pgram
pgram = model.periodogram(ps)
# find peaks
peaks = np.array([i for i in range(1, len(ps)-1) if pgram[i-1] < pgram[i] and pgram[i+1] < pgram[i]])
if len(peaks):
period = ps[pgram==max(pgram[peaks])][0]
else: period = 0
periods[i] = period
Explanation: Now compute LS pgrams for a set of LSST light curves and save the highest peak .
End of explanation
data = np.vstack((true_ps, periods, teffs, rmags, true_as, noises_ppm))
np.savetxt("rotation_results{0}.txt".format(fname), data.T)
Explanation: Save the data
End of explanation
plt.plot(true_ps, periods, "k.")
xs = np.linspace(min(true_ps), max(true_ps), 100)
plt.plot(xs, xs, "r")
tau = .1 # the recovery must be within a factor of *threshold* of the truth
plt.plot(xs, xs-tau*xs, "r--")
plt.plot(xs, xs+tau*xs, "r--")
Explanation: Plot the recovered periods vs the true periods.
End of explanation
m = (true_ps - tau*true_ps < periods) * (periods < true_ps + tau*true_ps)
plt.hist(true_ps, 15, color="b", label="all")
plt.hist(true_ps[m], 15, color="r", alpha=.5, label="recovered")
plt.legend(loc="best")
print(len(true_ps), "injected", len(true_ps[m]), "recovered")
print(len(true_ps[m])/len(true_ps)*100, "percent success")
Explanation: Decide whether the recovery was successful or not
End of explanation |
13,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe
Step2: Make plot | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
Explanation: Title: Stacked Bar Plot In MatPlotLib
Slug: matplotlib_stacked_bar_plot
Summary: Stacked Bar Plot In MatPlotLib
Date: 2016-05-01 12:00
Category: Python
Tags: Data Visualization
Authors: Chris Albon
Based on: Sebastian Raschka.
Preliminaries
End of explanation
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'pre_score': [4, 24, 31, 2, 3],
'mid_score': [25, 94, 57, 62, 70],
'post_score': [5, 43, 23, 23, 51]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'pre_score', 'mid_score', 'post_score'])
df
Explanation: Create dataframe
End of explanation
# Create the general blog and the "subplots" i.e. the bars
f, ax1 = plt.subplots(1, figsize=(10,5))
# Set the bar width
bar_width = 0.75
# positions of the left bar-boundaries
bar_l = [i+1 for i in range(len(df['pre_score']))]
# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]
# Create a bar plot, in position bar_1
ax1.bar(bar_l,
# using the pre_score data
df['pre_score'],
# set the width
width=bar_width,
# with the label pre score
label='Pre Score',
# with alpha 0.5
alpha=0.5,
# with color
color='#F4561D')
# Create a bar plot, in position bar_1
ax1.bar(bar_l,
# using the mid_score data
df['mid_score'],
# set the width
width=bar_width,
# with pre_score on the bottom
bottom=df['pre_score'],
# with the label mid score
label='Mid Score',
# with alpha 0.5
alpha=0.5,
# with color
color='#F1911E')
# Create a bar plot, in position bar_1
ax1.bar(bar_l,
# using the post_score data
df['post_score'],
# set the width
width=bar_width,
# with pre_score and mid_score on the bottom
bottom=[i+j for i,j in zip(df['pre_score'],df['mid_score'])],
# with the label post score
label='Post Score',
# with alpha 0.5
alpha=0.5,
# with color
color='#F1BD1A')
# set the x ticks with names
plt.xticks(tick_pos, df['first_name'])
# Set the label and legends
ax1.set_ylabel("Total Score")
ax1.set_xlabel("Test Subject")
plt.legend(loc='upper left')
# Set a buffer around the edge
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
Explanation: Make plot
End of explanation |
13,764 | Given the following text description, write Python code to implement the functionality described.
Description:
Number of subarrays that have bitwise OR >= K
Function to return the count of required sub-arrays; traverse each sub-array [i..j]; driver code
| Python Code:
def countSubArrays(arr, n, K):
    count = 0
    for i in range(n):
        for j in range(i, n):
            bitwise_or = 0
            for k in range(i, j + 1):
                bitwise_or = bitwise_or | arr[k]
            if bitwise_or >= K:
                count += 1
    return count


if __name__ == "__main__":
    arr = [3, 4, 5]
    n = len(arr)
    k = 6
    print(countSubArrays(arr, n, k))
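# Illustrative alternative (not from the original write-up): since OR never decreases
# as a sub-array grows, once arr[i..j] reaches K every longer sub-array starting at i
# also qualifies, which brings the count down to O(n^2) time.
# e.g. countSubArraysFast([3, 4, 5], 3, 6) == 2, matching countSubArrays above.
def countSubArraysFast(arr, n, K):
    count = 0
    for i in range(n):
        bitwise_or = 0
        for j in range(i, n):
            bitwise_or |= arr[j]
            if bitwise_or >= K:
                # arr[i..j], arr[i..j+1], ..., arr[i..n-1] all qualify
                count += n - j
                break
    return count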
|
13,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DTOcean Electrical Sub-Systems Example
Note, this example assumes the Electrical Sub-Systems Module has been installed
Step1: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
Step2: Create a new project
Step3: Set the device type
Step4: Initiate the pipeline
This step will be important when the database is incorporated into the system as it will affect the operation of the pipeline.
Step5: Discover available modules
Step6: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
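As an illustrative sketch only (module names other than 'Electrical Sub-Systems' are hypothetical), several modules could be activated in order with the same menu object:
for module_name in ["Hydrodynamics", "Electrical Sub-Systems"]:
    if module_name in module_menu.get_available(new_core, new_project):
        module_menu.activate(new_core, new_project, module_name)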
Step7: Check the status of the module inputs
Step8: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
Step9: Load test data
Prepare the test data for loading. The test_data directory of the source code should be copied to the directory from which the notebook is running. When the Python file is run, a pickle file is generated containing a dictionary of inputs.
Step10: Check if the module can be executed
Step11: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
Step12: Examine the results
Currently, there is no robustness built into the core, so the assumption is that the module executed successfully. This will have to be improved towards deployment of the final software.
Let's check the updated annual output of the farm, using just information in the data object. | Python Code:
%matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import ModuleMenu, ProjectMenu
from dtocean_core.pipeline import Tree
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
for name, status in x.iteritems():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
# Bring up the logger
start_logging()
Explanation: DTOcean Electrical Sub-Systems Example
Note, this example assumes the Electrical Sub-Systems Module has been installed
End of explanation
new_core = Core()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
pipe_tree = Tree()
Explanation: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
End of explanation
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
Explanation: Create a new project
End of explanation
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Wave Floating")
my_var.read(new_core, new_project)
Explanation: Set the device type
End of explanation
project_menu.initiate_pipeline(new_core, new_project)
Explanation: Initiate the pipeline
This step will be important when the database is incorporated into the system as it will affect the operation of the pipeline.
End of explanation
names = module_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
Explanation: Discover available modules
End of explanation
module_name = 'Electrical Sub-Systems'
module_menu.activate(new_core, new_project, module_name)
Explanation: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
End of explanation
electrical_branch = pipe_tree.get_branch(new_core, new_project, 'Electrical Sub-Systems')
input_status = electrical_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
Explanation: Check the status of the module inputs
End of explanation
project_menu.initiate_dataflow(new_core, new_project)
Explanation: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
End of explanation
%run test_data/inputs_wp3.py
electrical_branch.read_test_data(new_core,
new_project,
"test_data/inputs_wp3.pkl")
Explanation: Load test data
Prepare the test data for loading. The test_data directory of the source code should be copied to the directory from which the notebook is running. When the Python file is run, a pickle file is generated containing a dictionary of inputs.
End of explanation
can_execute = module_menu.is_executable(new_core, new_project, module_name)
display(can_execute)
input_status = electrical_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
Explanation: Check if the module can be executed
End of explanation
module_menu.execute_current(new_core, new_project)
Explanation: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
End of explanation
array_efficiency = new_core.get_data_value(new_project, "project.array_efficiency")
meta = new_core.get_metadata("project.array_efficiency")
name = meta.title
value = array_efficiency
message_two = "<p><b>{}:</b> <i>{}</i></p>".format(name, value)
HTML(message_two)
electrical_economics = new_core.get_data_value(new_project, "project.electrical_economics_data")
electrical_economics
umbilicals = new_core.get_data_value(new_project, "project.umbilical_cable_data")
umbilicals
Explanation: Examine the results
Currently, there is no robustness built into the core, so the assumption is that the module executed successfully. This will have to be improved towards deployment of the final software.
Let's check the updated annual output of the farm, using just information in the data object.
End of explanation |
13,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Pipelines
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step13: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step14: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step15: Vertex AI Pipelines constants
Set up the following constants for Vertex AI Pipelines
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step18: Define AutoML image classification model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML image classification Model resource using a Dataset resource.
Step19: Compile the pipeline
Next, compile the pipeline.
Step20: Run the pipeline
Next, run the pipeline.
Step21: Click on the generated link to see your run in the Cloud Console.
| Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI Pipelines: AutoML image classification pipelines using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components to build an AutoML image classification workflow on Vertex AI Pipelines.
Dataset
The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
Objective
In this tutorial, you create an AutoML image classification using a pipeline with components from google_cloud_pipeline_components.
The steps performed include:
Create a Dataset resource.
Train an AutoML Model resource.
Creates an Endpoint resource.
Deploys the Model resource to the Endpoint resource.
The components are documented here.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex AI SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip3 install $USER kfp google-cloud-pipeline-components --upgrade
Explanation: Install the latest GA version of google-cloud-pipeline-components library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace('*', '').strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/flowers".format(BUCKET_NAME)
Explanation: Vertex AI Pipelines constants
Set up the following constants for Vertex AI Pipelines:
End of explanation
import kfp
Explanation: Additional imports.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
@kfp.dsl.pipeline(name="automl-image-training-v2")
def pipeline(project: str = PROJECT_ID, region: str = REGION):
from google_cloud_pipeline_components import aiplatform as gcc_aip
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
ds_op = gcc_aip.ImageDatasetCreateOp(
project=project,
display_name="flowers",
gcs_source="gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv",
import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)
training_job_run_op = gcc_aip.AutoMLImageTrainingJobRunOp(
project=project,
display_name="train-automl-flowers",
prediction_type="classification",
model_type="CLOUD",
dataset=ds_op.outputs["dataset"],
model_display_name="train-automl-flowers",
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
)
endpoint_op = EndpointCreateOp(
project=project,
location=region,
display_name="train-automl-flowers",
)
ModelDeployOp(
model=training_job_run_op.outputs["model"],
endpoint=endpoint_op.outputs["endpoint"],
automatic_resources_min_replica_count=1,
automatic_resources_max_replica_count=1,
)
Explanation: Define AutoML image classification model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML image classification Model resource using a Dataset resource.
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="image classification_pipeline.json".replace(" ", "_"),
)
Explanation: Compile the pipeline
Next, compile the pipeline.
End of explanation
DISPLAY_NAME = "flowers_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="image classification_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm image_classification_pipeline.json
Explanation: Run the pipeline
Next, run the pipeline.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "image" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "image" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "image" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "image" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Click on the generated link to see your run in the Cloud Console.
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_image_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_image_classif.png" width="40%"/></a>
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
13,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Data and Preprocessing
Step1: factorplot and FacetGrid | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns

names = [
'mpg'
, 'cylinders'
, 'displacement'
, 'horsepower'
, 'weight'
, 'acceleration'
, 'model_year'
, 'origin'
, 'car_name'
]
# reading the file and assigning the header
df = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data", sep='\s+', names=names)
df['maker'] = df.car_name.map(lambda x: x.split()[0])
df.origin = df.origin.map({1: 'America', 2: 'Europe', 3: 'Asia'})
df=df.applymap(lambda x: np.nan if x == '?' else x).dropna()
df['horsepower'] = df.horsepower.astype(float)
df.head()
Explanation: Getting Data and Preprocessing
End of explanation
sns.factorplot(data=df, x="model_year", y="mpg")
sns.factorplot(data=df, x="model_year", y="mpg", col="origin")
Explanation: factorplot and FacetGrid
End of explanation |
13,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Question 1
Step1: We check with sympy that the computation is correct
Step2: Question 2
Step3: We check with sympy that this computation is correct
Step4: Question 3
Step5: We check that the est_premier function works correctly
Step6: First, note that there is no quadruple of prime numbers $p$, $p+2$, $p+4$, $p+6$, because one of them must be divisible by three (indeed, their values modulo 3 are $p$, $p+2$, $p+1$). Since 3 is the only prime divisible by three, such a quadruple would have to contain the number 3, and no such quadruple exists.
Step7: Question 4 | Python Code:
def somme(A, B):
C = []
for i in range(4):
Ai = A[i]
Bi = B[i]
row = [Ai[j]+Bi[j] for j in range(4)]
C.append(row)
return C
X = [[56, 39, 3, 41],
[23, 78, 11, 62],
[61, 26, 65, 51],
[80, 98, 9, 68]]
Y = [[51, 52, 53, 15],
[ 1, 71, 46, 31],
[99, 7, 92, 12],
[15, 43, 36, 51]]
somme(X, Y)
Explanation: Question 1
End of explanation
from sympy import Matrix
Mx = Matrix(X)
My = Matrix(Y)
Mx + My
Explanation: We check with sympy that the computation is correct:
End of explanation
def produit(A, B):
C = []
for i in range(4):
row = []
for j in range(4):
row.append(sum(A[i][k]*B[k][j] for k in range(4)))
C.append(row)
return C
produit(X,Y)
Explanation: Question 2
End of explanation
Mx * My
Explanation: We check with sympy that this computation is correct:
End of explanation
from math import sqrt
def est_premier(n):
if n == 0 or n == 1:
return False
for i in range(2, int(sqrt(n))+1):
if n % i == 0:
return False
return True
Explanation: Question 3
End of explanation
for i in range(100):
if est_premier(i):
print(i, end=', ')
Explanation: We check that the est_premier function works correctly:
End of explanation
def triplets_nombre_premier(n):
L = []
p = 3
while len(L) < n:
if est_premier(p) and est_premier(p+6):
if est_premier(p+2):
L.append((p, p+2, p+6))
elif est_premier(p+4):
L.append((p, p+4, p+6))
p += 2
return L
triplets_nombre_premier(10)
Explanation: First, note that there is no quadruple of prime numbers $p$, $p+2$, $p+4$, $p+6$, because one of them must be divisible by three (indeed, their values modulo 3 are $p$, $p+2$, $p+1$). Since 3 is the only prime divisible by three, such a quadruple would have to contain the number 3, and no such quadruple exists.
End of explanation
def triplets_pythagore(n):
L = []
for c in range(1, n+1):
for b in range(1, c+1):
for a in range(1, b+1):
if a**2+b**2 == c**2:
L.append((a,b,c))
return L
triplets_pythagore(30)
Explanation: Question 4
End of explanation |
13,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Creating an instance of the solow.Model class
In this notebook I will walk you through the creation of an instance of the solow.Model class. To create an instance of the solow.Model we must define two primitives
Step1: Examples
Step2: 1.2 Defining model parameters
A generic Solow growth model has several parameters that need to be specified. To see which parameters are required, we can check the docstring of the solow.Model.params attribute.
Step3: In addition to the standard parameters $g, n, s, \delta$, one will also need to specify any required parameters for the production function. To make sure that parameter values are consistent with the model's assumptions, some basic validation of the solow.Model.params attribute is done whenever the attribute is set.
Step4: Examples
Step5: 1.3 Other attributes of the solow.Model class
The intensive form of the production function
The assumption of constant returns to scale allows us to work with the intensive form of the aggregate production function, $F$. Defining $c=1/AL$ one can write
$$ F\bigg(\frac{K}{AL}, 1\bigg) = \frac{1}{AL}F(K, AL) \tag{1.3.1} $$
Defining $k=K/AL$ and $y=Y/AL$ to be capital per unit effective labor and output per unit effective labor, respectively, the intensive form of the production function can be written as
$$ y = f(k). \tag{1.3.2}$$
Additional assumptions are that $f$ satisfies $f(0)=0$, is concave (i.e., $f'(k) > 0, f''(k) < 0$), and satisfies the Inada conditions
Step6: One can numerically evaluate the intensive output for various values of capital stock (per unit effective labor) as follows...
Step7: The marginal product of capital
The marginal product of capital is defined as follows
Step8: One can numerically evaluate the marginal product of capital for various values of capital stock (per unit effective labor) as follows...
Step9: Equation of motion for capital (per unit effective labor)
Because the economy is growing over time due to technological progress, $g$, and population growth, $n$, it makes sense to focus on the capital stock per unit effective labor, $k$, rather than aggregate physical capital, $K$. Since, by definition, $k=K/AL$, we can apply the chain rule to the time derivative of $k$.
\begin{align}
\dot{k}(t) =& \frac{\dot{K}(t)}{A(t)L(t)} - \frac{K(t)}{[A(t)L(t)]^2}\bigg[\dot{A}(t)L(t) + \dot{L}(t)A(t)\bigg] \
=& \frac{\dot{K}(t)}{A(t)L(t)} - \bigg(\frac{\dot{A}(t)}{A(t)} + \frac{\dot{L}(t)}{L(t)}\bigg)\frac{K(t)}{A(t)L(t)} \tag{1.3.4}
\end{align}
By definition, $k=K/AL$, and by assumption $\dot{A}/A$ and $\dot{L}/L$ are $g$ and $n$ respectively. Aggregate capital stock evolves according to
$$ \dot{K}(t) = sF(K(t), A(t)L(t)) - \delta K(t). \tag{1.3.5}$$
Substituting these facts into the above equation yields the equation of
motion for capital stock (per unit effective labor).
\begin{align}
\dot{k}(t) =& \frac{sF(K(t), A(t)L(t)) - \delta K(t)}{A(t)L(t)} - (g + n)k(t) \
=& \frac{sY(t)}{A(t)L(t)} - (g + n + \delta)k(t) \
=& sf(k(t)) - (g + n + \delta)k(t) \tag{1.3.6}
\end{align}
The above information is available for reference in the docstring for the solow.Model.k_dot attribute.
Step10: One can numerically evaluate the equation of motion for capital (per unit effective labor) for various values of capital stock (per unit effective labor) as follows...
Step11: 1.4 Sub-classing the solow.Model class
Several commonly used functional forms for aggregate production, including both the Cobb-Douglas and Constant Elasticity of Substitution (CES) production functions, have been sub-classed from solow.Model. For these functional forms, one only needs to specify a valid dictionary of model parameters. | Python Code:
solowpy.Model.output?
Explanation: 1 Creating an instance of the solow.Model class
In this notebook I will walk you through the creation of an instance of the solow.Model class. To create an instance of the solow.Model we must define two primitives: an aggregate production function and a dictionary of model parameter values.
1.1 Defining the production function $F$:
At each point in time the economy in a Solow growth model has some amounts of capital, $K$, labor, $L$, and knowledge (or technology), $A$, that can be combined to produce output, $Y$, according to some function, $F$:
$$ Y(t) = F(K(t), A(t)L(t)) \tag{1.1.1} $$
where $t$ denotes time. Note that $A$ and $L$ are assumed to enter multiplicatively. Typically $A(t)L(t)$ denotes "effective labor", and technology that enters in this fashion is known as labor-augmenting or "Harrod neutral."
A key assumption of the model is that the function $F$ exhibits constant returns to scale in capital and labor inputs. Specifically,
$$ F(cK(t), cA(t)L(t)) = cF(K(t), A(t)L(t)) = cY(t) \tag {1.1.2} $$
for any $c \ge 0$. For reference, the above information is contained in the docstring of the solow.Model.output attribute.
End of explanation
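As a quick sanity check (an editorial addition, not part of the original notebook), one can verify the constant-returns-to-scale property for the Cobb-Douglas case with sympy:
import sympy as sym

A, K, L, c, alpha = sym.symbols('A, K, L, c, alpha', positive=True)
F = K**alpha * (A * L)**(1 - alpha)
# Scaling K and L by c scales both K and A*L by c; the ratio should simplify to c
print(sym.simplify(F.subs({K: c*K, L: c*L}) / F))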
# define model variables
A, K, L = sym.symbols('A, K, L')
# define production parameters
alpha, sigma = sym.symbols('alpha, sigma')
# define a production function
cobb_douglas_output = K**alpha * (A * L)**(1 - alpha)
rho = (sigma - 1) / sigma
ces_output = (alpha * K**rho + (1 - alpha) * (A * L)**rho)**(1 / rho)
Explanation: Examples:
A common functional form for aggregate production in a Solow model that satisfies the above assumptions is the Cobb-Douglas production function
\begin{equation}
\lim_{\rho \rightarrow 0} Y(t) = K(t)^{\alpha}(A(t)L(t))^{1-\alpha}. \tag{1.1.3}
\end{equation}
The Cobb-Douglas production function is actually a special case of a more general class of production functions called constant elasticity of substitution (CES) production functions.
\begin{equation}
Y(t) = \bigg[\alpha K(t)^{\rho} + (1-\alpha) (A(t)L(t))^{\rho}\bigg]^{\frac{1}{\rho}} \tag{1.1.4}
\end{equation}
where $0 < \alpha < 1$ and $-\infty < \rho < 1$. The parameter $\rho = \frac{\sigma - 1}{\sigma}$, where $\sigma$ is the elasticity of substitution between factors of production. Taking the limit of equation 1.1.4 as the elasticity of substitution goes to unity (i.e., $\sigma=1 \implies \rho=0$) recovers the Cobb-Douglas functional form.
End of explanation
solowpy.Model.params?
Explanation: 1.2 Defining model parameters
A generic Solow growth model has several parameters that need to be specified. To see which parameters are required, we can check the docstring of the solow.Model.params attribute.
End of explanation
# these parameters look fishy...why?
default_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.0, 'n': -0.03, 's': 0.15,
'delta': 0.01, 'alpha': 0.33}
# ...raises an AttributeError
model = solowpy.Model(output=cobb_douglas_output, params=default_params)
Explanation: In addition to the standard parameters $g, n, s, \delta$, one will also need to specify any required parameters for the production function. To make sure that parameter values are consistent with the model's assumptions, some basic validation of the solow.Model.params attribute is done whenever the attribute is set.
End of explanation
cobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33}
cobb_douglas_model = solowpy.Model(output=cobb_douglas_output,
params=cobb_douglas_params)
ces_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33, 'sigma': 0.95}
ces_model = solowpy.Model(output=ces_output, params=ces_params)
Explanation: Examples:
Here are some examples of how one successfully creates an instance of the solow.Model class...
End of explanation
solowpy.Model.intensive_output?
ces_model.intensive_output
Explanation: 1.3 Other attributes of the solow.Model class
The intensive form of the production function
The assumption of constant returns to scale allows us to work with the intensive form of the aggregate production function, $F$. Defining $c=1/AL$ one can write
$$ F\bigg(\frac{K}{AL}, 1\bigg) = \frac{1}{AL}F(K, AL) \tag{1.3.1} $$
Defining $k=K/AL$ and $y=Y/AL$ to be capital per unit effective labor and output per unit effective labor, respectively, the intensive form of the production function can be written as
$$ y = f(k). \tag{1.3.2}$$
Additional assumptions are that $f$ satisfies $f(0)=0$, is concave (i.e., $f'(k) > 0, f''(k) < 0$), and satisfies the Inada conditions: $\lim_{k \rightarrow 0} = \infty$ and $\lim_{k \rightarrow \infty} = 0$. The <cite data-cite="inada1964">(Inada, 1964)</cite> conditions are sufficient (but not necessary!) to ensure that the time path of capital per effective worker does not explode. Much of the above information is actually taken straight from the docstring for the solow.Model.intensive_output attribute.
End of explanation
ces_model.evaluate_intensive_output(np.linspace(1.0, 10.0, 25))
Explanation: One can numerically evaluate the intensive output for various values of capital stock (per unit effective labor) as follows...
End of explanation
solowpy.Model.marginal_product_capital?
ces_model.marginal_product_capital
Explanation: The marginal product of capital
The marginal product of capital is defined as follows:
$$ \frac{\partial F(K, AL)}{\partial K} \equiv f'(k) \tag{1.3.3}$$
where $k=K/AL$ is capital stock (per unit effective labor).
End of explanation
ces_model.evaluate_mpk(np.linspace(1.0, 10.0, 25))
Explanation: One can numerically evaluate the marginal product of capital for various values of capital stock (per unit effective labor) as follows...
End of explanation
solowpy.Model.k_dot?
ces_model.k_dot
Explanation: Equation of motion for capital (per unit effective labor)
Because the economy is growing over time due to technological progress, $g$, and population growth, $n$, it makes sense to focus on the capital stock per unit effective labor, $k$, rather than aggregate physical capital, $K$. Since, by definition, $k=K/AL$, we can apply the chain rule to the time derivative of $k$.
\begin{align}
\dot{k}(t) =& \frac{\dot{K}(t)}{A(t)L(t)} - \frac{K(t)}{[A(t)L(t)]^2}\bigg[\dot{A}(t)L(t) + \dot{L}(t)A(t)\bigg] \
=& \frac{\dot{K}(t)}{A(t)L(t)} - \bigg(\frac{\dot{A}(t)}{A(t)} + \frac{\dot{L}(t)}{L(t)}\bigg)\frac{K(t)}{A(t)L(t)} \tag{1.3.4}
\end{align}
By definition, $k=K/AL$, and by assumption $\dot{A}/A$ and $\dot{L}/L$ are $g$ and $n$ respectively. Aggregate capital stock evolves according to
$$ \dot{K}(t) = sF(K(t), A(t)L(t)) - \delta K(t). \tag{1.3.5}$$
Substituting these facts into the above equation yields the equation of
motion for capital stock (per unit effective labor).
\begin{align}
\dot{k}(t) =& \frac{sF(K(t), A(t)L(t)) - \delta K(t)}{A(t)L(t)} - (g + n)k(t) \
=& \frac{sY(t)}{A(t)L(t)} - (g + n + \delta)k(t) \
=& sf(k(t)) - (g + n + \delta)k(t) \tag{1.3.6}
\end{align}
The above information is available for reference in the docstring for the solow.Model.k_dot attribute.
End of explanation
ces_model.evaluate_k_dot(np.linspace(1.0, 10.0, 25))
Explanation: One can numerically evaluate the equation of motion for capital (per unit effective labor) for various values of capital stock (per unit effective labor) as follows...
End of explanation
solowpy.cobb_douglas?
cobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params)
solowpy.ces?
ces_model = solowpy.CESModel(params=ces_params)
Explanation: 1.4 Sub-classing the solow.Model class
Several commonly used functional forms for aggregate production, including both the Cobb-Douglas and Constant Elasticity of Substitution (CES) production functions, have been sub-classed from solow.Model. For these functional forms, one only needs to specify a valid dictionary of model parameters.
End of explanation |
13,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialize PySpark
First, we create a SparkSession to initialize PySpark.
Step1: Hello, World!
Loading data, mapping it and collecting the records into RAM...
Step2: Creating Objects from CSV using pyspark.RDD.map
Using a function with a map operation to create objects (dicts) as records...
Step3: pyspark.RDD.groupBy
Using the groupBy operator to count the number of jobs per person...
Step4: Exercises
Use pyspark.RDD.groupBy to group executives by job title, then prepare records with the job title and the count of the number of executives with that job.
Map vs FlatMap
We need to understand the difference between the map and flatmap operators. Map groups items per-record, while flatMap creates a single large group of items.
Step5: Creating Rows
We can create pyspark.sql.Rows out of python objects so we can create pyspark.sql.DataFrames. This is desirable because once we have DataFrames we can run Spark SQL on our data.
Step6: Exercises
First count the number of companies for each executive, then create a pyspark.sql.Row for this result.
Step8: Creating DataFrames from RDDs
Using the RDD.toDF() method to create a dataframe, registering the DataFrame as a temporary table with Spark SQL, and counting the jobs per person using Spark SQL.
Step9: SparkContext.parallelize()
The opposite of pyspark.RDD.collect() is SparkContext.parallelize(). Whereas collect pulls data from Spark's memory into local RAM, parallelize sends data from local memory to Spark's memory.
You can access it like this
Step11: Exercises
Create your own RDD of dict elements with named fields using sc.parallelize. Make it at least 5 records long.
Convert this RDD of dicts into an RDD of pyspark.sql.Row elements.
Convert this RDD of pyspark.sql.Rows into a pyspark.sql.DataFrame.
Run a SQL GROUP BY/COUNT on your new DataFrame.
Step12: Creating RDDs from DataFrames
We can easily convert back from a DataFrame to an RDD using the pyspark.sql.DataFrame.rdd() method, along with pyspark.sql.Row.asDict() if we desire a Python dict of our records.
Step13: Exercises
Using the data from item 4 in the exercise above, convert the data back to its original form, a local collection of dict elements.
Step14: Loading and Inspecting Parquet Files
Using the SparkSession to load files as DataFrames and inspecting their contents...
Step15: DataFrame Workflow
Step16: From Minutes to Hours
Now lets convert our AirTime from minutes to hours by dividing by 60.
Step17: Raw Calculation
Now lets calculate miles per hour!
Step18: Investigating nulls
It looks like some records produce errors in our calculation because of missing fields. Let's bring back the Distance and AirTime fields to see where the problem is coming from.
Step19: Filtering nulls
Now that we know some records are missing AirTimes, we can filter those records using pyspark.sql.DataFrame.filter(). Starting from the beginning, lets recalculate our values.
Step20: Averaging Speed
How fast does the fleet travel overall? Lets compute the average speed for the entire fleet.
Step22: It looks like the average speed of the fleet is 408 mph. Note how along the way we checked the data for sanity, which led to confidence in our answer. SQL, by contrast, can hide the internals of a query, which might have skewed our average significantly!
SQL-Based Speed Calculation
Now lets work the same thing out in SQL. Starting from the top
Step23: Evaluating SQL
The SQL based solution seems to be better in this case, because we can simply express our calculation all in one place. When complexity grows however, it is best to break a single query into multiple stages where you use SQL or Dataflow programming to massage the data into shape.
Calculating Histograms
Having computed the speed in miles per hour and the overall average speed of passenger jets in the US, let's dig deeper by using the RDD API's histogram method to calculate histogram buckets and values, which we will then use to visualize data.
Step25: Visualizing Histograms
The problem with the output above is that it is hard to interpret. For better understanding, we need a visualization. We can use matplotlib inline in a Jupyter Notebook to visualize this distribution and see how airplane speeds are distributed around the mean of 408 mph.
Step26: Iterating on a Histogram
That looks interesting, but the bars seem too fat to really see what is going on. Let's double the number of buckets from 10 to 20. We can reuse the create_hist() method to do so.
Step27: Speed Summary
You've now seen how to calculate different values in both SQL and Dataflow style, how to switch between the two methods, how to switch between the pyspark.RDD and pyspark.sql.DataFrame APIs and you're starting to build a proficiency in PySpark!
Counting Airplanes in the US Fleet
Lets convert our on_time_dataframe (a DataFrame) into an RDD to calculate the total number of airplanes in the US fleet.
Step28: Exercise 1
Step29: Calculating with DataFrame.groupBy
We can use Spark SQL to calculate things using DataFrames, but we can also group data and calculate as we did with RDDs. For a full list of methods you can apply to grouped DataFrames, see the documentation for pyspark.sql.GroupedData. Below we will demonstrate some of these methods.
Step30: Pivoting DataFrames
One useful function of DataFrames is pivot. Pivot lets you compute pivot tables from data. Lets use pivot to calculate the average flight times between Atlanta ATL and other airports.
Step32: Plotting Scatterplots
Another type of visualization that is of interest to data scientists is the scatterplot. A scatterplot enables us to compare the trend of one value plotted against the other. For example, we could calculate the relationship between Origin and Dest Distance and the Mph speed figure we calculated earlier. Are longer flights generally faster, or not?
To prepare a scatterplot, we need to use matplotlib again, so we'll need to look at what its scatterplot API expects. The matplotlib.pyplot.scatter API takes two independent lists of values for the variables x and y, so we must compute them for Distance and Mph.
Step33: Collecting Data
Note that we have to move our data from the Spark cluster's memory into our local computer's memory, where matplotlib runs.
Step34: Sampling Data
When I tried to plot this data, it took a very long time to draw. This is because... well, how many unique values are there for each variable? Lets see.
Step35: It is hard to plot 5.7 million dots on a scatterplot that will fit on a computer screen. So lets sample our data. We can use PySpark DataFrame's sample method. Lets take a 0.1% random sample without replacement, which will leave us with 5,687 or so data points - something we can more easily manage.
Step36: Note that we need to sample once and then split the datasets out - otherwise the data for a single observation will be scrambled across variables. We don't want that! All our scatterplots would show no relationships at all.
Step37: Fun with matplotlib.pyplot.scatter
Now we feed the scatter API distance as x and speed as y, giving it a title and x and y axes. Note that we also specify a size in inches via the figure.figsize rcParam.
Step38: Interpreting Our Scatterplot
We can see pretty clearly that as distance increases, average speed across that distance increases rapidly and then levels off as the distance increases.
Exercises
Query the on_time_dataframe to focus on two numeric fields.
Plot a histogram of one of these fields
Plot a scatterplot of both of these fields
Predicting Speed Given Distance
It is often the case that once we characterize a distribution, we want to create a function to predict one variable given the other. Let's take this example further by fitting a polynomial regression to describe our data. We use sklearn.pipeline.Pipeline to chain a sklearn.preprocessing.PolynomialFeatures to a sklearn.linear_model.LinearRegression. Other than that, we simply define x and y, and fit a model to those values. Then we finally compute a cross-validation score to see the model's performance. We'll see this pattern again when we use large data tools in Spark MLlib.
Step39: Visualizing Polynomial Fit
Because we are running a polynomial regression, we get to decide the degree of the polynomial. To help decide, let's plot a polynomial fit line to the data using matplotlib.
Step40: Joining Data in PySpark
Next we're going to learn how to join datasets using PySpark. We're going to pick up an example that we work through in Chapter 6 and explore it more deeply. To begin with, we will prepare a list of TailNum (tail numbers) from the FAA flight records. These uniquely identify each airplane from each flight.
Unique Tail Numbers
Step41: FAA Airplane Records
We will trim the FAA records down to just the TailNum, Model and Engine_Model. Note that we go ahead and rename the TailNum field to FAATailNum using the pyspark.sql.functions.alias() method. This avoids having two fields referenced by the same name once we perform our joins.
Step42: Inner Joins
You may be familiar with an inner join from SQL. An inner join joins two datasets based on the presence of a key from one dataset in the other. Records which don't have a key that appears in the other table do not appear in the final output.
Step43: Inner Join Results
Note that there are as many records in the output as there were in the FAA Airplane dataset - indicating that there was a representative of every tail number from that dataset in the on-time performance flight records. Lets take a look at the records themselves.
Step44: Note how convenient it is that we renamed one of the keys FAATailNum. If we hadn't, we'd have two columns with the same name now and would have trouble referring to one or the other.
Left Outer Join
Another type of join is the left outer join. It ensures that one record will remain in the output from the left side of the join no matter what. If a match on the join keys is found, the fields for the record on the right will be filled. If a match is not found, they will be empty.
Lets look at how this works with our two datasets.
Step45: Left Outer Join Result
Note that there were 4,898 records on the left side of our join and there are the same number on the output of our join. Lets take a look at what both matched and unmatched records look like
Step46: Note that some records have fields filled out, and some don't.
Right Outer Join
Another type of join is the right outer join. This works the opposite of a left outer join. In this case, the output will preserve a record for each and every record on the right side of the join. Use the right_outer join type to perform this kind of join.
Exercises
Go back and perform a right outer join on the preceding two datasets. Is the distinct() call on the FAA on-time performance records still needed? Why or why not?
Using RDDs and Map/Reduce to Prepare a Complex Record
Step47: Counting Late Flights
Step48: Counting Flights with Hero Captains
"Hero Captains" are those that depart late but make up time in the air and arrive on time or early.
Step49: Printing Our Results
Step51: Computing the Average Lateness Per Flight
Step53: Inspecting Late Flights
Step55: Determining Why Flights Are Late
Step56: Computing a Histogram of Weather Delayed Flights
Step57: Preparing a Histogram for Visualization by d3.js
Step58: Building a Classifier Model to Predict Flight Delays
Loading Our Data
Step59: Check Data for Nulls
Step60: Add a Route Column
Demonstrating the addition of a feature to our model...
Step61: Bucketizing ArrDelay into ArrDelayBucket
Step62: Indexing Our String Fields into Numeric Fields
Step63: Combining Numeric Fields into a Single Vector
Step64: Training Our Model in an Experimental Setup | Python Code:
from pyspark.sql import SparkSession
# Initialize PySpark with MongoDB support
APP_NAME = "Introducing PySpark"
spark = (
SparkSession.builder.appName(APP_NAME)
# Load support for MongoDB and Elasticsearch
.config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.elasticsearch:elasticsearch-spark-30_2.12:7.14.2")
    # Add configuration for MongoDB
.config("spark.mongodb.input.uri", "mongodb://mongo:27017/test.coll")
.config("spark.mongodb.output.uri", "mongodb://mongo:27017/test.coll")
.getOrCreate()
)
sc = spark.sparkContext
sc.setLogLevel("ERROR")
print("\nPySpark initialized...")
Explanation: Initialize PySpark
First, we create a SparkSession to initialize PySpark.
End of explanation
# Load the text file using the SparkContext
csv_lines = sc.textFile("../data/example.csv")
# Map the data to split the lines into a list
data = csv_lines.map(lambda line: line.split(","))
# Collect the dataset into local RAM
data.collect()
Explanation: Hello, World!
Loading data, mapping it and collecting the records into RAM...
End of explanation
# Turn the CSV lines into objects
def csv_to_record(line):
parts = line.split(",")
record = {
"name": parts[0],
"company": parts[1],
"title": parts[2]
}
return record
# Apply the function to every record
records = csv_lines.map(csv_to_record)
# Inspect the first item in the dataset
records.first()
Explanation: Creating Objects from CSV using pyspark.RDD.map
Using a function with a map operation to create objects (dicts) as records...
End of explanation
# Group the records by the name of the person
grouped_records = records.groupBy(lambda x: x["name"])
# Show the first group
print(grouped_records.first())
# Count the groups
job_counts = grouped_records.map(
lambda x: {
"name": x[0],
"job_count": len(x[1])
}
)
job_counts.collect()
Explanation: pyspark.RDD.groupBy
Using the groupBy operator to count the number of jobs per person...
End of explanation
# Compute a relation of words by line
words_by_line = csv_lines\
.map(lambda line: line.split(","))
words_by_line.collect()
# Compute a relation of words
flattened_words = csv_lines\
.map(lambda line: line.split(","))\
.flatMap(lambda x: x)
flattened_words.collect()
lengths = flattened_words.map(lambda x: len(x))
lengths.collect()
lengths.sum() / lengths.count()
Explanation: Exercises
Use pyspark.RDD.groupBy to group executives by job title, then prepare records with the job title and the count of the number of executives with that job.
Map vs FlatMap
We need to understand the difference between the map and flatmap operators. Map groups items per-record, while flatMap creates a single large group of items.
End of explanation
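One possible sketch for the groupBy exercise above, using the records RDD of dicts defined earlier (an editorial addition, not original notebook code):
# Group executives by job title, then count how many executives hold each title
title_counts = records\
    .groupBy(lambda x: x["title"])\
    .map(lambda x: {"title": x[0], "total": len(x[1])})
title_counts.collect()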
from pyspark.sql import Row
# Convert the CSV into a pyspark.sql.Row
def csv_to_row(line):
parts = line.split(",")
row = Row(
name=parts[0],
company=parts[1],
title=parts[2]
)
return row
# Apply the function to get rows in an RDD
rows = csv_lines.map(csv_to_row)
rows.first()
Explanation: Creating Rows
We can create pyspark.sql.Rows out of python objects so we can create pyspark.sql.DataFrames. This is desirable because once we have DataFrames we can run Spark SQL on our data.
End of explanation
records = csv_lines.map(lambda line: line.split(','))
records.collect()
groups = records.groupBy(lambda x: x[0])
counts = groups.map(lambda x: (x[0], len(x[1])))
new_rows = counts.map(lambda x: Row(name=x[0], total=x[1]))
new_rows.collect()
new_rows.toDF().select("name","total").show()
Explanation: Exercises
First count the number of companies for each executive, then create a pyspark.sql.Row for this result.
End of explanation
# Convert to a pyspark.sql.DataFrame
rows_df = rows.toDF()
rows_df.show()
# Register the DataFrame for Spark SQL
rows_df.registerTempTable("executives")
# Generate a new DataFrame with SQL using the SparkSession
job_counts = spark.sql("""
SELECT
  name,
  COUNT(*) AS total
FROM executives
GROUP BY name
""")
job_counts.show()
# Go back to an RDD
job_counts.rdd.map(lambda x: x.asDict()).collect()
Explanation: Creating DataFrames from RDDs
Using the RDD.toDF() method to create a dataframe, registering the DataFrame as a temporary table with Spark SQL, and counting the jobs per person using Spark SQL.
End of explanation
my_rdd = sc.parallelize([1,2,3,4,5])
my_rdd.first()
Explanation: SparkContext.parallelize()
The opposite of pyspark.RDD.collect() is SparkContext.parallelize(). Whereas collect pulls data from Spark's memory into local RAM, parallelize sends data from local memory to Spark's memory.
You can access it like this:
End of explanation
my_data = [
{"name": "Russell Jurney", "interest": "Ancient Greece"},
{"name": "Chris Jurney", "interest": "Virtual Reality"},
{"name": "Bill Jurney", "interest": "Sports"},
{"name": "Ruth Jurney", "interest": "Wildlife"},
{"name": "Bob Smith", "interest": "Sports"}
]
my_rdd = sc.parallelize(my_data)
my_rows = my_rdd.map(lambda x: Row(name=x["name"], interest=x["interest"]))
my_df = my_rows.toDF()
my_df.show()
my_df.registerTempTable("people")
spark.sql("SELECT interest, COUNT(*) as total FROM people GROUP BY interest").show()
Explanation: Exercises
Create your own RDD of dict elements with named fields using sc.parallelize. Make it at least 5 records long.
Convert this RDD of dicts into an RDD of pyspark.sql.Row elements.
Convert this RDD of pyspark.sql.Rows into a pyspark.sql.DataFrame.
Run a SQL GROUP BY/COUNT on your new DataFrame.
End of explanation
job_counts.rdd.map(lambda x: x.asDict()).collect()
Explanation: Creating RDDs from DataFrames
We can easily convert back from a DataFrame to an RDD using the pyspark.sql.DataFrame.rdd() method, along with pyspark.sql.Row.asDict() if we desire a Python dict of our records.
End of explanation
my_df.rdd.map(lambda x: x.asDict()).collect()
Explanation: Exercises
Using the data from item 4 in the exercise above, convert the data back to its original form, a local collection of dict elements.
End of explanation
# Load the parquet file containing flight delay records
on_time_dataframe = spark.read.parquet('../data/on_time_performance.parquet')
# Register the data for Spark SQL
on_time_dataframe.registerTempTable("on_time_performance")
# Check out the columns
on_time_dataframe.columns
# Trim the fields and keep the result
trimmed_on_time = on_time_dataframe\
.select(
"FlightDate",
"TailNum",
"Origin",
"Dest",
"Carrier",
"DepDelay",
"ArrDelay"
)
# Sample 0.01% of the data and show
trimmed_on_time.sample(False, 0.0001).show(10)
sampled_ten_percent = trimmed_on_time.sample(False, 0.1)
sampled_ten_percent.show(10)
Explanation: Loading and Inspecting Parquet Files
Using the SparkSession to load files as DataFrames and inspecting their contents...
End of explanation
fd = on_time_dataframe.select("AirTime", "Distance")
fd.show(6)
Explanation: DataFrame Workflow: Calculating Speed in Dataflow and SQL
We can go back and forth between dataflow programming and SQL programming using pyspark.sql.DataFrames. This enables us to get the best of both worlds from these two APIs. For example, if we want to group records and get a total count for each group... a SQL SELECT/GROUP BY/COUNT is the most direct way to do it. On the other hand, if we want to filter data, a dataflow API call like DataFrame.filter() is the cleanest way. This comes down to personal preference for the user. In time you will develop your own style of working.
Dataflow Programming
If we were to look at the AirTime along with the Distance, we could get a good idea of how fast the airplanes were going. Pretty cool! Lets do this using Dataflows first.
Trimming Our Data
First lets select just the two columns of interest: AirTime and Distance. We can always go back and select more columns if we want to extend our analysis, but trimming uneeded fields optimizes performance right away.
End of explanation
hourly_fd = fd.select((fd.AirTime / 60).alias('Hours'), "Distance")
hourly_fd.show(5)
Explanation: From Minutes to Hours
Now lets convert our AirTime from minutes to hours by dividing by 60.
End of explanation
miles_per_hour = hourly_fd.select(
(hourly_fd.Distance / hourly_fd.Hours).alias('Mph')
)
miles_per_hour.show(10)
Explanation: Raw Calculation
Now lets calculate miles per hour!
End of explanation
fd.select(
"AirTime",
(fd.AirTime / 60).alias('Hours'),
"Distance"
).show()
Explanation: Investigating nulls
It looks like some records produce errors in our calculation because of missing fields. Let's bring back the Distance and AirTime fields to see where the problem is coming from.
End of explanation
fd = on_time_dataframe.select("AirTime", "Distance")
filled_fd = fd.filter(fd.AirTime.isNotNull())
hourly_fd = filled_fd.select(
"AirTime",
(filled_fd.AirTime / 60).alias('Hours'),
"Distance"
)
mph = hourly_fd.select((hourly_fd.Distance / hourly_fd.Hours).alias('Mph'))
mph.show(10)
Explanation: Filtering nulls
Now that we know some records are missing AirTimes, we can filter those records using pyspark.sql.DataFrame.filter(). Starting from the beginning, lets recalculate our values.
End of explanation
from pyspark.sql.functions import avg
mph.select(
    avg(mph.Mph)  # avg imported from pyspark.sql.functions above
).show()
Explanation: Averaging Speed
How fast does the fleet travel overall? Lets compute the average speed for the entire fleet.
End of explanation
on_time_dataframe.registerTempTable("on_time_performance")
mph = spark.sql("""
SELECT ( Distance / ( AirTime/60 ) ) AS Mph
FROM on_time_performance
WHERE AirTime IS NOT NULL
ORDER BY AirTime
""")
mph.show(10)
mph.registerTempTable("mph")
spark.sql("SELECT AVG(Mph) from mph").show()
Explanation: It looks like the average speed of the fleet is 408 mph. Note how along the way we checked the data for sanity, which led to confidence in our answer. SQL, by contrast, can hide the internals of a query, which might have skewed our average significantly!
SQL-Based Speed Calculation
Now lets work the same thing out in SQL. Starting from the top:
End of explanation
# Compute a histogram of departure delays
mph\
.select("Mph")\
.rdd\
.flatMap(lambda x: x)\
.histogram(10)
Explanation: Evaluating SQL
The SQL based solution seems to be better in this case, because we can simply express our calculation all in one place. When complexity grows however, it is best to break a single query into multiple stages where you use SQL or Dataflow programming to massage the data into shape.
Calculating Histograms
Having computed the speed in miles per hour and the overall average speed of passenger jets in the US, let's dig deeper by using the RDD API's histogram method to calculate histogram buckets and values, which we will then use to visualize data.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# Function to plot a histogram using pyplot
def create_hist(rdd_histogram_data):
    """Given an RDD.histogram, plot a pyplot histogram"""
heights = np.array(rdd_histogram_data[1])
full_bins = rdd_histogram_data[0]
mid_point_bins = full_bins[:-1]
widths = [abs(i - j) for i, j in zip(full_bins[:-1], full_bins[1:])]
bar = plt.bar(mid_point_bins, heights, width=widths, color='b')
return bar
# Compute a histogram of departure delays
departure_delay_histogram = mph\
.select("Mph")\
.rdd\
.flatMap(lambda x: x)\
.histogram(10)
create_hist(departure_delay_histogram)
Explanation: Visualizing Histograms
The problem with the output above is that it is hard to interpret. For better understanding, we need a visualization. We can use matplotlib inline in a Jupyter Notebook to visualize this distribution and see how airplane speeds are distributed around the mean of 408 mph.
End of explanation
# Compute a histogram of departure delays
departure_delay_histogram = mph\
.select("Mph")\
.rdd\
.flatMap(lambda x: x)\
.histogram(20)
create_hist(departure_delay_histogram)
Explanation: Iterating on a Histogram
That looks interesting, but the bars seem too fat to really see what is going on. Let's double the number of buckets from 10 to 20. We can reuse the create_hist() method to do so.
End of explanation
# Dump the unneeded fields
tail_numbers = on_time_dataframe.rdd.map(lambda x: x.TailNum)
tail_numbers = tail_numbers.filter(lambda x: x != '' and x is not None)
# distinct() gets us unique tail numbers
unique_tail_numbers = tail_numbers.distinct()
# now we need a count() of unique tail numbers
airplane_count = unique_tail_numbers.count()
print("Total airplanes: {}".format(airplane_count))
Explanation: Speed Summary
You've now seen how to calculate different values in both SQL and Dataflow style, how to switch between the two methods, how to switch between the pyspark.RDD and pyspark.sql.DataFrame APIs and you're starting to build a proficiency in PySpark!
Counting Airplanes in the US Fleet
Lets convert our on_time_dataframe (a DataFrame) into an RDD to calculate the total number of airplanes in the US fleet.
End of explanation
origin_hour_dist = on_time_dataframe.filter(
on_time_dataframe.AirTime.isNotNull()
).select(
"Origin",
(on_time_dataframe.AirTime/60).alias("Hours"),
"Distance"
)
mph_origins = origin_hour_dist.select(
"Origin",
(origin_hour_dist.Distance / origin_hour_dist.Hours).alias("Mph")
)
mph_origins.registerTempTable("mph_origins")
avg_speeds = mph_origins.groupBy("Origin").agg({"Mph": "avg"}).alias("Mph")
avg_speeds.show()
on_time_dataframe.columns
Explanation: Exercise 1: Characterizing Airports
Using the techniques we demonstrated above, calculate any 3 out of 4 of the following things using both the SQL and the Dataflow methods for each one. That is: implement each calculation twice - once in SQL and once using Dataflows. Try to use both the RDD and DataFrame APIs as you work.
How many airports are there in the United States?
What is the average flight time for flights arriving in San Francisco (SFO)? What does the distribution of this value look like? Plot a histogram using the create_hist method shown above.
Which American airport has the fastest out-bound speeds? What does the distribution of the flight speeds at this one airport look like? Plot a histogram using the create_hist method shown above.
What were the worst travel dates in terms of overall delayed flights in the US in 2015?
End of explanation
# Calculate average of every numeric field
on_time_dataframe.groupBy("Origin").avg().show(1)
# Calculate average AirTime per origin city
on_time_dataframe.groupBy("Origin").agg({"AirTime": "mean"}).show(1)
# Get the count of flights from each origin
on_time_dataframe.groupBy("Origin").count().show(1)
# Get the maximum airtime for flights leaving each city
on_time_dataframe.groupby("Origin").agg({"AirTime": "max"}).show(1)
# Get the maximum of all numeric columns for flights leaving each city
on_time_dataframe.groupBy("Origin").max().show(1)
# Get the shortest flight for each origin airport
on_time_dataframe.groupBy("Origin").agg({"AirTime": "min"}).show(1)
# Total minutes flown from each airport
on_time_dataframe.groupBy("Origin").agg({"AirTime": "sum"}).show(1)
Explanation: Calculating with DataFrame.groupBy
We can use Spark SQL to calculate things using DataFrames, but we can also group data and calculate as we did with RDDs. For a full list of methods you can apply to grouped DataFrames, see the documentation for pyspark.sql.GroupedData. Below we will demonstrate some of these methods.
End of explanation
on_time_dataframe\
.filter("Origin == 'ATL'")\
.groupBy("Origin")\
.pivot("Dest")\
.avg("AirTime")\
.rdd\
.map(lambda x: x.asDict())\
.collect()[0]
Explanation: Pivoting DataFrames
One useful function of DataFrames is pivot. Pivot lets you compute pivot tables from data. Lets use pivot to calculate the average flight times between Atlanta ATL and other airports.
End of explanation
mph = spark.sql("""
SELECT
  Distance,
  ( Distance / ( AirTime/60 ) ) AS Mph
FROM on_time_performance
WHERE AirTime IS NOT NULL
""")
mph.show(10)
Explanation: Plotting Scatterplots
Another type of visualization that is of interest to data scientists is the scatterplot. A scatterplot enables us to compare the trend of one value plotted against the other. For example, we could calculate the relationship between Origin and Dest Distance and the Mph speed figure we calculated earlier. Are longer flights generally faster, or not?
To prepare a scatterplot, we need to use matplotlib again, so we'll need to look at what its scatterplot API expects. The matplotlib.pyplot.scatter API takes two independent lists of values for the variables x and y, so we must compute them for Distance and Mph.
End of explanation
distance = mph.select("Distance").rdd.flatMap(lambda x: x)
distance = distance.collect()
distance[0:10]
speed = mph.select("Mph").rdd.flatMap(lambda x: x)
speed = speed.collect()
speed[0:10]
Explanation: Collecting Data
Note that we have to move our data from the Spark cluster's memory into our local computer's memory, where matplotlib runs.
End of explanation
print("Total distances: {:,}".format(len(distance)))
print("Total speeds: {:,}".format(len(speed)))
Explanation: Sampling Data
When I tried to plot this data, it took a very long time to draw. This is because... well, how many unique values are there for each variable? Lets see.
End of explanation
sample = mph.sample(False, 0.001)
sample.count()
Explanation: It is hard to plot 5.7 million dots on a scatterplot that will fit on a computer screen. So lets sample our data. We can use PySpark DataFrame's sample method. Lets take a 0.1% random sample without replacement, which will leave us with 5,687 or so data points - something we can more easily manage.
End of explanation
speed = sample.select("Mph").rdd.flatMap(lambda x: x).collect()
distance = sample.select("Distance").rdd.flatMap(lambda x: x).collect()
print("{:,} x {:,} records!".format(
len(speed),
len(distance)
))
Explanation: Note that we need to sample once and then split the datasets out - otherwise the data for a single observation will be scrambled across variables. We don't want that! All our scatterplots would show no relationships at all.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (18,12)
plt.scatter(
distance,
speed,
alpha=0.5
)
plt.title("Distance x Speed")
plt.xlabel("Distance")
plt.ylabel("Speed")
plt.show()
Explanation: Fun with matplotlib.pyplot.scatter
Now we feed the scatter API distance as x and speed as y, giving it a title and x and y axes. Note that we also specify a size in inches via the figure.figsize rcParam.
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
import numpy as np
x = np.array(distance)
y = np.array(speed)
x_test = np.arange(0, 5000, 100)
model = Pipeline([
('poly', PolynomialFeatures(degree=3)),
('linear', LinearRegression(fit_intercept=False))
])
model = model.fit(x[:, np.newaxis], y)
model.named_steps['linear'].coef_
y_out = model.predict(x_test.reshape(-1,1))
cross_val_score(model, x.reshape(-1,1), y)
Explanation: Interpreting Our Scatterplot
We can see pretty clearly that as distance increases, average speed across that distance increases rapidly and then levels off as the distance increases.
Exercises
Query the on_time_dataframe to focus on two numeric fields.
Plot a histogram of one of these fields
Plot a scatterplot of both of these fields
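One possible sketch for these exercises, picking DepDelay and ArrDelay (an editorial addition; any two numeric columns would work):
delays = on_time_dataframe.select("DepDelay", "ArrDelay").dropna()
# Histogram of one field, reusing create_hist() from earlier
create_hist(delays.select("DepDelay").rdd.flatMap(lambda x: x).histogram(20))
# Scatterplot of both fields on a small sample
pairs = delays.sample(False, 0.001).collect()
plt.scatter([r.DepDelay for r in pairs], [r.ArrDelay for r in pairs], alpha=0.5)
plt.show()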
Predicting Speed Given Distance
It is often the case that once we characterize a distribution, we want to create a function to predict one variable given the other. Let's take this example further by fitting a polynomial regression to describe our data. We use sklearn.pipeline.Pipeline to chain a sklearn.preprocessing.PolynomialFeatures to a sklearn.linear_model.LinearRegression. Other than that, we simply define x and y, and fit a model to those values. Then we finally compute a cross-validation score to see the model's performance. We'll see this pattern again when we use large data tools in Spark MLlib.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (18,12)
plt.scatter(
distance,
speed,
alpha=0.5
)
plt.plot(
x_test,
y_out,
color='orange',
linewidth=3
)
plt.title("Distance x Speed")
plt.xlabel("Distance")
plt.ylabel("Speed")
plt.show()
Explanation: Visualizing Polynomial Fit
Because we are running a polynomial regression, we get to decide the degree of the polynomial. To help decide, let's plot a polynomial fit line to the data using matplotlib.
End of explanation
tail_numbers = on_time_dataframe.select("TailNum").distinct()
tail_numbers.show(6)
Explanation: Joining Data in PySpark
Next we're going to learn how to join datasets using PySpark. We're going to pick up an example that we work through in Chapter 6 and explore it more deeply. To begin with, we will prepare a list of TailNum (tail numbers) from the FAA flight records. These uniquely identify each airplane from each flight.
Unique Tail Numbers
End of explanation
faa_tail_number_inquiry = spark.read.json('../data/faa_tail_number_inquiry.jsonl')
airplane_records = faa_tail_number_inquiry.select(
faa_tail_number_inquiry.TailNum.alias("FAATailNum"),
"Model",
"Engine_Model"
)
airplane_records.show(6)
Explanation: FAA Airplane Records
We will trim the FAA records down to just the TailNum, Model and Engine_Model. Note that we go ahead and rename the TailNum field to FAATailNum using the pyspark.sql.functions.alias() method. This avoids having two fields referenced by the same name once we perform our joins.
End of explanation
# INNER JOIN
print(
"FAA tail numbers: {:,}".format(
tail_numbers.count()
)
)
print(
"Airplane records: {:,}".format(
airplane_records.count()
)
)
inner_joined = tail_numbers.join(
airplane_records,
tail_numbers.TailNum == airplane_records.FAATailNum,
'inner'
)
print(
"Joined records: {:,}".format(
inner_joined.count()
)
)
Explanation: Inner Joins
You may be familiar with an inner join from SQL. An inner join joins two datasets based on the presence of a key from one dataset in the other. Records which don't have a key that appears in the other table do not appear in the final output.
End of explanation
inner_joined.show(6)
Explanation: Inner Join Results
Note that there are as many records in the output as there were in the FAA Airplane dataset - indicating that there was a representative of every tail number from that dataset in the on-time performance flight records. Lets take a look at the records themselves.
End of explanation
# LEFT OUTER JOIN
print(
"FAA tail numbers: {:,}".format(
tail_numbers.count()
)
)
print(
"Airplane records: {:,}".format(
airplane_records.count()
)
)
left_outer_joined = tail_numbers.join(
airplane_records,
tail_numbers.TailNum == airplane_records.FAATailNum,
'left_outer'
)
print(
"Joined records: {:,}".format(
left_outer_joined.count()
)
)
Explanation: Note how convenient it is that we renamed one of the keys FAATailNum. If we hadn't, we'd have two columns with the same name now and would have trouble referring to one or the other.
Left Outer Join
Another type of join is the left outer join. It ensures that one record will remain in the output from the left side of the join no matter what. If a match on the join keys is found, the fields for the record on the right will be filled. If a match is not found, they will be empty.
Lets look at how this works with our two datasets.
End of explanation
left_outer_joined.show(6)
Explanation: Left Outer Join Result
Note that there were 4,898 records on the left side of our join and there are the same number on the output of our join. Lets take a look at what both matched and unmatched records look like:
End of explanation
# Filter down to the fields we need to identify and link to a flight
flights = on_time_dataframe.rdd.map(
lambda x: (x.Carrier, x.FlightDate, x.FlightNum, x.Origin, x.Dest, x.TailNum)
)
# Group flights by tail number, sorted by date, then flight number, then origin/dest
flights_per_airplane = flights\
.map(lambda nameTuple: (nameTuple[5], [nameTuple[0:5]]))\
.reduceByKey(lambda a, b: a + b)\
.map(lambda tuple:
{
'TailNum': tuple[0],
'Flights': sorted(tuple[1], key=lambda x: (x[1], x[2], x[3], x[4]))
}
)
flights_per_airplane.first()
Explanation: Note that some records have fields filled out, and some don't.
Right Outer Join
Another type of join is the right outer join. This works the opposite of a left outer join. In this case, the output will preserve a record for each and every record on the right side of the join. Use the right_outer join type to perform this kind of join.
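A minimal sketch of such a join on the same two datasets (an editorial addition, not from the original notebook):
right_outer_joined = tail_numbers.join(
    airplane_records,
    tail_numbers.TailNum == airplane_records.FAATailNum,
    'right_outer'
)
right_outer_joined.count()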
Exercises
Go back and perform a right outer join on the preceding two datasets. Is the distinct() call on the FAA on-time performance records still needed? Why or why not?
Using RDDs and Map/Reduce to Prepare a Complex Record
End of explanation
total_flights = on_time_dataframe.count()
# Flights that were late leaving...
late_departures = on_time_dataframe.filter(
on_time_dataframe.DepDelayMinutes > 0
)
total_late_departures = late_departures.count()
print(f'{total_late_departures:,}')
# Flights that were late arriving...
late_arrivals = on_time_dataframe.filter(
on_time_dataframe.ArrDelayMinutes > 0
)
total_late_arrivals = late_arrivals.count()
print(f'{total_late_arrivals:,}')
# Get the percentage of flights that are late, rounded to 1 decimal place
pct_late = round((total_late_arrivals / (total_flights * 1.0)) * 100, 1)
pct_late
Explanation: Counting Late Flights
End of explanation
# Flights that left late but made up time to arrive on time...
on_time_heros = on_time_dataframe.filter(
(on_time_dataframe.DepDelayMinutes > 0)
&
(on_time_dataframe.ArrDelayMinutes <= 0)
)
total_on_time_heros = on_time_heros.count()
print(f'{total_on_time_heros:,}')
Explanation: Counting Flights with Hero Captains
"Hero Captains" are those that depart late but make up time in the air and arrive on time or early.
End of explanation
print("Total flights: {:,}".format(total_flights))
print("Late departures: {:,}".format(total_late_departures))
print("Late arrivals: {:,}".format(total_late_arrivals))
print("Recoveries: {:,}".format(total_on_time_heros))
print("Percentage Late: {}%".format(pct_late))
Explanation: Printing Our Results
End of explanation
# Get the average minutes late departing and arriving
spark.sql("""
SELECT
ROUND(AVG(DepDelay),1) AS AvgDepDelay,
ROUND(AVG(ArrDelay),1) AS AvgArrDelay
FROM on_time_performance
""").show()
Explanation: Computing the Average Lateness Per Flight
End of explanation
# Why are flights late? Let's look at some delayed flights and the delay causes
late_flights = spark.sql("""
SELECT
ArrDelayMinutes,
WeatherDelay,
CarrierDelay,
NASDelay,
SecurityDelay,
LateAircraftDelay
FROM
on_time_performance
WHERE
WeatherDelay IS NOT NULL
OR
CarrierDelay IS NOT NULL
OR
NASDelay IS NOT NULL
OR
SecurityDelay IS NOT NULL
OR
LateAircraftDelay IS NOT NULL
ORDER BY
FlightDate
""")
late_flights.sample(False, 0.01).show()
Explanation: Inspecting Late Flights
End of explanation
# Calculate the percentage contribution to delay for each source
total_delays = spark.sql("""
SELECT
ROUND(SUM(WeatherDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_weather_delay,
ROUND(SUM(CarrierDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_carrier_delay,
ROUND(SUM(NASDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_nas_delay,
ROUND(SUM(SecurityDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_security_delay,
ROUND(SUM(LateAircraftDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_late_aircraft_delay
FROM on_time_performance
""")
total_delays.show()
Explanation: Determining Why Flights Are Late
End of explanation
# Eyeball the first to define our buckets
weather_delay_histogram = on_time_dataframe\
.select("WeatherDelay")\
.rdd\
.flatMap(lambda x: x)\
.histogram([1, 5, 10, 15, 30, 60, 120, 240, 480, 720, 24*60.0])
print(weather_delay_histogram)
# See above for definition
create_hist(weather_delay_histogram)
Explanation: Computing a Histogram of Weather Delayed Flights
End of explanation
# Transform the data into something easily consumed by d3
def histogram_to_publishable(histogram):
record = {'key': 1, 'data': []}
for label, value in zip(histogram[0], histogram[1]):
record['data'].append(
{
'label': label,
'value': value
}
)
return record
# Recompute the weather histogram with a filter for on-time flights
weather_delay_histogram = on_time_dataframe\
.filter(
(on_time_dataframe.WeatherDelay.isNotNull())
&
(on_time_dataframe.WeatherDelay > 0)
)\
.select("WeatherDelay")\
.rdd\
.flatMap(lambda x: x)\
.histogram([0, 15, 30, 60, 120, 240, 480, 720, 24*60.0])
print(weather_delay_histogram)
record = histogram_to_publishable(weather_delay_histogram)
record
Explanation: Preparing a Histogram for Visualization by d3.js
End of explanation
from pyspark.sql.types import StringType, IntegerType, FloatType, DoubleType, DateType, TimestampType
from pyspark.sql.types import StructType, StructField
from pyspark.sql.functions import udf
schema = StructType([
StructField("ArrDelay", DoubleType(), True), # "ArrDelay":5.0
StructField("CRSArrTime", TimestampType(), True), # "CRSArrTime":"2015-12-31T03:20:00.000-08:00"
StructField("CRSDepTime", TimestampType(), True), # "CRSDepTime":"2015-12-31T03:05:00.000-08:00"
StructField("Carrier", StringType(), True), # "Carrier":"WN"
StructField("DayOfMonth", IntegerType(), True), # "DayOfMonth":31
StructField("DayOfWeek", IntegerType(), True), # "DayOfWeek":4
StructField("DayOfYear", IntegerType(), True), # "DayOfYear":365
StructField("DepDelay", DoubleType(), True), # "DepDelay":14.0
StructField("Dest", StringType(), True), # "Dest":"SAN"
StructField("Distance", DoubleType(), True), # "Distance":368.0
StructField("FlightDate", DateType(), True), # "FlightDate":"2015-12-30T16:00:00.000-08:00"
StructField("FlightNum", StringType(), True), # "FlightNum":"6109"
StructField("Origin", StringType(), True), # "Origin":"TUS"
])
features = spark.read.json(
"../data/simple_flight_delay_features.jsonl.bz2",
schema=schema
)
features.first()
Explanation: Building a Classifier Model to Predict Flight Delays
Loading Our Data
End of explanation
#
# Check for nulls in features before using Spark ML
#
null_counts = [(column, features.where(features[column].isNull()).count()) for column in features.columns]
cols_with_nulls = filter(lambda x: x[1] > 0, null_counts)
print(list(cols_with_nulls))
Explanation: Check Data for Nulls
End of explanation
#
# Add a Route variable to replace FlightNum
#
from pyspark.sql.functions import lit, concat
features_with_route = features.withColumn(
'Route',
concat(
features.Origin,
lit('-'),
features.Dest
)
)
features_with_route.select("Origin", "Dest", "Route").show(5)
Explanation: Add a Route Column
Demonstrating the addition of a feature to our model...
End of explanation
#
# Use pyspark.ml.feature.Bucketizer to bucketize ArrDelay
#
from pyspark.ml.feature import Bucketizer
splits = [-float("inf"), -15.0, 0, 30.0, float("inf")]
bucketizer = Bucketizer(
splits=splits,
inputCol="ArrDelay",
outputCol="ArrDelayBucket"
)
ml_bucketized_features = bucketizer.transform(features_with_route)
# Check the buckets out
ml_bucketized_features.select("ArrDelay", "ArrDelayBucket").show()
Explanation: Bucketizing ArrDelay into ArrDelayBucket
End of explanation
#
# Extract features with tools in pyspark.ml.feature
#
from pyspark.ml.feature import StringIndexer, VectorAssembler
# Turn category fields into categorical feature indexes, then drop intermediate fields
for column in ["Carrier", "DayOfMonth", "DayOfWeek", "DayOfYear",
"Origin", "Dest", "Route"]:
string_indexer = StringIndexer(
inputCol=column,
outputCol=column + "_index"
)
ml_bucketized_features = string_indexer.fit(ml_bucketized_features)\
.transform(ml_bucketized_features)
# Check out the indexes
ml_bucketized_features.show(6)
Explanation: Indexing Our String Fields into Numeric Fields
End of explanation
# Handle continuous, numeric fields by combining them into one feature vector
numeric_columns = ["DepDelay", "Distance"]
index_columns = ["Carrier_index", "DayOfMonth_index",
"DayOfWeek_index", "DayOfYear_index", "Origin_index",
"Origin_index", "Dest_index", "Route_index"]
vector_assembler = VectorAssembler(
inputCols=numeric_columns + index_columns,
outputCol="Features_vec"
)
final_vectorized_features = vector_assembler.transform(ml_bucketized_features)
# Drop the index columns
for column in index_columns:
final_vectorized_features = final_vectorized_features.drop(column)
# Check out the features
final_vectorized_features.show()
Explanation: Combining Numeric Fields into a Single Vector
End of explanation
#
# Cross validate, train and evaluate classifier
#
# Test/train split
training_data, test_data = final_vectorized_features.randomSplit([0.7, 0.3])
# Instantiate and fit random forest classifier
from pyspark.ml.classification import RandomForestClassifier
rfc = RandomForestClassifier(
featuresCol="Features_vec",
labelCol="ArrDelayBucket",
maxBins=4657,
maxMemoryInMB=1024
)
model = rfc.fit(training_data)
# Evaluate model using test data
predictions = model.transform(test_data)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(labelCol="ArrDelayBucket", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Accuracy = {}".format(accuracy))
# Check a sample
predictions.sample(False, 0.001, 18).orderBy("CRSDepTime").show(6)
Explanation: Training Our Model in an Experimental Setup
End of explanation |
13,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SETI Test Set Classification Accuracy
This notebook provides the code needed to calculate the performance of your signal classification models using the PREVIEW test set (see Step 1. Get Data notebook)
Step1: Scoring a Scorecard
The Preview test data set can be obtained in the Step 1. Get Data notebook. Using your trained model, you can generate a scorecard for this preview test data set. Your scorecard must be a CSV file with 8 columns. The first column value will contain the UUID and the next 7 will contain the probability estimates for each of the classes that were produced by your model. See the Judging Information notebook for more information.
Now you can score the scorecard using this code. [We now are providing the Preview test data set key in order for you to easily produce your own confusion matrix and scoring. This will give you the exact answers for the preview test set, of course.]
<br>
Using the Example Scorecard.
On the Judging Information notebook there is a link to download an example scorecard.
Step2: <br>
Using the Preview Test Set Key.
If I use a scorecard built from the preview test UUID,class CSV file, then I will get a perfect score. With the UUID,class file I created the private_list_primary_v3_testset_preview_scoreboard_key_29june_2017.csv. This scorecard will produce a perfect score.
Step3: Winning Team's Scorecard.
I've included the winning team's scorecard submitted to the preview test set scoreboard on July 21 in this repository. The scores for that scorecard are shown below.
Step4: How to score your own test set
You can use the score functions above with your own test data set parsed from the training data set to measure your model performance. Of course, this lets you test different models and different model parameters more quickly while keeping the preview test set available for your nearly completed model.
The following code will
* show how to split the training data into a training set and test set
* create some fake models to produce some predicted values for the test set
* pass those predicted values to the printsklearnScores function above
1. Split Up the Data
First, let's split our data up into a training data set and a test set. We start with the primary small
index file.
Step5: 2. Train Your Model
In normal operation, you'd then use the X_train set of UUIDs to grab the <UUID>.dat data files and produce spectrograms and features. You'd then pass your features, along with y_train, which contains
the labels, to your model for training.
Below, I've coded up two FAKE models. The randomModel produces random probabilities. The perfectModel actually uses the known values in the y_test -- so it will produce a perfect score.
Step6: 3. Make Predictions and Score
Next, you'd take the X_test set of UUIDs, extract the necessary spectrogram and features and pass that to your model
in order to predict their class. We use the two fake models from above | Python Code:
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import numpy as np
import sklearn
import csv
import operator
class_list = ['brightpixel', 'narrowband', 'narrowbanddrd', 'noise', 'squarepulsednarrowband', 'squiggle', 'squigglesquarepulsednarrowband']
fieldnames = ['uuid'] + class_list
#Helper functions for parsing the data and using sklearn to print scoring metrics
def classChooser(listOfDictionaryScores):
results = []
for row in listOfDictionaryScores:
rowscores = dict((k, float(row[k])) for k in class_list)
maxclass = max(rowscores.iteritems(), key=operator.itemgetter(1))[0]
results.append({'UUID':row['uuid'], 'SIGNAL_CLASSIFICATION':maxclass})
return results
def printsklearnScores(y_true, y_pred, y_prob):
print sklearn.metrics.classification_report(y_true,y_pred, digits=5)
print sklearn.metrics.confusion_matrix(y_true,y_pred)
print("Classification accuracy: %0.6f" % sklearn.metrics.accuracy_score(y_true,y_pred) )
print("Log Loss: %0.6f" % sklearn.metrics.log_loss(y_true,y_prob) )
# Takes a .csv scorecard file, compares the results to the preview testset UUID,Class file and
# prints the scores.
def score(resultsFile):
testSetFile = 'private_list_primary_v3_testset_preview_uuid_class_29june_2017.csv'
actual_uuid = csv.DictReader(open(testSetFile))
actual_uuid_list = [x for x in actual_uuid]
actual_uuid_list_sorted = sorted(actual_uuid_list, key=lambda k: k['UUID'])
classifier_results = csv.DictReader(open(resultsFile), fieldnames=fieldnames)
classifier_results_list = [x for x in classifier_results]
classifier_results_list_sorted = sorted(classifier_results_list, key=lambda k: k['uuid'])
#yc = classChooser(classifier_results_list_sorted)
#print yc[:5]
y_true = [x['SIGNAL_CLASSIFICATION'] for x in actual_uuid_list_sorted]
y_pred = [x['SIGNAL_CLASSIFICATION'] for x in classChooser(classifier_results_list_sorted)]
y_prob = [[float(row[cl]) for cl in class_list] for row in classifier_results_list_sorted]
printsklearnScores(y_true, y_pred, y_prob)
Explanation: SETI Test Set Classification Accuracy
This notebook provides the code needed to calculate the performance of your signal classification models using the PREVIEW test set (see Step 1. Get Data notebook)
End of explanation
score('example_scorecard_codechallenge_v3_testset_preview.csv')
Explanation: Scoring a Scorecard
The Preview test data set can be obtained in the Step 1. Get Data notebook. Using your trained model, you can generate a scorecard for this preview test data set. Your scorecard must be a CSV file with 8 columns. The first column value will contain the UUID and the next 7 will contain the probability estimates for each of the classes that were produced by your model. See the Judging Information notebook for more information.
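For example, a single (purely hypothetical) row of such a file might look like
00000000-1111-2222-3333-444444444444,0.01,0.85,0.05,0.03,0.02,0.02,0.02
where the seven probabilities follow the same class order as the class_list used by the scoring code.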
Now you can score the scorecard using this code. [We now are providing the Preview test data set key in order for you to easily produce your own confusion matrix and scoring. This will give you the exact answers for the preview test set, of course.]
<br>
Using the Example Scorecard.
On the Judging Information notebook there is a link to download an example scorecard.
End of explanation
#Test with the scoreboard key. This should get 100% accuracy
score('private_list_primary_v3_testset_preview_scoreboard_key_29june_2017.csv')
Explanation: <br>
Using the Preview Test Set Key.
If I use a scorecard built from the preview test UUID,class CSV file, then I will get a perfect score. With the UUID,class file I created the private_list_primary_v3_testset_preview_scoreboard_key_29june_2017.csv. This scorecard will produce a perfect score.
End of explanation
score("results_Effsubsee_best_preview_test_set.csv")
Explanation: Winning Team's Scorecard.
I've included the winning team's scorecard submitted to the preview test set scoreboard on July 21 in this repository. The scores for that scorecard are shown below.
End of explanation
indexfile = 'public_list_primary_v3_small_21june_2017.csv'
indexfile_uuid = csv.DictReader(open(indexfile))
indexfile_uuid_list = [x for x in indexfile_uuid]
indexfile_uuid_list = sorted(indexfile_uuid_list, key=lambda k: k['UUID'])
X = [x['UUID'] for x in indexfile_uuid_list]
y = [class_list.index(x['SIGNAL_CLASSIFICATION']) for x in indexfile_uuid_list] #also convert from class name to a number
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=42)
Explanation: How to score your own test set
You can use the score functions above with your own test data set parsed from the training data set to measure your model performance. Of course, this lets you test different models and different model parameters more quickly while keeping the preview test set available for your nearly completed model.
The following code will
* show how to split the training data into a training set and test set
* create some fake models to produce some predicted values for the test set
* pass those predicted values to the printsklearnScores function above
1. Split Up the Data
First, let's split our data up into a training data set and a test set. We start with the primary small
index file.
End of explanation
from sklearn.preprocessing import LabelBinarizer
# Example classes
# Your class, of course, would have actual code in the `train` functions and
# the predict function would also be different.
class randomModel(object):
def __init__(self):
pass
def train(self, X_train, y_train):
## do whatever
pass
def predict(self, X_test):
y_prob = np.random.rand(len(X_test), len(class_list))
return (y_prob.T / y_prob.sum(axis=1)).T
class perfectModel(object):
def __init__(self):
pass
def train(self, X_train, y_train):
## train
pass
def predict(self, X_test):
encoder = LabelBinarizer()
ytest_np = np.array(y_test).reshape(1,-1)
ytest_onehot = encoder.fit_transform(ytest_np.T)
return ytest_onehot
Explanation: 2. Train Your Model
In normal operation, you'd then use the X_train set of UUIDs to grab the <UUID>.dat data files and produce spectrograms and features. You'd then pass your features, along with y_train, which contains
the labels, to your model for training.
Below, I've coded up two FAKE models. The randomModel produces random probabilities. The perfectModel actually uses the known values in the y_test -- so it will produce a perfect score.
End of explanation
mRandModel = randomModel()
mRandModel.train(X_train, y_train)
y_prob = mRandModel.predict(X_test)
y_true = [class_list[i] for i in y_test]
y_pred = [class_list[probarray.argmax()] for probarray in y_prob]
print 'The randomModel class produces random probability estimates'
print y_prob[:5]
print ''
printsklearnScores(y_true, y_pred, y_prob)
mPerfectModel = perfectModel()
mPerfectModel.train(X_train, y_train)
y_prob = mPerfectModel.predict(X_test)
y_true = [class_list[i] for i in y_test]
y_pred = [class_list[probarray.argmax()] for probarray in y_prob]
print y_prob[:5]
printsklearnScores(y_true, y_pred, y_prob)
Explanation: 3. Make Predictions and Score
Next, you'd take the X_test set of UUIDs, extract the necessary spectrogram and features and pass that to your model
in order to predict their class. We use the two fake models from above: perfectModel and randomModel.
Each model.predict function returns a 2d array, M x K, where M is the number of samples in the test set passed into the function and K is the number of classes. The values for each row are the class probability predictions. Obviously, your model should produce a LogLoss and classification accuracy score somewhere between these two values.
End of explanation |
13,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practice with galaxy photometry and shape measurement
To accompany galaxy-measurement lecture from the LSSTC Data Science Fellowship Program, July 2020.
All questions and corrections can be directed to me at [email protected]
Enjoy!
Gary Bernstein, 16 July 2020
Step1: Useful tools
For our galaxy measurement practice, we'll be testing out some of our techniques on exponential profile galaxies, which are define by
$$ I(x,y) \propto e^{-r/r_0},$$
where $r_0$ is the "scale length," and we'll allow our galaxy to potentially be elliptical shaped by setting
$$ r^2 = (1-e^2) \left[ \frac{(x-x_0)^2}{1-e} + \frac{(y-y_0)^2}{1+e}\right].$$
To reduce the complexity of our problem, I'm only letting the galaxy have the $e_+$ form of ellipticity, where $e>0$ ($e<0$) means the galaxy is stretched along the $x$ ($y$) axis.
We're also going to assume that our galaxy is viewed through a circular Gaussian PSF
Step2: Exercise 1
Step3: (b) Next let's add some background noise to our image, say n_bg=100.
First, make one such noisy version of your galaxy and imshow it.
Then, using analytic methods, estimate what the variance of your aperture flux measurements will be when R=10.
* Finally, make 1000 different realizations of your noisy galaxy and measure their tophat_flux to see whether the real variance of the flux measurements matches your prediction.
Step4: Since the variance of each pixel is n_bg$=n$, the variance of our aperture flux, generically, is
$$ \textrm{Var}(f) = \textrm{Var} \sum_{xy} I_{xy} W(x,y) = \sum_{xy} W^2(x,y) \textrm{Var}(I_{xy}) = n \sum_{xy} W^2(x,y).$$
So we just need to get that last sum, which is just the number of pixels inside the aperture for the tophat.
Step5: (c) Now create a plot of the S/N level of the flux measurement vs the radius R of the aperture. Here the signal is the mean, and the noise the std deviation, of the tophat_flux of many noisy measurements of this galaxy. You can use either an analytic or numeric estimate of these quantities. Report what the optimal tophat S/N is, and what R achieves it.
Step6: (d) Repeat part (c), but this time use a Gaussian aperture whose width $\sigma_w$ you vary to optimize the S/N ratio of the aperture flux, i.e. a function gaussian_flux(img,sigma_w) is needed. Which performs better, the optimized tophat or the optimized Gaussian?
Step7: Exercise 2
Step8: (b) Using either your Gaussian or your tophat aperture code, plot the measured $g-r$ color of the galaxy as a function of the size of the aperture. Since the true color is zero, this measurement is the size of the systematic error that is being made in color because of mismatched pre-seeing apertures.
Step9: We can see here that a naive use of "matched" apertures can cause significant spurious color, even when the aperture has a sigma that is many times that of the galaxy and PSF. But the tophat does better. So without any kind of PSF matching, we have to use algorithms with non-optimal S/N in order to approach true colors.
Exercise 3
Step10: (b) Use this to calculate the best achievable measurement accuracy on $e$ for our standard image.
Step11: (c) Make a graph showing how the optimal $\sigma_e$ varies as the size $\sigma_{\rm PSF}$ of the Gaussian PSF varies from being $0.2\times r_0$ to being $3\times r_0.$. What's the lesson here? | Python Code:
# Load the packages we will use
import numpy as np
import astropy.io.fits as pf
import astropy.coordinates as co
from matplotlib import pyplot as pl
import scipy.fft as fft
%matplotlib inline
Explanation: Practice with galaxy photometry and shape measurement
To accompany galaxy-measurement lecture from the LSSTC Data Science Fellowship Program, July 2020.
All questions and corrections can be directed to me at [email protected]
Enjoy!
Gary Bernstein, 16 July 2020
End of explanation
def addBackground(image, variance):
# Add Gaussian noise with given variance to each pixel of the image
image += np.random.normal(scale=np.sqrt(variance),size=image.shape)
return
n_pix = 64
xy=np.indices( (n_pix,n_pix),dtype=float)
x = xy[1].copy()- n_pix/2
y = xy[0].copy()- n_pix/2
pl.imshow(x,origin='lower',interpolation='nearest')
pl.title("This is a plot of x coordinate")
pl.colorbar()
# Here is our elliptical exponential galaxy drawing function
# It is always centered on the pixel just above right of the image center.
def drawDisk(r0=4.,flux=1.,e=0.,sigma_psf=3.,n_pix=n_pix):
# n_pix must be even.
# Build arrays holding the (ky,kx) values
# irfft2 wants array of this shape:
tmp = np.ones((n_pix,n_pix//2+1),dtype=float)
freqs = np.arange(-n_pix//2,n_pix//2)
freqs = (2 * np.pi / n_pix)*np.roll(freqs,n_pix//2)
kx = tmp * freqs[:n_pix//2+1]
ky = tmp * freqs[:,np.newaxis]
# Calculate the FT of the PSF
ft = np.exp( (kx*kx+ky*ky)*(-sigma_psf*sigma_psf/2.))
# Produce the FT of the exponential - for the circular version,
# it's (1+k^2 r_0^2)**(-3/2)
# factors to "ellipticize" and scale the k's:
a = np.power((1+e)/(1-e),0.25)
ksqp1 = np.square(r0*kx*a) + np.square(r0*ky/a) + 1
ft *= flux / (ksqp1*np.sqrt(ksqp1))
# Now FFT back to real space
img = fft.irfft2(ft)
# And roll the origin to the center
return np.roll(img, (n_pix//2,n_pix//2),axis=(0,1))
# As a test, let's draw an image with a small PSF size and
# see if it really is exponential.
# With e>0, it should be extended along x axis
r0=4.
img = drawDisk(e=0.2,flux=1e5,sigma_psf=3.,r0=r0)
pl.imshow(img,origin='lower',interpolation='nearest')
pl.title("Is it stretched along x?")
# And also a plot of log(flux) vs x or y should look linear
pl.figure()
pl.plot(np.arange(-32,32)/r0,np.log(img[:,32]),label='Y')
pl.plot(np.arange(-32,32)/r0,np.log(img[32,:]),label='X')
pl.legend()
pl.title("Are the lines straight and near unity slope?")
pl.xlabel("(x or y)/r0")
pl.ylabel("log(I)")
pl.grid()
Explanation: Useful tools
For our galaxy measurement practice, we'll be testing out some of our techniques on exponential profile galaxies, which are defined by
$$ I(x,y) \propto e^{-r/r_0},$$
where $r_0$ is the "scale length," and we'll allow our galaxy to potentially be elliptical shaped by setting
$$ r^2 = (1-e^2) \left[ \frac{(x-x_0)^2}{1-e} + \frac{(y-y_0)^2}{1+e}\right].$$
To reduce the complexity of our problem, I'm only letting the galaxy have the $e_+$ form of ellipticity, where $e>0$ ($e<0$) means the galaxy is stretched along the $x$ ($y$) axis.
We're also going to assume that our galaxy is viewed through a circular Gaussian PSF:
$$ T(x,y) \propto e^{-(x^2+y^2)/2\sigma_{\rm PSF}^2}.$$
The function drawDisk below is provided to draw an image of an elliptical exponential galaxy as convolved with a Gaussian PSF. You don't have to understand how it works to do these exercises. But you might be interested (since this is how the GalSim galaxy simulation package works): the galaxy and the PSF are first "drawn" in Fourier space, and then multiplied, since a convolution in real space is multiplication in Fourier space (which is much faster). Then we use a Fast Fourier Transform (FFT) to get our image back in real space.
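In symbols, that is the convolution theorem, $\mathcal{F}[I \ast T](\vec k) = \mathcal{F}[I](\vec k)\,\mathcal{F}[T](\vec k)$: one multiplication per Fourier mode replaces the full 2D convolution sum.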
I also include in this notebook two helpful things from the astrometry notebook:
* The function addBackground which will add background noise of a chosen level (denoted as $n$ in the lecture notes) to any image.
* The x and y arrays that give the location values of each pixel. In this set of exercises, we'll work exclusively with 64x64 images. Also I am going to redefine the coordinate system so that $(x,y)=(0,0)$ is actually at element [32,32] of the array.
End of explanation
r0 = 4.
e = 0.
flux = 1e4
sigma_psf = 2.
# First construct a weight-function class for tophat
class Tophat:
# A tophat weight function of radius R
def __init__(self,R):
self.R = R
return
def __call__(self,dx,dy):
# Given equal-shaped arrays dx=x-x0, dy=y-y0,
# returns an array of the same shape giving weight function.
# Calculate distance of a pixel from the center
rsq = np.square(dx) + np.square(dy)
# Now return the weight
return np.where(rsq<=self.R*self.R, 1., 0.)
# Now I'm going to write a generic aperture_flux instead of tophat_flux
def aperture_flux(img, weight, x=x, y=y, x0=0., y0=0.):
# Return aperture flux sum of image given weight function.
# x and y are coordinates assigned to each array element.
# x0,y0 are center of aperture. Which in this exercise will
# be zero sinze x and y are already centered.
# Super-simple!
return np.sum(img*weight(x,y))
star = drawDisk(r0=r0, e=e, flux=flux,sigma_psf=sigma_psf)
Rlist = np.arange(5,30.1,1.)
flux_fraction = []
for R in Rlist:
weight = Tophat(R)
flux_fraction.append(aperture_flux(star,weight) / flux)
pl.plot(Rlist, flux_fraction,'ro')
pl.xlabel('Aperture radius (pixels)')
pl.ylabel('Fraction of flux')
pl.grid()
index_99 = np.where(np.array(flux_fraction)>0.99)[0][0]
print('Aperture radius / scale length for >99% of flux:',Rlist[index_99] / r0)
Explanation: Exercise 1: Aperture photometry
Here we'll try out a few forms of aperture photometry and see how they compare in terms of the S/N ratios they provide on the galaxy flux.
(a) Write a function tophat_flux(img,R) which implements a simple tophat aperture sum of flux in all pixels within radius R of the center of the galaxy. We will keep the center of our galaxy fixed at pixel [32,32] so you don't have to worry about iterating to find the centroid.
Draw a noiseless version of a circular galaxy with the characteristics in the cell below. Then use your tophat_flux function to plot the "curve of growth" for this image, with R on the x axis going from 5 to 30 pixels, and the y axis showing the fraction of the total flux that falls in your aperture.
How many scale radii do we need the aperture to be to miss <1% of the flux?
End of explanation
# Make our noiseless galaxy image
noiseless = drawDisk(r0=r0,e=e,flux=flux,sigma_psf=sigma_psf)
# Add noise to it
noisy = noiseless.copy()
n_bg = 100
addBackground(noisy,variance=n_bg)
# show it
pl.imshow(noisy,origin='lower',interpolation='nearest')
Explanation: (b) Next let's add some background noise to our image, say n_bg=100.
First, make one such noisy version of your galaxy and imshow it.
Then, using analytic methods, estimate what the variance of your aperture flux measurements will be when R=10.
Finally, make 1000 different realizations of your noisy galaxy and measure their tophat_flux to see whether the real variance of the flux measurements matches your prediction.
End of explanation
weight = Tophat(10.)
expected_variance = n_bg * np.sum(np.square(weight(x,y)))
print('Analytic variance: ',expected_variance)
# Now measure a pile of images with the same weight function.
# I'll use the same noiseless star and keep adding fresh noise to it.
fluxes = []
nTrials = 1000
for i in range(nTrials):
noisy = noiseless.copy()
addBackground(noisy,variance=n_bg)
fluxes.append(aperture_flux(noisy,weight))
print("Empirical variance:",np.var(fluxes))
Explanation: Since the variance of each pixel is n_bg$=n$, the variance of our aperture flux, generically, is
$$ \textrm{Var}(f) = \textrm{Var} \sum_{xy} I_{xy} W(x,y) = \sum_{xy} W^2(x,y) \textrm{Var}(I_{xy}) = n \sum_{xy} W^2(x,y).$$
So we just need to get that last sum, which is just the number of pixels inside the aperture for the tophat.
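(For the Gaussian aperture used in part (d), the same sum has a simple closed form: $W^2(x,y) = e^{-r^2/\sigma_w^2}$, so $\sum_{xy} W^2(x,y) \approx \int e^{-r^2/\sigma_w^2}\, dA = \pi\sigma_w^2$, and the noise on the aperture flux grows in proportion to $\sigma_w$.)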
End of explanation
# I'll go with the analytic variance since that seems to work and it will
# be faster.
# I have the noiseless measured fluxes already in the flux_fraction array.
sn_ratio = []
for R,f in zip(Rlist,flux_fraction):
weight = Tophat(R)
expected_variance = n_bg * np.sum(np.square(weight(x,y)))
sn_ratio.append( f * flux / np.sqrt(expected_variance))
# Now plot the results
pl.plot(Rlist, sn_ratio,'ro')
pl.xlabel('Aperture radius (pixels)')
pl.ylabel('Flux S/N ratio')
pl.grid()
# Where's the best?
i = np.argmax(sn_ratio)
print("Best S/N is",sn_ratio[i],"at aperture radius",Rlist[i])
Explanation: (c) Now create a plot of the S/N level of the flux measurement vs the radius R of the aperture. Here the signal is the mean, and the noise the std deviation, of the tophat_flux of many noisy measurements of this galaxy. You can use either an analytic or numeric estimate of these quantities. Report what the optimal tophat S/N is, and what R achieves it.
End of explanation
# I can do this quickly by making a Gaussian Aperture class and plugging it into
# my aperture_flux code
class GaussAp:
# A Gaussian weight function of width sigma
def __init__(self,sigma):
self.sigma = sigma
return
def __call__(self,dx,dy):
# Given equal-shaped arrays dx=x-x0, dy=y-y0,
# returns an array of the same shape giving weight function.
# Calculate distance of a pixel from the center
rsq = np.square(dx) + np.square(dy)
# Now return the weight
return np.exp(-rsq/(2.*self.sigma*self.sigma))
sigma_list = np.arange(1.0,10.,0.25)
sn_ratio = []
for s in sigma_list:
weight = GaussAp(s)
gauss_flux = aperture_flux(noiseless,weight)
expected_variance = n_bg * np.sum(np.square(weight(x,y)))
sn_ratio.append( gauss_flux / np.sqrt(expected_variance))
# Now plot the results
pl.plot(sigma_list, sn_ratio,'ro')
pl.xlabel('Gaussian aperture sigma (pixels)')
pl.ylabel('Flux S/N ratio')
pl.grid()
# Where's the best?
i = np.argmax(sn_ratio)
print("Best S/N is",sn_ratio[i],"at aperture sigma",sigma_list[i])
# The Gaussian aperture yields about 15% higher S/N, equivalent to 30% more exposure time!
Explanation: (d) Repeat part (c), but this time use a Gaussian aperture whose width $\sigma_w$ you vary to optimize the S/N ratio of the aperture flux, i.e. a function gaussian_flux(img,sigma_w) is needed. Which performs better, the optimized tophat or the optimized Gaussian?
End of explanation
img_g = drawDisk(r0=r0,e=e,flux=flux,sigma_psf=2.5)
img_r = drawDisk(r0=r0,e=e,flux=flux,sigma_psf=2.0)
pl.imshow(img_g-img_r,origin='lower',interpolation='nearest')
pl.colorbar()
Explanation: Exercise 2: Spurious color
This time let's consider that we want to measure an accurate $g-r$ color for our galaxy, but the seeing is $\sigma_{\rm PSF}=2$ pixels in the $r$ image but $\sigma_{\rm PSF}=2.5$ pixels in the $g$ image. Let's see how the size of our aperture biases our color measurement.
(a) Draw a noiseless $g$-band and a noiseless $r$-band image of our galaxy. Let's assume that the true color $g-r \equiv 2.5\log_{10}(f_r/f_g) = 0,$ i.e. that the $g$ and $r$ fluxes of the galaxy are both equal to our nominal flux. Plot the difference between the two images: are they the same?
End of explanation
#I'll plot Gaussian apertures since they got better S/N
# Use our 2 noiseless images
color = []
for s in sigma_list:
weight = GaussAp(s)
flux_g = aperture_flux(img_g,weight)
flux_r = aperture_flux(img_r,weight)
color.append(-2.5*np.log10(flux_g/flux_r))
# Now plot the results
pl.plot(sigma_list, color,'ro')
pl.xlabel('Gaussian aperture sigma (pixels)')
pl.ylabel('Spurious color (mag)')
pl.grid()
# What the heck, let's do Tophats too:
pl.figure()
color = []
for R in Rlist:
weight = Tophat(R)
flux_g = aperture_flux(img_g,weight)
flux_r = aperture_flux(img_r,weight)
color.append(-2.5*np.log10(flux_g/flux_r))
# Now plot the results
pl.plot(Rlist, color,'bs')
pl.xlabel('Tophat aperture radius (pixels)')
pl.ylabel('Spurious color (mag)')
pl.grid()
Explanation: (b) Using either your Gaussian or your tophat aperture code, plot the measured $g-r$ color of the galaxy as a function of the size of the aperture. Since the true color is zero, this measurement is the size of the systematic error that is being made in color because of mismatched pre-seeing apertures.
End of explanation
de = 0.01
img1 = drawDisk(r0=r0,flux=flux,sigma_psf=sigma_psf,e=+de)
img2 = drawDisk(r0=r0,flux=flux,sigma_psf=sigma_psf,e=-de)
dI_de = (img1-img2)/(2*de)
pl.imshow(dI_de,origin='lower',interpolation='nearest')
pl.colorbar()
pl.title('dI/de')
# Look at that! The signature of WL shear is a quadrupole pattern, which is matched
# and therefore picked out of an image by the (x^2-y^2) moment!
Explanation: We can see here that a naive use of "matched" apertures can cause significant spurious color, even when the aperture has a sigma that is many times that of the galaxy and PSF. But the tophat does better. So without any kind of PSF matching, we have to use algorithms with non-optimal S/N in order to approach true colors.
Exercise 3: Degradation of ellipticity measurements by seeing
It's hard to measure the shape of a galaxy that is not resolved by the PSF. That means that poorly-resolved galaxies are less useful for detecting weak-lensing (WL) shear. Let's see if we can quantify this by using the Fisher matrix to determine the best possible measurement accuracy on the parameter $e$ of our model (we'll make things easy by holding all other parameters of the galaxy model as fixed).
Remember how the Fisher matrix works: for an image signal $I_{xy}$ and noise $\sigma_{xy}$ in each pixel, the Fisher information for a parameter $\theta$ is
$$ F_{\theta\theta} = \sum_{xy} \frac{1}{\sigma^2_{xy}} \left(\frac{\partial I_{xy}}{\partial\theta}\right)^2.$$
Here we're interested in $\theta=e$.
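The Cramér-Rao bound then gives the best achievable measurement accuracy as $\sigma_e \ge 1/\sqrt{F_{ee}}$, which is what we evaluate numerically in part (b).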
(a) Draw two versions of our standard galaxy, with $e = \pm0.01.$ Use these to calculate and plot the quantity we need, $\frac{\partial I_{xy}}{\partial e}.$ Comment on how this picture relates to the fact that we like to measure WL shear using the moment of $x^2-y^2$.
End of explanation
# We take the Fisher sum
Fee = np.sum(dI_de * dI_de) / n_bg
# ...and then the Cramer-Rao bound on sigma_e is
print("Lower bound on sigma_e:",1./np.sqrt(Fee))
Explanation: (b) Use this to calculate the best achievable measurement accuracy on $e$ for our standard image.
End of explanation
# I want to wrap all of the above into a function of sigma_psf:
def sigma_opt(sigma_psf):
de = 0.01
img1 = drawDisk(r0=r0,flux=flux,sigma_psf=sigma_psf,e=+de)
img2 = drawDisk(r0=r0,flux=flux,sigma_psf=sigma_psf,e=-de)
dI_de = (img1-img2)/(2*de)
Fee = np.sum(dI_de * dI_de) / n_bg
return 1./np.sqrt(Fee)
# Now make our plot
psfs = np.linspace(0.2*r0,3*r0,40)
# use our function to get results
sigma_e = [sigma_opt(sigma_psf) for sigma_psf in psfs]
# And plot
pl.plot(psfs,sigma_e,'r-',lw=3)
pl.xlabel("PSF sigma (pixels)")
pl.ylabel('Optimal ellipticity error')
pl.ylim(0)
pl.xlim(0)
pl.grid()
pl.annotate('Scale length',(r0,0.15),xytext=(r0,0.25),ha='center',
arrowprops={'arrowstyle':'->'})
Explanation: (c) Make a graph showing how the optimal $\sigma_e$ varies as the size $\sigma_{\rm PSF}$ of the Gaussian PSF varies from being $0.2\times r_0$ to being $3\times r_0$. What's the lesson here?
End of explanation |
13,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
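As a purely illustrative sketch for a model that exchanges several fluxes (this assumes repeated DOC.set_value calls accumulate entries for a 0.N property, which is not confirmed in this notebook):
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")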
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
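A hypothetical free-text answer for a prognostic-variable listing might read as follows; the variable names are invented examples:
# EXAMPLE (illustrative only)
DOC.set_value("Soil temperature, liquid soil moisture and frozen soil moisture in each soil layer")
End of example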
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
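For a single-valued ENUM (cardinality 1.1) the value must be one of the listed choices; the pick below is arbitrary and only shows the mechanics:
# EXAMPLE (illustrative only) - use the choice that matches your model
DOC.set_value("Explicit diffusion")
End of example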
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
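For an ENUM with cardinality 0.N several choices can be recorded. Assuming repeated set_value calls each append one value (as the PROPERTY VALUE(S) header suggests), an illustrative entry would be:
# EXAMPLE (illustrative only) - assumes each call appends one value to the list
DOC.set_value("Gravity drainage")
DOC.set_value("Horton mechanism")
End of example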
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
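The same pattern applies to a 1.N ENUM such as this one; the values below are an arbitrary illustration and the repeated-call convention is an assumption based on the PROPERTY VALUE(S) header:
# EXAMPLE (illustrative only)
DOC.set_value("snow interception")
DOC.set_value("snow melting")
End of example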
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? Horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.
Step1: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.
Step2: It's possible to remove all drawings from the map
Step3: Let's draw a second map and try to import this GeoJSON data into it.
Step4: We can use link to synchronize traitlets of the two maps
Step5: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.
Now let's add a DrawControl to this second map. For fun we will disable lines and enable circles as well and change the style a bit. | Python Code:
dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},
rectangle={'shapeOptions': {'color': '#0000FF'}},
circle={'shapeOptions': {'color': '#0000FF'}},
circlemarker={},
)
def handle_draw(self, action, geo_json):
print(action)
print(geo_json)
dc.on_draw(handle_draw)
m.add_control(dc)
Explanation: Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.
End of explanation
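For reference, the geo_json argument received by the handler is a GeoJSON Feature dictionary carrying the geometry and the embedded style mentioned above; a sketch of its typical shape is shown below, but the exact fields depend on the drawn shape and the ipyleaflet version, so treat this structure as indicative only:
# EXAMPLE (illustrative only) - approximate structure of a drawn polygon event
example_geo_json = {
    'type': 'Feature',
    'properties': {'style': {'color': '#0000FF', 'weight': 4, 'fillOpacity': 0.2}},
    'geometry': {'type': 'Polygon',
                 'coordinates': [[[2.29, 48.85], [2.35, 48.85], [2.35, 48.90], [2.29, 48.85]]]}
}
End of example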
dc.last_action
dc.last_draw
Explanation: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.
End of explanation
dc.clear_circles()
dc.clear_polylines()
dc.clear_rectangles()
dc.clear_markers()
dc.clear_polygons()
dc.clear()
Explanation: It's possible to remove all drawings from the map
End of explanation
m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px'))
m2
Explanation: Let's draw a second map and try to import this GeoJSON data into it.
End of explanation
map_center_link = link((m, 'center'), (m2, 'center'))
map_zoom_link = link((m, 'zoom'), (m2, 'zoom'))
new_poly = GeoJSON(data=dc.last_draw)
m2.add_layer(new_poly)
Explanation: We can use link to synchronize traitlets of the two maps:
End of explanation
dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={},
circle={'shapeOptions': {'color': '#0000FF'}})
m2.add_control(dc2)
Explanation: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.
Now let's add a DrawControl to this second map. For fun we will disable lines and enable circles as well and change the style a bit.
End of explanation |
13,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(📗) ipyrad Cookbook
Step1: Connect to cluster
The code can be easily parallelized across cores on your machine, or across many nodes of an HPC cluster, using the ipyparallel library (see our ipyparallel tutorial). An ipcluster instance must be running for you to connect to; you can start one by running 'ipcluster start' in a terminal.
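A minimal sketch of the connection step, assuming an ipcluster instance is already running locally (the number of engines reported is whatever you started):
import ipyparallel as ipp
ipyclient = ipp.Client()     # connect to the running ipcluster instance
print(len(ipyclient))        # number of engines available for parallel work
End of example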
Step2: Load in your .loci data file and a tree hypothesis
We are going to use the shape of our tree topology hypothesis to generate 4-taxon tests to perform, therefore we'll start by looking at our tree and making sure it is properly rooted.
Step3: Short tutorial
Step4: Look at the results
By default we do not attach the names of the samples that were included in each test to the results table since it makes the table much harder to read, and we wanted it to look very clean. However, this information is readily available in the .test() attribute of the baba object as shown below. Also, we have made plotting functions to show this information clearly as well.
Step5: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (tests 12-19). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.
Step6: generating tests
Because tests are generated based on a tree file, it will only generate tests that fit the topology of the tree. For example, the entries below generate zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.
Step7: If you want to get results for a test that does not fit on your tree you can always write the test out by hand instead of auto-generating it from the tree. Doing it this way is fine when you have few tests to run, but becomes burdensome when writing many tests.
Step8: Further investigating results with 5-part tests
You can also perform partitioned D-statistic tests like below. Here we are testing the direction of introgression. If the two thamnophila subspecies are in fact sister species then they would be expected to share derived alleles that arose in their ancestor and which would be introduced together if either one of them introgressed into a P. rex taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two thamno subspecies share introgressed alleles that are present in two samples of rex relative to sample "35236_rex".
More on this further below in this notebook.
Step9: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests on, like below.
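A sketch of what creating and copying a baba object might look like; the .loci file path is a hypothetical placeholder and the copy() calls simply follow the description above:
import ipyrad.analysis as ipa
# the data path below is a made-up example - point it at your own .loci file
aa = ipa.baba(data="./analysis-ipyrad/outfiles/example_min4.loci")
bb = aa.copy()    # copies inherit the parameter settings of 'aa'
cc = aa.copy()
End of example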
Step10: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
Step11: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', which each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
Step12: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
Step13: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them from the .tests attribute as a dictionary, or as .taxon_table which returns it as a dataframe. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
Step14: Auto-generating tests
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
Step15: Running the tests
The .run() command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The .plot() function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the .results_table attribute.
Step16: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
Step17: (optional): root the tree
Step18: Interpreting results
You can see in the results_table below that the D-statistics range from about 0.0 to 0.15 in these tests. These values are not terribly informative on their own, and so we instead generally focus on the Z-score, which represents how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with roughly equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we set up above).
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
import toyplot
print(ipa.__version__)
print(toyplot.__version__)
print(toytree.__version__)
Explanation: (📗) ipyrad Cookbook: abba-baba admixture tests
The ipyrad.analysis Python module includes functions to calculate abba-baba admixture statistics (including several variants of these measures), to perform significance tests, and to produce plots of results. All code in this notebook is written in Python, which you can copy/paste into an IPython terminal to execute, or, preferably, run in a Jupyter notebook like this one. See the other analysis cookbooks for instructions on using Jupyter notebooks. All of the software required for this tutorial is included with ipyrad (v.6.12+). Finally, we've written functions to generate plots for summarizing and interpreting results.
Load packages
End of explanation
ipyclient = ipp.Client()
len(ipyclient)
Explanation: Connect to cluster
The code can be easily parallelized across cores on your machine, or across many nodes of an HPC cluster, using the ipyparallel library (see our ipyparallel tutorial). An ipcluster instance must be running for you to connect to; you can start one by running 'ipcluster start' in a terminal.
End of explanation
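If you would rather launch the cluster from this notebook than from a separate terminal, a minimal sketch along these lines can work; the engine count and the pause used to wait for the engines to register are assumptions you may need to adjust for your machine:
import subprocess
import time
## start a background ipcluster with 4 engines (adjust --n for your machine)
subprocess.Popen(["ipcluster", "start", "--n=4", "--daemonize"])
time.sleep(15)
## then connect just as above
ipyclient = ipp.Client()
print(len(ipyclient))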
## ipyrad and raxml output files
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bipartitions.pedic"
## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(
height=350,
width=400,
node_labels=tre.get_node_values("support")
)
## store rooted tree back into a newick string.
newick = tre.tree.write()
Explanation: Load in your .loci data file and a tree hypothesis
We are going to use the shape of our tree topology hypothesis to generate 4-taxon tests to perform, so we'll start by looking at our tree and making sure it is properly rooted.
End of explanation
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno"],
})
## show the first 3 tests
bb.tests[:3]
## run all tests linked to bb
bb.run(ipyclient)
## show first 5 results
bb.results_table.head()
Explanation: Short tutorial: calculating abba-baba statistics
To give a gist of what this code can do, here is a quick tutorial version; each step is explained in greater detail below. We first create a 'baba' analysis object that is linked to our data file (in this example we name the variable bb). Then we tell it which tests to perform, here by automatically generating a number of tests using the generate_tests_from_tree() function. And finally, we calculate the results and plot them.
End of explanation
## save all results table to a tab-delimited CSV file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")
## show the results table sorted by index score (Z)
sorted_results = bb.results_table.sort_values(by="Z", ascending=False)
sorted_results.head()
## get taxon names in the sorted results order
sorted_taxa = bb.taxon_table.iloc[sorted_results.index]
## show taxon names in the first few sorted tests
sorted_taxa.head()
Explanation: Look at the results
By default we do not attach the names of the samples included in each test to the results table, since that would make the table much harder to read. This information is readily available in the .tests attribute of the baba object, as shown below, and our plotting functions also display it clearly.
End of explanation
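If you prefer a single table that shows each test's statistics next to the sample names it used, one quick sketch (using pandas, which the results are already stored in) is to concatenate the two tables column-wise on their shared test index:
import pandas as pd
## place the taxon names and the statistics side by side, one row per test
full_table = pd.concat([bb.taxon_table, bb.results_table], axis=1)
full_table.head()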
## plot results on the tree
bb.plot(height=850, width=700, pct_tree_y=0.2, pct_tree_x=0.5, alpha=4.0);
Explanation: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (tests 12-19). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 20, 24, 28, 34, and 35). Take note, the tests are indexed starting at 0.
End of explanation
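To focus on the strongest signals described above you can filter the results table by Z-score; the cutoff of 3 used below is a common rule of thumb, not a fixed rule:
## keep only tests whose Z-score exceeds an (assumed) cutoff of 3
strong = bb.results_table[bb.results_table.Z > 3]
## and look up which taxa were involved in those tests
bb.taxon_table.loc[strong.index]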
## this is expected to generate zero tests
aa = bb.copy()
aa.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
})
Explanation: generating tests
Because tests are generated from a tree file, only tests that fit the topology of that tree will be produced. For example, the entries below generate zero possible tests because the two samples entered for P3 (the two thamnophila subspecies) are paraphyletic on the tree topology, and therefore cannot form a clade together.
End of explanation
## writing tests by hand for a new object
aa = bb.copy()
aa.tests = [
{"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["39618_rex", "38362_rex"]},
{"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno", "30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["35236_rex"]},
]
## run the tests
aa.run(ipyclient)
aa.results_table
Explanation: If you want to get results for a test that does not fit on your tree you can always write the test out by hand instead of auto-generating it from the tree. Doing it this way is fine when you have only a few tests to run, but becomes burdensome when writing many tests.
End of explanation
## further investigate with a 5-part test
cc = bb.copy()
cc.tests = [
{"p5": ["32082_przewalskii", "33588_przewalskii"],
"p4": ["33413_thamno"],
"p3": ["30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["39618_rex", "38362_rex"]},
{"p5": ["32082_przewalskii", "33588_przewalskii"],
"p4": ["33413_thamno"],
"p3": ["30556_thamno"],
"p2": ["40578_rex", "35855_rex"],
"p1": ["35236_rex"]},
]
cc.run(ipyclient)
## the partitioned D results for two tests
cc.results_table
## and view the 5-part test taxon table
cc.taxon_table
Explanation: Further investigating results with 5-part tests
You can also perform partitioned D-statistic tests like below. Here we are testing the direction of introgression. If the two thamnophila subspecies are in fact sister species then they would be expected to share derived alleles that arose in their ancestor, and those shared alleles would be introduced together if either one of them introgressed into a P. rex taxon. As you can see, test 0 shows no evidence of introgression, whereas test 1 shows that the two thamno subspecies share introgressed alleles that are present in two samples of rex relative to sample "35236_rex".
More on this further below in this notebook.
End of explanation
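As with the standard tests, the partitioned results and their taxon table are ordinary data frames, so they can be written out for later inspection (the file names below are just examples):
## save the partitioned D-statistic results and the taxa used in each test
cc.results_table.to_csv("cc.partitioned-d.csv", sep="\t")
cc.taxon_table.to_csv("cc.partitioned-d-taxa.csv", sep="\t")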
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)
## create two other copies
bb = aa.copy()
cc = aa.copy()
## print these objects
print(aa)
print(bb)
print(cc)
Explanation: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you can create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests on, like below.
End of explanation
aa.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["29154_superba"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
bb.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["30686_cyathophylla"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
cc.tests = [
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41954_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41478_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
]
Explanation: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
End of explanation
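Because each test is just a Python dictionary, you can also build a list of tests programmatically. The sketch below makes a fresh copy (named ee here purely for illustration) and loops over several candidate P3 samples while holding the other taxa fixed:
## a hypothetical object used only to illustrate building tests in a loop
ee = aa.copy()
p3_candidates = ["29154_superba", "30686_cyathophylla", "41954_cyathophylloides"]
ee.tests = [
    {"p4": ["32082_przewalskii", "33588_przewalskii"],
     "p3": [p3],
     "p2": ["33413_thamno"],
     "p1": ["40578_rex"]}
    for p3 in p3_candidates
]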
## print params for object aa
aa.params
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4":2, "p3":1, "p2":1, "p1":1}
bb.params
Explanation: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', which each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
End of explanation
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
Explanation: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
End of explanation
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)
## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")
## show the results in notebook
cc.results_table
Explanation: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them from the .tests attribute as a dictionary, or as .taxon_table which returns it as a dataframe. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
End of explanation
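To pull up a single test by its index, standard pandas indexing works on both tables; for example, test 1 from the tables above:
## the statistics for test 1 only
cc.results_table.loc[1]
## and the taxa that were used in that test
cc.taxon_table.loc[1]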
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick
## generate all possible tests
dd.generate_tests_from_tree()
## a dict of constraints
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["40578_rex", "35855_rex"],
}
## generate tests with contraints
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=False,
)
## 'exact' contrainst are even more constrained
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=True,
)
Explanation: Auto-generating tests
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
End of explanation
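Before running anything you can check how many tests a given constraint set produced, since .tests is just a list:
## how many tests did the constrained generator produce?
print(len(dd.tests))
## peek at the first one
dd.tests[0]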
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4);
dd.results_table
Explanation: Running the tests
The .run() command will run the tests linked to your analysis object. An ipyclient object is required to distribute the jobs in parallel. The .plot() function can then optionally be used to visualize the results on a tree. Or, you can simply look at the results in the .results_table attribute.
End of explanation
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"
## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
Explanation: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do at least need a hypothesis for how your samples are related in order to set up meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
End of explanation
## load in the tree
tre = toytree.tree(newick)
## set the outgroup either as a list or using a wildcard selector
tre.root(outgroup=["32082_przewalskii", "33588_przewalskii"])
tre.root(wildcard="prz")
## draw the tree
tre.draw(width=400)
## save the rooted newick string back to a variable and print
newick = tre.newick
Explanation: (optional): root the tree
For abba-baba tests you will pretty much always want your tree to be rooted, since the test relies on an assumption about which alleles are ancestral. You can use our simple tree plotting library toytree to root your tree. This library uses Toyplot as its plotting backend, and ete3 as its tree manipulation backend.
Below I load in a newick string and root the tree on the two P. przewalskii samples using the root() function. You can either enter the names of the outgroup samples explicitly or enter a wildcard to select them. We show the rooted tree from a tetrad analysis below. The newick string of the rooted tree can be saved or accessed by the .newick attribute, like below.
End of explanation
## show the results table
print(dd.results_table)
Explanation: Interpreting results
You can see in the results_table below that the D-statistics range from about 0.0 to 0.15 in these tests. These values are not terribly informative on their own, and so we instead generally focus on the Z-score, which represents how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with roughly equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we set up above).
End of explanation |
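If you want to attach an approximate p-value to each Z-score, a sketch under the assumption that the bootstrap distribution is roughly normal (the usual justification for reporting Z) looks like this; note that scipy is an extra dependency not used elsewhere in this notebook:
import scipy.stats as st
## two-sided p-values from the Z-scores, assuming approximate normality
pvals = 2 * st.norm.sf(dd.results_table.Z)
dd.results_table.assign(pval=pvals)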
13,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
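For example, a single author entry looks like the call below; the name and email are placeholders, not real values:
# Hypothetical placeholder values -- replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")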
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
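Since this property has cardinality 0.N, several categories can be recorded; in filled-in versions of these notebooks that is typically done with one set_value call per selected category (an assumption here, following the template comment above):
# One call per selected flux category (illustrative selection only)
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")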
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that the snow albedo depends on*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
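BOOLEAN properties follow the same unquoted pattern; a hypothetical completed cell simply passes True or False for the model being described:
# Hypothetical example only - answer for your own model
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
DOC.set_value(True)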
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Installation instructions
First, clone cmac2.0 into your own directory
Step2: This will start a distributed cluster on the arm_high_mem queue. I have set it to have 6 adi_cmac2 processes per node,
with 36 total processes being run. Feel free to change these values as you see fit. You will need to change the environment name and paths to what you named your adi_cmac2 environment on your machine. You will also need to change the path to your conda.sh.
Step3: Run the above code to start the distributed client, and then use the output of this cell to determine whether your client got started. You should have nonzero resources available if the cluster has started.
Step5: This creates the list of dictionaries mapped onto exec_adi when adi_cmac2 is run on the cluster. | Python Code:
import subprocess
import os
import sys
from dask_jobqueue import PBSCluster
from distributed import Client, progress
from datetime import datetime, timedelta
from pkg_resources import load_entry_point
from distributed import progress
def exec_adi(info_dict):
This function will call adi_cmac2 from within Python. It takes in a dictionary where the inputs to adi_cmac2 are
stored.
Parameters
----------
info_dict: dict
A dictionary with the following keywords:
'facility' = The facility marker (i.e. 'sgp', 'nsa', etc.)
'site' = The site marker (i.e. i4, i5, i6)
'start_date' = The start date as a string formatted YYYYMMDD
'end_date' = The end date as a string formatted YYYYMMDD
facility = info_dict['facility']
site = info_dict['site']
start_date = info_dict['start_date']
end_date = info_dict['end_date']
# Change this directory to where you want your adi logs stored
logs_dir = "/home/rjackson/adi_logs"
# Set the path to your datastream here!
os.environ["DATASTREAM_DATA"] = "/lustre/or-hydra/cades-arm/rjackson/"
logs_dir = logs_dir + "/" + site + start_date + "_" + end_date
if not os.path.isdir(logs_dir):
os.makedirs(logs_dir)
os.environ["LOGS_DATA"] = logs_dir
os.environ["PROJ_LIB"] = "/home/rjackson/anaconda3/envs/adi_env3/share/proj/"
# Set the path to the clutter file here!
os.environ["CMAC_CLUTTER_FILE"] = "/home/rjackson/cmac2.0/scripts/clutter201901.nc"
subprocess.call(("/home/rjackson/anaconda3/envs/adi_env3/bin/adi_cmac2 -D 1 -f " +
facility + " -s " + site + " -b " + start_date + " -e "+ end_date), shell=True)
Explanation: Installation instructions
First, clone cmac2.0 into your own directory:
git clone https://github.com/EVS-ATMOS/cmac2.0.git
Second: Create the environment from the cmac environment. I will call it cmac_env here:
cd cmac2.0
conda env create -f environment-3.6.yml
After that, we will install CyLP into the new environment:
conda activate cmac_env
module load gcc/6.3.0
export COIN_INSTALL_DIR=/path/to/anaconda3/envs/cmac_env
pip install git+https://github.com/jjhelmus/CyLP@py3
After this is done, the next step is to compile and install the ADI libraries. First, clone the adi_cmac2, adi_py, and adi_pyart_glue repositories from code.arm.gov and install them.
git clone https://code.arm.gov/adi_cmac2.git
git clone https://code.arm.gov/adi_py.git
git clone https://code.arm.gov/adi_pyart_glue.git
You will need to load the ADI module to build and install ADI into anaconda:
module load adi
Then install the 3 packages:
cd adi_py
python setup.py install
cd ..
cd adi_pyart_glue
python setup.py install
cd ..
cd adi_cmac2
python setup.py install
Finally, we need to set up the conda environment to load system libraries that are needed for adi on startup. To do this, we will edit the /path/to/anaconda3/envs/cmac_env/etc/conda/activate.d/env_var.sh and /path/to/anaconda3/envs/cmac_env/etc/conda/deactivate.d/env_var.sh. First, let us create them:
touch /path/to/anaconda3/envs/cmac_env/etc/conda/activate.d/env_var.sh
touch /path/to/anaconda3/envs/cmac_env/etc/conda/deactivate.d/env_var.sh
Put this in the contents of /path/to/anaconda3/envs/cmac_env/etc/conda/activate.d/env_var.sh:
#!/bin/bash
module load postgresql
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/software/user_tools/current/cades-arm/apps/lib64
export C_INCLUDE_PATH=$C_INCLUDE_PATH:/software/user_tools/current/cades-arm/apps/include:/software/dev_tools/swtree/cs400_centos7.5_pe2018/anaconda3/5.1.0/centos7.5_intel18.0.0/anaconda/pkgs/libnetcdf-4.6.1-he6cff42_8/include/
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH::/software/user_tools/current/cades-arm/apps/lib64/pkgconfig:/software/dev_tools/swtree/cs400_centos7.5_pe2018/anaconda3/5.1.0/centos7.5_intel18.0.0/anaconda/pkgs/libnetcdf-4.6.1-he6cff42_8/lib/pkgconfig/
And in /path/to/anaconda3/envs/cmac_env/etc/conda/deactivate.d/env_var.sh
#!/bin/bash
module unload postgresql
This will get all of the libraries you need to run adi_cmac2. Make sure to run adi_cmac2 from an arm_high_mem node before starting or it will not work.
To test whether adi_cmac2 is working, just type adi_cmac2 in the terminal. If it is installed correctly, the only error that should pop up is that no files were specified. Be sure that when you use adi_cmac2 you are on an arm_high_mem node or it will not be able to connect to the DMF.
Notebook to scale ADI onto stratus
Import all of the needed libraries
End of explanation
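Before submitting any jobs it can also be worth checking, from Python, that the adi_cmac2 entry point is actually visible in the active environment. A minimal sketch using the standard library (shutil.which simply returns None when the executable is not on the PATH):
# Optional sanity check: should print the path to adi_cmac2 if the install above worked
import shutil
print(shutil.which("adi_cmac2"))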
the_cluster = PBSCluster(processes=6, cores=36, queue="arm_high_mem",
walltime="3:00:00", resource_spec="qos=std",
job_extra=["-A arm", "-W group_list=cades-arm"],
env_extra=[". /home/rjackson/anaconda3/etc/profile.d/conda.sh", "conda activate adi_env3"])
the_cluster.scale(36)
client = Client(the_cluster)
client
Explanation: This will start a distributed cluster on the arm_high_mem queue. I have set it to have 6 adi_cmac2 processes per node,
with 36 total processes being run. Feel free to change these values as you see fit. You will need to change the environment name and paths to what you named your adi_cmac2 environment on your machine. You will also need to change the path to your conda.sh.
End of explanation
client
Explanation: Run the above code to start the distributed client, and then use the output of this cell to determine whether your client got started. You should have nonzero resources available if the cluster has started.
End of explanation
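If you prefer a programmatic check over reading the cell output, the scheduler can be queried directly; a small sketch using the standard dask.distributed client API:
# Should be greater than zero once the PBS jobs have come up; re-run after a short wait if the queue is slow
n_workers = len(client.scheduler_info()["workers"])
print("{} workers connected".format(n_workers))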
def make_date_list_dict_list(start_day, end_day):
This automatically generates a list of day inputs for the exec_adi function.
Parameters
----------
start_day: datetime
The start date
end_day:
The end date
Returns
-------
the_list: A list of dictionary inputs for exec_adi
cur_day = start_day
the_list = []
while(cur_day < end_day):
next_day = cur_day + timedelta(days=1)
temp_dict = {}
# Change these next two lines to fit your facility
temp_dict['facility'] = "I5"
temp_dict['site'] = "sgp"
temp_dict['start_date'] = cur_day.strftime("%Y%m%d")
temp_dict['end_date'] = next_day.strftime("%Y%m%d")
the_list.append(temp_dict)
cur_day = cur_day + timedelta(days=1)
return the_list
# Here we specify the dates that we want to process
date_list = make_date_list_dict_list(datetime(2019, 1, 1), datetime(2019,2,6))
# Run the cluster
futures = client.map(exec_adi, date_list)
# Put up a little progress bar!
progress(futures)
# This will make the tasks quit
del futures
cluster.stop_all_jobs()
Explanation: This creates the list of dictionaries mapped onto exec_adi when adi_cmac2 is run on the cluster.
End of explanation |
13,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix generation
Init symbols for sympy
Step1: Lame params
Step2: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step3: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step4: Christoffel symbols
Step5: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
Step6: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
Step7: Physical coordinates
$u_i=u_{[i]} H_i$
Step8: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step9: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
Step10: Mass matrix | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
init_printing()
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
Explanation: Matrix generation
Init symbols for sympy
End of explanation
h1 = Function("H1")
h2 = Function("H2")
h3 = Function("H3")
H1 = h1(alpha1, alpha2, alpha3)
H2 = S(1)
H3 = h3(alpha1, alpha2, alpha3)
Explanation: Lame params
End of explanation
G_up = getMetricTensorUpLame(H1, H2, H3)
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
G_down = getMetricTensorDownLame(H1, H2, H3)
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
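A quick consistency check on the two metric matrices (a sketch, assuming getMetricTensorUpLame and getMetricTensorDownLame return sympy matrices of contravariant and covariant components): their product should simplify to the 3x3 identity, since $g^{ik} g_{kj} = \delta^i_j$.
# Expected to reduce to eye(3)
simplify(G_up*G_down)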
GK = getChristoffelSymbols2(G_up, G_down, (alpha1, alpha2, alpha3))
Explanation: Christoffel symbols
End of explanation
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
B
Explanation: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
#u=getUHat3D(alpha1, alpha2, alpha3)
u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
Explanation: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
End of explanation
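As a quick dimensional sanity check (a sketch using only the matrices defined above): E maps the nine covariant-derivative components to the six strain components, so composed with B it gives a 6x12 operator acting on the vector of displacement components and their derivatives.
# Expected shapes: E is 6x9, B is 9x12, so E*B is 6x12
E.shape, B.shape, (E*B).shape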
P=zeros(12,12)
P[0,0]=H1
P[1,0]=(H1).diff(alpha1)
P[1,1]=H1
P[2,0]=(H1).diff(alpha2)
P[2,2]=H1
P[3,0]=(H1).diff(alpha3)
P[3,3]=H1
P[4,4]=H2
P[5,4]=(H2).diff(alpha1)
P[5,5]=H2
P[6,4]=(H2).diff(alpha2)
P[6,6]=H2
P[7,4]=(H2).diff(alpha3)
P[7,7]=H2
P[8,8]=H3
P[9,8]=(H3).diff(alpha1)
P[9,9]=H3
P[10,8]=(H3).diff(alpha2)
P[10,10]=H3
P[11,8]=(H3).diff(alpha3)
P[11,11]=H3
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
ratio=1
if (i==0):
ratio = ratio*H1
elif (i==1):
ratio = ratio*H2
elif (i==2):
ratio = ratio*H3
if (j==0):
ratio = ratio*H1
elif (j==1):
ratio = ratio*H2
elif (j==2):
ratio = ratio*H3
row_index = i*3+j
B_P[row_index, row_index] = 1/ratio
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
StrainL=simplify(E*Grad_U_P)
StrainL
%aimport geom_util
u=getUHatU3Main(alpha1, alpha2, alpha3)
gradup=B_P*B*P*u
E_NLp = E_NonLinear(gradup)*B*P*u
simplify(E_NLp)
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
simplify(StrainNL)
Explanation: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
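A similar dimensional check for the Tymoshenko kinematics (sketch): T takes the six beam unknowns into the twelve displacement components, so the physical strain operator restricted to this theory should come out 6x6.
# Expected shapes: T is 12x6 and D_p_T = StrainL*T is 6x6
T.shape, D_p_T.shape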
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
D_p_L = StrainL*L
simplify(D_p_L)
h = 0.5
exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8)
p02=integrate(exp, (alpha3, -h/2, h/2))
integral = expand(simplify(p02))
integral
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
Explanation: Mass matrix
End of explanation |
13,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook
Step1: Download or use the cached file oecd-canada.json. Caching the file on disk lets you work offline and speeds up exploration of the data.
Step2: Initialize JsonStatCollection from the file and print the list of datasets contained in the collection.
Step3: Select the dataset named oecd. The oecd dataset has three dimensions (concept, area, year), and contains 432 values.
Step4: Shows some detailed info about dimensions
Step5: Accessing value in the dataset
Print the value in oecd dataset for area = IT and year = 2012
Step6: Transforming the dataset into a pandas DataFrame
Step7: Extract a subset of data in a pandas dataframe from the jsonstat dataset.
We can transform the dataset by freezing the dimension area to a specific country (Canada)
Step8: Transforming a dataset into a Python list
Step9: It is possible to transform jsonstat data into a table in a different order | Python Code:
# all import here
from __future__ import print_function
import os
import pandas as ps # using panda to convert jsonstat dataset to pandas dataframe
import jsonstat # import jsonstat.py package
import matplotlib.pyplot as plt # for plotting
%matplotlib inline
Explanation: Notebook: using jsonstat.py python library with jsonstat format version 1.
This Jupyter notebook shows the Python library jsonstat.py in action. JSON-stat is a simple lightweight JSON dissemination format. For more information about the format see the official site. This example shows how to explore the example data file oecd-canada from the json-stat.org site. This file is compliant with version 1 of jsonstat.
End of explanation
url = 'http://json-stat.org/samples/oecd-canada.json'
file_name = "oecd-canada.json"
file_path = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.json-stat.org", file_name))
if os.path.exists(file_path):
print("using already downloaded file {}".format(file_path))
else:
print("download file and storing on disk")
jsonstat.download(url, file_name)
file_path = file_name
Explanation: Download or use the cached file oecd-canada.json. Caching the file on disk lets you work offline and speeds up exploration of the data.
End of explanation
collection = jsonstat.from_file(file_path)
collection
Explanation: Initialize JsonStatCollection from the file and print the list of datasets contained in the collection.
End of explanation
oecd = collection.dataset('oecd')
oecd
Explanation: Select the dataset named oecd. The oecd dataset has three dimensions (concept, area, year), and contains 432 values.
End of explanation
oecd.dimension('concept')
oecd.dimension('area')
oecd.dimension('year')
Explanation: Shows some detailed info about dimensions
End of explanation
oecd.data(area='IT', year='2012')
oecd.value(area='IT', year='2012')
oecd.value(concept='unemployment rate',area='Australia',year='2004') # 5.39663128
oecd.value(concept='UNR',area='AU',year='2004')
Explanation: Accessing value in the dataset
Print the value in oecd dataset for area = IT and year = 2012
End of explanation
df_oecd = oecd.to_data_frame('year', content='id')
df_oecd.head()
df_oecd['area'].describe() # area contains 36 values
Explanation: Trasforming dataset into pandas DataFrame
End of explanation
df_oecd_ca = oecd.to_data_frame('year', content='id', blocked_dims={'area':'CA'})
df_oecd_ca.tail()
df_oecd_ca['area'].describe() # area contains only one value (CA)
df_oecd_ca.plot(grid=True)
Explanation: Extract a subset of data in a pandas dataframe from the jsonstat dataset.
We can trasform dataset freezing the dimension area to a specific country (Canada)
End of explanation
oecd.to_table()[:5]
Explanation: Trasforming a dataset into a python list
End of explanation
order = [i.did for i in oecd.dimensions()]
order = order[::-1] # reverse list
table = oecd.to_table(order=order)
table[:5]
Explanation: It is possible to trasform jsonstat data into table in different order
End of explanation |
13,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of plots and calculations using the tmm package
Imports
Step1: Set up
Step2: Sample 1
Here's a thin non-absorbing layer, on top of a thick absorbing layer, with
air on both sides. Plotting reflected intensity versus wavenumber, at two
different incident angles.
Step3: Sample 2
Here's the transmitted intensity versus wavelength through a single-layer
film which has some complicated wavelength-dependent index of refraction.
(I made these numbers up, but in real life they could be read out of a
graph / table published in the literature.) Air is on both sides of the
film, and the light is normally incident.
Step4: Sample 3
Here is a calculation of the psi and Delta parameters measured in
ellipsometry. This reproduces Fig. 1.14 in Handbook of Ellipsometry by
Tompkins, 2005.
Step5: Sample 4
Here is an example where we plot absorption and Poynting vector
as a function of depth.
Step6: Sample 5
Color calculations | Python Code:
from __future__ import division, print_function, absolute_import
from tmm import (coh_tmm, unpolarized_RT, ellips,
position_resolved, find_in_structure_with_inf)
from numpy import pi, linspace, inf, array
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Examples of plots and calculations using the tmm package
Imports
End of explanation
try:
import colorpy.illuminants
import colorpy.colormodels
from tmm import color
colors_were_imported = True
except ImportError:
# without colorpy, you can't run sample5(), but everything else is fine.
colors_were_imported = False
# "5 * degree" is 5 degrees expressed in radians
# "1.2 / degree" is 1.2 radians expressed in degrees
degree = pi/180
Explanation: Set up
End of explanation
# list of layer thicknesses in nm
d_list = [inf,100,300,inf]
# list of refractive indices
n_list = [1,2.2,3.3+0.3j,1]
# list of wavenumbers to plot in nm^-1
ks=linspace(0.0001,.01,num=400)
# initialize lists of y-values to plot
Rnorm=[]
R45=[]
for k in ks:
# For normal incidence, s and p polarizations are identical.
# I arbitrarily decided to use 's'.
Rnorm.append(coh_tmm('s',n_list, d_list, 0, 1/k)['R'])
R45.append(unpolarized_RT(n_list, d_list, 45*degree, 1/k)['R'])
kcm = ks * 1e7 #ks in cm^-1 rather than nm^-1
plt.figure()
plt.plot(kcm,Rnorm,'blue',kcm,R45,'purple')
plt.xlabel('k (cm$^{-1}$)')
plt.ylabel('Fraction reflected')
plt.title('Reflection of unpolarized light at 0$^\circ$ incidence (blue), '
'45$^\circ$ (purple)');
Explanation: Sample 1
Here's a thin non-absorbing layer, on top of a thick absorbing layer, with
air on both sides. Plotting reflected intensity versus wavenumber, at two
different incident angles.
End of explanation
#index of refraction of my material: wavelength in nm versus index.
material_nk_data = array([[200, 2.1+0.1j],
[300, 2.4+0.3j],
[400, 2.3+0.4j],
[500, 2.2+0.4j],
[750, 2.2+0.5j]])
material_nk_fn = interp1d(material_nk_data[:,0].real,
material_nk_data[:,1], kind='quadratic')
d_list = [inf,300,inf] #in nm
lambda_list = linspace(200,750,400) #in nm
T_list = []
for lambda_vac in lambda_list:
n_list = [1, material_nk_fn(lambda_vac), 1]
T_list.append(coh_tmm('s',n_list,d_list,0,lambda_vac)['T'])
plt.figure()
plt.plot(lambda_list,T_list)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Fraction of power transmitted')
plt.title('Transmission at normal incidence');
Explanation: Sample 2
Here's the transmitted intensity versus wavelength through a single-layer
film which has some complicated wavelength-dependent index of refraction.
(I made these numbers up, but in real life they could be read out of a
graph / table published in the literature.) Air is on both sides of the
film, and the light is normally incident.
End of explanation
n_list=[1,1.46,3.87+0.02j]
ds=linspace(0,1000,num=100) #in nm
psis=[]
Deltas=[]
for d in ds:
e_data=ellips(n_list, [inf,d,inf], 70*degree, 633) #in nm
psis.append(e_data['psi']/degree) # angle in degrees
Deltas.append(e_data['Delta']/degree) # angle in degrees
plt.figure()
plt.plot(ds,psis,ds,Deltas)
plt.xlabel('SiO2 thickness (nm)')
plt.ylabel('Ellipsometric angles (degrees)')
plt.title('Ellipsometric parameters for air/SiO2/Si, varying '
'SiO2 thickness.\n'
'@ 70$^\circ$, 633nm. '
'Should agree with Handbook of Ellipsometry Fig. 1.14');
Explanation: Sample 3
Here is a calculation of the psi and Delta parameters measured in
ellipsometry. This reproduces Fig. 1.14 in Handbook of Ellipsometry by
Tompkins, 2005.
End of explanation
d_list = [inf, 100, 300, inf] #in nm
n_list = [1, 2.2+0.2j, 3.3+0.3j, 1]
th_0=pi/4
lam_vac=400
pol='p'
coh_tmm_data = coh_tmm(pol,n_list,d_list,th_0,lam_vac)
ds = linspace(0,400,num=1000) #position in structure
poyn=[]
absor=[]
for d in ds:
layer, d_in_layer = find_in_structure_with_inf(d_list,d)
data=position_resolved(layer,d_in_layer,coh_tmm_data)
poyn.append(data['poyn'])
absor.append(data['absor'])
# convert data to numpy arrays for easy scaling in the plot
poyn = array(poyn)
absor = array(absor)
plt.figure()
plt.plot(ds,poyn,'blue',ds,200*absor,'purple')
plt.xlabel('depth (nm)')
plt.ylabel('AU')
plt.title('Local absorption (purple), Poynting vector (blue)');
Explanation: Sample 4
Here is an example where we plot absorption and Poynting vector
as a function of depth.
End of explanation
if not colors_were_imported:
print('Colorpy was not detected (or perhaps an error occurred when',
'loading it). You cannot do color calculations, sorry!',
'http://pypi.python.org/pypi/colorpy')
else:
# Crystalline silicon refractive index. Data from Palik via
# http://refractiveindex.info, I haven't checked it, but this is just for
# demonstration purposes anyway.
Si_n_data = [[400, 5.57 + 0.387j],
[450, 4.67 + 0.145j],
[500, 4.30 + 7.28e-2j],
[550, 4.08 + 4.06e-2j],
[600, 3.95 + 2.57e-2j],
[650, 3.85 + 1.64e-2j],
[700, 3.78 + 1.26e-2j]]
Si_n_data = array(Si_n_data)
Si_n_fn = interp1d(Si_n_data[:,0], Si_n_data[:,1], kind='linear')
# SiO2 refractive index (approximate): 1.46 regardless of wavelength
SiO2_n_fn = lambda wavelength : 1.46
# air refractive index
air_n_fn = lambda wavelength : 1
n_fn_list = [air_n_fn, SiO2_n_fn, Si_n_fn]
th_0 = 0
# Print the colors, and show plots, for the special case of 300nm-thick SiO2
d_list = [inf, 300, inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, th_0)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
print('air / 300nm SiO2 / Si --- rgb =', color_dict['rgb'], ', xyY =', color_dict['xyY'])
plt.figure()
color.plot_reflectances(reflectances,
title='air / 300nm SiO2 / Si -- '
'Fraction reflected at each wavelength')
plt.figure()
color.plot_spectrum(spectrum,
title='air / 300nm SiO2 / Si -- '
'Reflected spectrum under D65 illumination')
# Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to
# integers 0-255) versus thickness of SiO2
max_SiO2_thickness = 600
SiO2_thickness_list = linspace(0,max_SiO2_thickness,num=80)
irgb_list = []
for SiO2_d in SiO2_thickness_list:
d_list = [inf, SiO2_d, inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, th_0)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
irgb_list.append(color_dict['irgb'])
# Plot those colors
print('Making color vs SiO2 thickness graph. Compare to (for example)')
print('http://www.htelabs.com/appnotes/sio2_color_chart_thermal_silicon_dioxide.htm')
plt.figure()
plt.plot([0,max_SiO2_thickness],[1,1])
plt.xlim(0,max_SiO2_thickness)
plt.ylim(0,1)
plt.xlabel('SiO2 thickness (nm)')
plt.yticks([])
plt.title('Air / SiO2 / Si color vs SiO2 thickness')
for i in range(len(SiO2_thickness_list)):
# One strip of each color, centered at x=SiO2_thickness_list[i]
if i==0:
x0 = 0
else:
x0 = (SiO2_thickness_list[i] + SiO2_thickness_list[i-1]) / 2
if i == len(SiO2_thickness_list) - 1:
x1 = max_SiO2_thickness
else:
x1 = (SiO2_thickness_list[i] + SiO2_thickness_list[i+1]) / 2
y0 = 0
y1 = 1
poly_x = [x0, x1, x1, x0]
poly_y = [y0, y0, y1, y1]
color_string = colorpy.colormodels.irgb_string_from_irgb(irgb_list[i])
plt.fill(poly_x, poly_y, color_string, edgecolor=color_string)
Explanation: Sample 5
Color calculations: What color is a air / thin SiO2 / Si wafer?
End of explanation |
13,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Our Mission
Spam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'.
In this mission we will be using the Naive Bayes algorithm to create a model that can classify 'https
Step1: Step 1.2
Step2: Step 2.1
Step3: Step 2
Step4: Step 3
Step5: Step 4
Step6: Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.
We should now have a solid understanding of what is happening behind the scenes in the sklearn.feature_extraction.text.CountVectorizer method of scikit-learn.
We will now implement sklearn.feature_extraction.text.CountVectorizer method in the next step.
Step 2.3
Step7: Instructions
Step8: Data preprocessing with CountVectorizer()
In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are
Step9: Instructions
Step10: The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.
Instructions
Step11: Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.
Instructions
Step12: Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created.
One potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large(say if we have a large collection of news articles or email data), there will be certain values that are more common that others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical contructs etc could skew our matrix and affect our analyis.
There are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This will automatically ignore all words(from our input text) that are found in a built in list of English stop words in scikit-learn.
Another way of mitigating this is by using the <a href = 'http
Step13: Step 3.2
Step14: Step 4.1
Step15: Using all of this information we can calculate our posteriors as follows
Step16: Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only a 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes which of course is only an assumption.
What does the term 'Naive' in 'Naive Bayes' mean ?
The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of 0 and 1, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other.
Step 4.2
Step17: Now we can compute the probability of P(J|F,I), that is the probability of Jill Stein saying the words Freedom and Immigration and P(G|F,I), that is the probability of Gary Johnson saying the words Freedom and Immigration.
Step18: And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compard the the 93.3% chance for Gary Johnson of the Libertarian party.
Another more generic example of Naive Bayes' in action is as when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Scramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually, in which case we would get results of images tagged with 'Sacramento' like pictures of city landscapes and images of 'Kings' which could be pictures of crowns or kings from history when what we are looking to get are images of the basketball team. This is a classic case of the search engine treating the words as independant entities and hence being 'naive' in its approach.
Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm looks at each word individually and not as associated entities with any kind of link between them. In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam.
Step 5
Step19: Now that predictions have been made on our test set, we need to check the accuracy of our predictions.
Step 6 | Python Code:
'''
Solution
'''
import pandas as pd
# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
df = pd.read_table('smsspamcollection/SMSSpamCollection',
sep='\t',
header=None,
names=['label', 'sms_message'])
# Output printing out first 5 columns
df.head()
Explanation: Our Mission
Spam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'.
In this mission we will be using the Naive Bayes algorithm to create a model that can classify 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Usually they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the recipient, it is usually pretty straightforward to identify a spam text and our objective here is to train a model to do that for us!
Being able to identify spam messages is a binary classification problem as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model, that it can learn from, to make future predictions.
Step 0: Introduction to the Naive Bayes Theorem
Bayes theorem is one of the earliest probabilistic inference algorithms developed by Reverend Bayes (which he used to try and infer the existence of God no less) and still performs extremely well for certain use cases.
It's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to put a certain threat-factor for each person. So based on the features of an individual, like the age, sex, and other smaller factors like is the person carrying a bag?, does the person look nervous? etc. you can make a judgement call as to if that person is viable threat.
If an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. The Bayes theorem works in the same way as we are computing the probability of an event(a person being a threat) based on the probabilities of certain related events(age, sex, presence of bag or not, nervousness etc. of the person).
One thing to consider is the independence of these features amongst each other. For example if a child looks nervous at the event then the likelihood of that person being a threat is not as much as say if it was a grown man who was nervous. To break this down a bit further, here there are two features we are considering, age AND nervousness. Say we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives as there is a strong chance that minors present at the event will be nervous. Hence by considering the age of a person along with the 'nervousness' feature we would definitely get a more accurate result as to who are potential threats and who aren't.
This is the 'Naive' bit of the theorem where it considers each feature to be independant of each other which may not always be the case and hence that can affect the final judgement.
In short, the Bayes theorem calculates the probability of a certain event happening(in our case, a message being spam) based on the joint probabilistic distributions of certain other events(in our case, a message being classified as spam). We will dive into the workings of the Bayes theorem later in the mission, but first, let us understand the data we are going to work with.
Step 1.1: Understanding our dataset ###
We will be using a 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' dataset from the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes.
Here's a preview of the data:
<img src="images/dqnb.png" height="1242" width="1242">
The columns in the data set are currently not named and as you can see, there are 2 columns.
The first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam.
The second column is the text content of the SMS message that is being classified.
Instructions:
* Import the dataset into a pandas dataframe using the read_table method. Because this is a tab separated dataset we will be using '\t' as the value for the 'sep' argument which specifies this format.
* Also, rename the column names by specifying a list ['label, 'sms_message'] to the 'names' argument of read_table().
* Print the first five values of the dataframe with the new column names.
End of explanation
'''
Solution
'''
df['label'] = df.label.map({'ham':0, 'spam':1})
print(df.shape)
df.head() # returns (rows, columns)
Explanation: Step 1.2: Data Preprocessing
Now that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation.
You might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values).
Our model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers.
Instructions:
* Convert the values in the 'label' colum to numerical values using map method as follows:
{'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1.
* Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using
'shape'.
End of explanation
'''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = []
for i in documents:
lower_case_documents.append(i.lower())
print(lower_case_documents)
Explanation: Step 2.1: Bag of words
What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy.
Here we'd like to introduce the Bag of Words(BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter.
Using a process which we will go through now, we can convert a collection of documents to a matrix, with each document being a row and each word (token) being the column, and the corresponding (row, column) values being the frequency of occurrence of each word or token in that document.
For example:
Lets say we have 4 documents as follows:
['Hello, how are you!',
'Win money, win from home.',
'Call me now',
'Hello, Call you tomorrow?']
Our objective here is to convert this set of text to a frequency distribution matrix, as follows:
<img src="images/countvectorizer.png" height="542" width="542">
Here as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.
Lets break this down and see how we can do this conversion using a small set of documents.
To handle this, we will be using sklearns
<a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer'> sklearn.feature_extraction.text.CountVectorizer </a> method which does the following:
It tokenizes the string(separates the string into individual words) and gives an integer ID to each token.
It counts the occurrence of each of those tokens.
Please Note:
The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the lowercase parameter which is by default set to True.
It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the token_pattern parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters.
The third parameter to take note of is the stop_words parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to english, CountVectorizer will automatically ignore all words(from our input text) that are found in the built in list of english stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.
We will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data.
Step 2.2: Implementing Bag of Words from scratch
Before we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes.
Step 1: Convert all strings to their lower case form.
Let's say we have a document set:
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
Instructions:
* Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.
End of explanation
'''
Solution:
'''
sans_punctuation_documents = []
import string
for i in lower_case_documents:
sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))
print(sans_punctuation_documents)
Explanation: Step 2: Removing all punctuations
Instructions:
Remove all punctuation from the strings in the document set. Save them into a list called
'sans_punctuation_documents'.
End of explanation
'''
Solution:
'''
preprocessed_documents = []
for i in sans_punctuation_documents:
preprocessed_documents.append(i.split(' '))
print(preprocessed_documents)
Explanation: Step 3: Tokenization
Tokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and the end of a word(for example we could use a single space as the delimiter for identifying words in our document set.)
Instructions:
Tokenize the strings stored in 'sans_punctuation_documents' using the split() method. and store the final document set
in a list called 'preprocessed_documents'.
End of explanation
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
for i in preprocessed_documents:
frequency_counts = Counter(i)
frequency_list.append(frequency_counts)
pprint.pprint(frequency_list)
Explanation: Step 4: Count frequencies
Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the Counter method from the Python collections library for this purpose.
Counter counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list.
Instructions:
Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequncy of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.
End of explanation
'''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
Explanation: Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.
We should now have a solid understanding of what is happening behind the scenes in the sklearn.feature_extraction.text.CountVectorizer method of scikit-learn.
We will now implement sklearn.feature_extraction.text.CountVectorizer method in the next step.
Step 2.3: Implementing Bag of Words in scikit-learn
Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step.
End of explanation
'''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
Explanation: Instructions:
Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.
End of explanation
'''
Practice node:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
print(count_vector)
Explanation: Data preprocessing with CountVectorizer()
In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:
lowercase = True
The lowercase parameter has a default value of True which converts all of our text to its lower case form.
token_pattern = (?u)\\b\\w\\w+\\b
The token_pattern parameter has a default regular expression value of (?u)\\b\\w\\w+\\b which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words.
stop_words
The stop_words parameter, if set to english will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value.
You can take a look at all the parameter values of your count_vector object by simply printing out the object as follows:
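For reference, here is a minimal sketch of how those three parameters could be spelled out explicitly when constructing a vectorizer (the first two values are simply the defaults described above; stop_words is shown only for illustration, since we do not set it for the SMS data in this project):
from sklearn.feature_extraction.text import CountVectorizer
example_vector = CountVectorizer(lowercase=True,
                                 token_pattern='(?u)\\b\\w\\w+\\b',
                                 stop_words='english')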
End of explanation
'''
Solution:
'''
count_vector.fit(documents)
count_vector.get_feature_names()
Explanation: Instructions:
Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words
which have been categorized as features using the get_feature_names() method.
End of explanation
'''
Solution
'''
doc_array = count_vector.transform(documents).toarray()
doc_array
Explanation: The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.
Instructions:
Create a matrix with the rows being each of the 4 documents, and the columns being each word.
The corresponding (row, column) value is the frequency of occurrance of that word(in the column) in a particular
document(in the row). You can do this using the transform() method and passing in the document data set as the
argument. The transform() method returns a matrix of numpy integers, you can convert this to an array using
toarray(). Call the array 'doc_array'
End of explanation
'''
Solution
'''
frequency_matrix = pd.DataFrame(doc_array,
columns = count_vector.get_feature_names())
frequency_matrix
Explanation: Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.
Instructions:
Convert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to
the word names(which you computed earlier using get_feature_names(). Call the dataframe 'frequency_matrix'.
End of explanation
'''
Solution
'''
# split into training and testing sets
from sklearn.model_selection import train_test_split  # train_test_split lives in sklearn.model_selection in current scikit-learn
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
Explanation: Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created.
One potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large (say if we have a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical constructs etc. could skew our matrix and affect our analysis.
There are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This will automatically ignore all words (from our input text) that are found in a built-in list of English stop words in scikit-learn.
Another way of mitigating this is by using the <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer'> sklearn.feature_extraction.text.TfidfVectorizer</a> method. This method is out of scope for the context of this lesson.
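That method is out of scope for this lesson, but as a minimal sketch of what swapping it in could look like (illustrative only, reusing the small 'documents' list from earlier):
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vector = TfidfVectorizer()
tfidf_matrix = tfidf_vector.fit_transform(documents)  # rows are documents, columns are tf-idf weighted terms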
Step 3.1: Training and testing sets
Now that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later.
Instructions:
Split the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data
using the following variables:
* X_train is our training data for the 'sms_message' column.
* y_train is our training data for the 'label' column
* X_test is our testing data for the 'sms_message' column.
* y_test is our testing data for the 'label' column
Print out the number of rows we have in each our training and testing data.
End of explanation
'''
[Practice Node]
The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
Explanation: Step 3.2: Applying Bag of Words processing to our dataset.
Now that we have split the data, our next objective is to follow the steps from Step 2: Bag of words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:
Firstly, we have to fit our training data (X_train) into CountVectorizer() and return the matrix.
Secondly, we have to transform our testing data (X_test) to return the matrix.
Note that X_train is our training data for the 'sms_message' column in our dataset and we will be using this to train our model.
X_test is our testing data for the 'sms_message' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test in a later step.
For now, we have provided the code that does the matrix transformations for you!
End of explanation
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg/~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}',format(p_pos))
Explanation: Step 4.1: Bayes Theorem implementation from scratch
Now that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of a prior(the probabilities that we are aware of or that is given to us) and the posterior(the probabilities we are looking to compute using the priors).
Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result.
In the medical field, such probabilities play a very important role as they usually deal with life and death situations.
We assume the following:
P(D) is the probability of a person having Diabetes. Its value is 0.01, or in other words, 1% of the general population has diabetes (Disclaimer: these values are assumptions and are not reflective of any medical study).
P(Pos) is the probability of getting a positive test result.
P(Neg) is the probability of getting a negative test result.
P(Pos|D) is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value 0.9. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.
P(Neg|~D) is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of 0.9 and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate.
The Bayes formula is as follows:
<img src="images/bayes_formula.png" height="242" width="242">
P(A) is the prior probability of A occurring independently. In our example this is P(D). This value is given to us.
P(B) is the prior probability of B occurring independently. In our example this is P(Pos).
P(A|B) is the posterior probability that A occurs given B. In our example this is P(D|Pos). That is, the probability of an individual having diabetes, given that that individual got a positive test result. This is the value that we are looking to calculate.
P(B|A) is the likelihood probability of B occurring, given A. In our example this is P(Pos|D). This value is given to us.
Putting our values into the formula for Bayes theorem we get:
P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
The probability of getting a positive test result P(Pos) can be calculated using the Sensitivity and Specificity as follows:
P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]
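Plugging in the numbers assumed above as a quick sanity check: P(Pos) = (0.01 * 0.9) + (0.99 * 0.1) = 0.009 + 0.099 = 0.108, and therefore P(D|Pos) = 0.009 / 0.108 ≈ 0.083, which is the roughly 8.3% chance computed in the code below.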
End of explanation
'''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is:\
',format(p_diabetes_pos))
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = (P(~D) * P(Pos|~D)) / P(Pos)
Note that P(Pos/~D) can be computed as 1 - P(Neg/~D).
Therefore:
P(Pos/~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
'''
Solution
'''
# P(Pos/~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is:',
      p_no_diabetes_pos)
Explanation: Using all of this information we can calculate our posteriors as follows:
The probability of an individual having diabetes, given that, that individual got a positive test result:
P(D/Pos) = (P(D) * Sensitivity)) / P(Pos)
The probability of an individual not having diabetes, given that, that individual got a positive test result:
P(~D/Pos) = (P(~D) * (1-Specificity)) / P(Pos)
The sum of our posteriors will always equal 1.
End of explanation
'''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probabilities of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text
The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text
The third step is to add both of these probabilities and you will get P(F,I).
'''
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(J|F)
p_j_f = 0.1
# P(J|I)
p_j_i = 0.1
p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(G|F)
p_g_f = 0.7
# P(G|I)
p_g_i = 0.2
p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i))
Explanation: Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This of course assumes that only 1% of the entire population has diabetes, which is itself only an assumption.
What does the term 'Naive' in 'Naive Bayes' mean ?
The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of 0 and 1, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other.
Step 4.2: Naive Bayes implementation from scratch
Now that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than one feature.
Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:
Probability that Jill Stein says 'freedom': 0.1 ---------> P(J|F)
Probability that Jill Stein says 'immigration': 0.1 -----> P(J|I)
Probability that Jill Stein says 'environment': 0.8 -----> P(J|E)
Probability that Gary Johnson says 'freedom': 0.7 -------> P(G|F)
Probability that Gary Johnson says 'immigration': 0.2 ---> P(G|I)
Probability that Gary Johnson says 'environment': 0.1 ---> P(G|E)
And let us also assume that the probability of Jill Stein giving a speech, P(J), is 0.5 and the same for Gary Johnson, P(G) = 0.5.
Given this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes' theorem comes into play, as we are considering two features, 'freedom' and 'immigration'.
Now we are at a place where we can define the formula for the Naive Bayes' theorem:
<img src="images/naivebayes.png" height="342" width="342">
Here, y is the class variable or in our case the name of the candidate and x1 through xn are the feature vectors or in our case the individual words. The theorem makes the assumption that each of the feature vectors or words (xi) are independent of each other.
To break this down, we have to compute the following posterior probabilities:
P(J|F,I): Probability of Jill Stein saying the words Freedom and Immigration.
Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I). Here P(F,I) is the probability of the words 'freedom' and 'immigration' being said in a speech.
P(G|F,I): Probability of Gary Johnson saying the words Freedom and Immigration.
Using the formula, we can compute this as follows: P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I)
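Working the two formulas through with the numbers above gives a quick check on what the code below computes: P(J) * P(J|F) * P(J|I) = 0.5 * 0.1 * 0.1 = 0.005 and P(G) * P(G|F) * P(G|I) = 0.5 * 0.7 * 0.2 = 0.07, so P(F,I) = 0.005 + 0.07 = 0.075, giving P(J|F,I) = 0.005 / 0.075 ≈ 0.067 and P(G|F,I) = 0.07 / 0.075 ≈ 0.933.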
End of explanation
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = p_j_text / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = p_g_text / p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))
Explanation: Now we can compute the probability of P(J|F,I), that is the probability of Jill Stein saying the words Freedom and Immigration and P(G|F,I), that is the probability of Gary Johnson saying the words Freedom and Immigration.
End of explanation
'''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data)
Explanation: And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech, compared to the 93.3% chance for Gary Johnson of the Libertarian party.
Another more generic example of Naive Bayes' in action is when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually, in which case we would get results of images tagged with 'Sacramento' like pictures of city landscapes and images of 'Kings' which could be pictures of crowns or kings from history, when what we are looking to get are images of the basketball team. This is a classic case of the search engine treating the words as independent entities and hence being 'naive' in its approach.
Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm looks at each word individually and not as associated entities with any kind of link between them. In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam.
Step 5: Naive Bayes implementation using scikit-learn
Thankfully, sklearn has several Naive Bayes implementations that we can use and so we do not have to do the math from scratch. We will be using sklearn's sklearn.naive_bayes module to make predictions on our dataset.
Specifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian(normal) distribution.
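For contrast, a minimal sketch of the Gaussian variant on made-up continuous features (illustrative only; our word-count data stays with MultinomialNB, and note that GaussianNB expects dense arrays rather than the sparse document-term matrix):
import numpy as np
from sklearn.naive_bayes import GaussianNB
# toy continuous features (message length, punctuation count); the numbers are invented purely for illustration
X_toy = np.array([[120, 2], [30, 0], [250, 7], [45, 1]])
y_toy = np.array([1, 0, 1, 0])
gaussian_nb = GaussianNB().fit(X_toy, y_toy)
gaussian_nb.predict(np.array([[200, 5]]))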
End of explanation
'''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions)))
print('Recall score: ', format(recall_score(y_test, predictions)))
print('F1 score: ', format(f1_score(y_test, predictions)))
Explanation: Now that predictions have been made on our test set, we need to check the accuracy of our predictions.
Step 6: Evaluating our model
Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do a quick recap of them.
Accuracy measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
Precision tells us what proportion of messages we classified as spam actually were spam.
It is the ratio of true positives (messages classified as spam that actually are spam) to all positives (all messages classified as spam, irrespective of whether that was the correct classification), in other words it is the ratio of
[True Positives/(True Positives + False Positives)]
Recall(sensitivity) tells us what proportion of messages that actually were spam were classified by us as spam.
It is the ratio of true positives (messages classified as spam that actually are spam) to all the messages that actually were spam, in other words it is the ratio of
[True Positives/(True Positives + False Negatives)]
For classification problems that are skewed in their classification distributions like in our case, for example if we had 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score.
We will be using all 4 metrics to make sure our model does well. For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.
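As a concrete illustration of those definitions (a minimal sketch with made-up counts, separate from the dataset above):
# toy confusion-matrix counts purely for illustration
tp, fp, fn, tn = 40, 10, 5, 945
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # TP / (TP + FP)
recall = tp / (tp + fn)      # TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)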
End of explanation |
13,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fractal Dimension and Lacunarity
Much of the code/theory is from: http://connor-johnson.com/2014/03/04/fractal-dimension-and-box-counting/
Step1: Generate Data
Generate a random walk of length n in d dimensions and plot it if it is 1, 2, or 3 dimensional.
Step2: Count Boxes
This function efficiently counts the boxes filled at a given scale by points in the path. Note that this assumes a smooth, continuous path through space without jumps. The box counting will not interpolate between points at sufficiently small scales.
Step3: In addition to counting the boxes, if the scales being computed are too wide, we'll want to remove them from the estimate. Scales that are too large will result in only 1 or 2 boxes while scales that are too small will result in the number of boxes equaling the number of points. To combat this, we can filter the scales to be used according to the standard error of the log of the box counts. This removes the lower and upper asymptotes under many conditions (though it isn't perfect).
Step4: Fit Line
Next, we'll fit a linear trend to the log of the box count and scale parameters. The slope of this line is the fractal dimension while the intercept is the lacunarity.
Step5: Plot
We can plot the log-log plot of the scale and box count to see the fit. The slope will often be slightly underestimated and the intercept slightly overestimated due to the imperfect asymptote filter.
Step6: Final Functions
Now, we can wrap all of the above functionality in some functions for convenience.
Step7: Benchmarking
Finally, we can benchmark and see that the calculation is fairly efficient, with the curve fitting being the most expensive step (thus allowing the function to scale very well to longer paths).
Step8: Scale Bounds Variability
Finally, we need to look at whether the scale bounds vary significantly across the sample. This is so we can decide on unified scale bounds for all participants.
Step9: So we will use the window [1,2...,14] as it is the most conservative window for spacetime, but because all participants have identical windows for space and time only, we'll use those. It is reasonable to expect each of these to have different scale parameters as they are in different units.
Scale Bounds Variability (4-Room)
Finally, we need to look at whether the scale bounds vary significantly across the sample. This is so we can decide on unified scale bounds for all participants. | Python Code:
import scipy.optimize
from pandas import Series, DataFrame
import statsmodels.formula.api as sm
import numpy as np, scipy, scipy.stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
Explanation: Fractal Dimension and Lacunarity
Much of the code/theory is from: http://connor-johnson.com/2014/03/04/fractal-dimension-and-box-counting/
This notebook contains the prototype code for calculating the fractal dimension and lacunarity of a path in any positive integer dimension. It is tested on random walk data.
Import Packages
End of explanation
n = 10000
d = 2
start_pos = [0]*d
steps = np.random.rand(n - 1, len(start_pos)) * 2 - [[1]*len(start_pos)]*(n - 1)
random_walk = [start_pos]
for s in steps:
random_walk.append(random_walk[-1] + s)
if d == 1 or d == 2:
plt.plot(*np.transpose(random_walk))
if d == 3:
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(*np.transpose(random_walk), label='parametric curve')
plt.show()
Explanation: Generate Data
Generate a random walk of length n in d dimensions and plot it if it is 1, 2, or 3 dimensional.
End of explanation
def count_boxes(data, scale):
boxed_path = np.floor(np.divide(data, scale))
unique = np.unique(boxed_path, axis=0)
min_range = np.min(unique, axis=0)
max_range = np.max(unique, axis=0)
possible_boxes_in_range = np.prod(np.abs(np.subtract(min_range, max_range)))
filled_boxes = len(unique)
return filled_boxes
Explanation: Count Boxes
This function efficiently counts the boxes filled at a given scale by points in the path. Note that this assumes a smooth, continuous path through space without jumps. The box counting will not interpolate between points at sufficiently small scales.
End of explanation
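As a quick sanity check (an added example, not part of the original notebook), larger boxes should cover the walk with far fewer filled boxes:
print(count_boxes(random_walk, 0.5), count_boxes(random_walk, 4.0))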
data = random_walk
scale_range = 20
r = np.array([2.0**(scale_range/2)/(2.0**i) for i in range(scale_range,0,-1)]) # Powers of 2
N = [ count_boxes( data, ri) for ri in r ]
rlog = np.log(r)
Nlog = np.log(N)
ste = np.std(Nlog)/np.sqrt(len(data))
indicies = [idx for idx, (a, b) in enumerate(zip(rlog, Nlog)) if (not b <= (min(Nlog) + ste) and not b >= (max(Nlog) - ste))]
r_original = r[:]
N_original = N[:]
N = np.take(N, indicies)
r = np.take(r, indicies)
Explanation: In addition to counting the boxes, if the range of scales being computed is too wide, we'll want to remove some of the scales from the estimate. Scales that are too large will result in only 1 or 2 boxes, while scales that are too small will result in the number of boxes equaling the number of points. To combat this, we can filter the scales to be used according to the standard error of the log of the box counts. This removes the lower and upper asymptotes under many conditions (though it isn't perfect).
End of explanation
def f( x, A, Df ):
return Df * x + A
popt, pcov = scipy.optimize.curve_fit( f, np.log( 1./r ), np.log( N ) )
A, Df = popt
Explanation: Fit Line
Next, we'll fit a linear trend to the log of the box count and scale parameters. The slope of this line is the fractal dimension while the intercept is the lacunarity.
End of explanation
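For reference (an added note): a 2-D random walk has a theoretical box-counting dimension of 2, so the fitted slope Df should land near that value, with A giving the lacunarity estimate.
print('Df = {:.3f}, A = {:.3f}'.format(Df, A))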
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plt.plot( 1./r_original, N_original, 'b.-' )
#ax.plot( 1./r_original, np.exp(A)*1./r_original**Df, 'g', alpha=1.0 )
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_aspect(1)
plt.xlabel('Box Size')
plt.ylabel('Number of Boxes')
plt.grid(which='minor', ls='-', color='0.75')
plt.grid(which='major', ls='-', color='0.25')
plt.title('Box-Counting')
plt.show()
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plt.plot( 1./r, N, 'b.-' )
#ax.plot( 1./r, np.exp(A)*1./r**Df, 'g', alpha=1.0 )
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_aspect(1)
plt.xlabel('Box Size')
plt.ylabel('Number of Boxes')
plt.grid(which='minor', ls='-', color='0.75')
plt.grid(which='major', ls='-', color='0.25')
plt.title('Box-Counting')
plt.show()
Explanation: Plot
We can plot the log-log plot of the scale and box count to see the fit. The slope will often be slightly underestimated and the intercept slightly overestimated due to the imperfect asymptote filter.
End of explanation
def generate_random_walk(n=100000, d=3):
start_pos = [0]*d
steps = np.random.rand(n - 1, len(start_pos)) * 2 - [[1]*len(start_pos)]*(n - 1)
random_walk = [start_pos]
for s in steps:
random_walk.append(random_walk[-1] + s)
return random_walk
def count_boxes(data, scale):
boxed_path = np.floor(np.divide(data, scale))
unique = np.unique(boxed_path, axis=0)
filled_boxes = len(unique)
return filled_boxes
def calculate_fd_and_lacunarity(data):
scale_range = 20
r = np.array([2.0 ** (scale_range / 2) / (2.0 ** i) for i in range(scale_range, 0, -1)]) # Powers of 2 around 0
N = [count_boxes(data, ri) for ri in r]
Nlog = np.log(N)
ste = np.std(Nlog) / np.sqrt(len(data))
indicies = [idx for idx, n in enumerate(Nlog) if (not n <= (min(Nlog) + ste) and not n >= (max(Nlog) - ste))]
N = np.take(N, indicies)
r = np.take(r, indicies)
def linear_function(x, A, Df):
return Df * x + A
popt, pcov = scipy.optimize.curve_fit(linear_function, np.log(1. / r), np.log(N))
lacunarity, fd = popt
return fd, lacunarity
Explanation: Final Functions
Now, we can wrap all of the above functionality in some functions for convenience.
End of explanation
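A minimal end-to-end usage sketch of the convenience functions (added for illustration; the function names come from this notebook, but this particular call is not in the original):
walk = generate_random_walk(n=10000, d=2)
fd, lac = calculate_fd_and_lacunarity(walk)
print('fractal dimension: {:.3f}, lacunarity: {:.3f}'.format(fd, lac))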
import time
import matplotlib.pyplot as plt
def time_function(iterations=100, n=100000, d=3):
data = generate_random_walk(n=n, d=d)
t0 = time.time()
for i in range(0, iterations):
fd, lac = calculate_fd_and_lacunarity(data)
avg_time = (time.time()-t0)/iterations
print('{0} seconds average runtime for n={1}, d={2}, iters={3}.'.format(str(avg_time), n, d, iterations))
return avg_time
ns = [1000, 10000, 100000]
ds = [1, 2, 3]
results = [[time_function(iterations=25, n=n, d=d) for d in ds] for n in ns]
plt.imshow(results)
plt.colorbar()
plt.show()
Explanation: Benchmarking
Finally, we can benchmark and see that the calculation is fairly efficient, with the curve fitting being the most expensive step (thus allowing the function to scale very well to longer paths).
End of explanation
import numpy as np
import scipy.optimize
def calculate_fd_and_lacunarity(data, indicies=None):
scale_range = 20
r = np.array([2.0 ** (scale_range / 2) / (2.0 ** i) for i in range(scale_range, 0, -1)]) # Powers of 2 around 0
N = [count_boxes(data, ri) for ri in r]
Nlog = np.log(N)
ste = np.std(Nlog) / np.sqrt(len(data))
if indicies is None:
indicies = [idx for idx, n in enumerate(Nlog) if (not n <= (min(Nlog) + ste) and not n >= (max(Nlog) - ste))]
N = np.take(N, indicies)
r = np.take(r, indicies)
def linear_function(x, A, Df):
return Df * x + A
popt, pcov = scipy.optimize.curve_fit(linear_function, np.log(1. / r), np.log(N))
lacunarity, fd = popt
return fd, lacunarity, indicies
from cogrecon.core.data_flexing.time_travel_task.time_travel_task_binary_reader import find_data_files_in_directory, read_binary_file
search_directory=r'C:\Users\Kevin\Documents\GitHub\msl-iposition-pipeline\examples\saved_data\Paper Data (cleaned)'
file_regex="\d\d\d_\d_1_\d_\d\d\d\d-\d\d-\d\d_\d\d-\d\d-\d\d.dat"
output_path='time_travel_task_navigation_summary.csv'
last_pilot_id=20
temporal_boundary_regions = [[[-100, 15]], [[15, 30]], [[30, 45]], [[45, 100]]]
files = find_data_files_in_directory(search_directory, file_regex=file_regex)
len(files)
from tqdm import tqdm
# Precalculate FD/Lacunarity Thresholds
index0 = []
index1 = []
index2 = []
for path in tqdm(files):
iterations = read_binary_file(path)
timeline = [[i['time_val']] for i in iterations]
spaceline = [[i['x'], i['z']] for i in iterations]
spacetimeline = [[i['x'], i['z'], i['time_val']] for i in iterations]
fd_t, lac_t, idxs0 = calculate_fd_and_lacunarity(timeline)
fd_s, lac_s, idxs1 = calculate_fd_and_lacunarity(spaceline)
fd_st, lac_st, idxs2 = calculate_fd_and_lacunarity(spacetimeline)
index0.append(idxs0)
index1.append(idxs1)
index2.append(idxs2)
unique0 = [list(x) for x in list(set(frozenset(item) for item in index0))]
unique1 = [list(x) for x in list(set(frozenset(item) for item in index1))]
unique2 = [list(x) for x in list(set(frozenset(item) for item in index2))]
print(unique0)
print(unique1)
print(unique2)
Explanation: Scale Bounds Variability
Finally, we need to look at whether the scale bounds vary significantly across the sample. This is so we can decide on unified scale bounds for all participants.
End of explanation
import numpy as np
import scipy.optimize
def calculate_fd_and_lacunarity(data, indicies=None):
scale_range = 20
r = np.array([2.0 ** (scale_range / 2) / (2.0 ** i) for i in range(scale_range, 0, -1)]) # Powers of 2 around 0
N = [count_boxes(data, ri) for ri in r]
Nlog = np.log(N)
ste = np.std(Nlog) / np.sqrt(len(data))
if indicies is None:
indicies = [idx for idx, n in enumerate(Nlog) if (not n <= (min(Nlog) + ste) and not n >= (max(Nlog) - ste))]
N = np.take(N, indicies)
r = np.take(r, indicies)
def linear_function(x, A, Df):
return Df * x + A
popt, pcov = scipy.optimize.curve_fit(linear_function, np.log(1. / r), np.log(N))
lacunarity, fd = popt
return fd, lacunarity, indicies
import pandas as pd
import os
output_directory = '2018-04-25_16-55-14'
data = pd.read_csv(os.path.join('.', output_directory, 'study_path.csv'))
grp = data.groupby(['subject_id', 'trial_number'])
import tqdm
import numpy as np
# Precalculate FD/Lacunarity Thresholds
index = []
for name, group in tqdm.tqdm(grp):
spaceline = np.array(group[['x', 'z']])
fd_s, lac_s, idxs = calculate_fd_and_lacunarity(spaceline)
index.append(idxs)
unique = [list(x) for x in list(set(frozenset(item) for item in index))]
print(unique)
Explanation: So we will use the window [1,2...,14] as it is the most conservative window for spacetime, but because all participants have identical windows for space and time only, we'll use those. It is reasonable to expect each of these to have different scale parameters as they are in different units.
Scale Bounds Variability (4-Room)
Finally, we need to look at whether the scale bounds vary significantly across the sample. This is so we can decide on unified scale bounds for all participants.
End of explanation |
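A hypothetical follow-up (not part of the original analysis) showing how the chosen common window, indices 1 through 14 as discussed above, can be passed back in through the indicies argument for the last path processed in the loop:
fixed_window = list(range(1, 15))
fd_fixed, lac_fixed, _ = calculate_fd_and_lacunarity(spaceline, indicies=fixed_window)
print(fd_fixed, lac_fixed)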
13,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the dmdd tutorial!
A python package that enables simple simulation and Bayesian posterior analysis
of nuclear-recoil data from dark matter direct detection experiments
for a wide variety of theories of dark matter-nucleon interactions.
dmdd has the following features
Step1: Let's calculate, separately, a differential rate for a standard spin-independent interaction (with $f_n/f_p=1$), and for an electric-dipole interaction with a massive mediator, assuming a xenon target, and a WIMP mass of 50 GeV, for standard values of the velocity parameters and local DM density
Step2: Get the total rate for the same scenario, in the energy window between 5 and 40 keV (assuming unit efficiency)
Step3: You can also plot the corresponding recoil-energy spectra; e.g. for 1000 kg-year exposure
Step4: NOTES
Step5: NOTE
Step6: IV. Simulation Object
This object handles a single simulated data set (nuclear recoil energy spectrum). It is generally initialized and used by the MultinestRun object, but can be used stand-alone.
Simulation data will only be generated if a simulation with the right parameters and name does not already exist, or if force_sim=True is provided upon Simulation initialization; if the data exist, it will just be read in. (Data is a list of nuclear recoil energies of "observed" events.) Initializing Simulation with given parameters for the first time will produce 3 files, located by default at $DMDD_PATH/simulations (or ./simulations if $DMDD_PATH not defined)
Step7: V. MultinestRun Object
This is a "master" class of dmdd that makes use of all other objects. It takes in experimental parameters, particle-physics parameters, and astrophysical parameters, and then generates a simulation (if it doesn't already exist), and prepares to perform MultiNest analysis of simulated data. It has methods to do a MultiNest run (.fit() method) and to visualize outputs (.visualize() method). Model used for simulation does not have to be the same as the Model used for fitting. Simulated spectra from multiple experiments will be analyzed jointly if MultiNest run is initialized with a list of appropriate Experiment objects.
The likelihood function is an argument of the fitting model (Model object); for UV models it is set to dmdd.rate_UV.loglikelihood, and for models that would correspond to rate_genNR, dmdd.rate_genNR.loglikelihood. Both likelihood functions include the Poisson factor, and, if energy_resolution=True of the Experiment at hand, the factors that evaluate the probability of each individual event, given the fitting model.
Example usage of MultinestRun is given below
Step8: The .visualize() method produces 2 types of plots (shown above) | Python Code:
I. Nuclear-recoil rates
-----
______
`dmdd` has three modules that let you calculate differential rate $\frac{dR}{dE_R}$ and total rate $R(E_R)$ of nuclear-recoil events:
I) `rate_UV`: rates for a variety of UV-complete theories (from Gresham & Zurek, 2014)
II) `rate_genNR`: rates for all non-relativistic scattering operators, including interference terms (from Fitzpatrick et al., 2013)
III) `rate_NR`: rates for individual nuclear responses compatible with the EFT, not automatically including interference terms (from Fitzpatrick et al., 2013)
Appropriate nuclear response functions (accompanied by the right momentum and energy dependencies of the rate) are automatically folded in, and for a specified target element natural abundance of its isotopes (with their specific response functions) are taken into account.
Explanation: Welcome to the dmdd tutorial!
A python package that enables simple simulation and Bayesian posterior analysis
of nuclear-recoil data from dark matter direct detection experiments
for a wide variety of theories of dark matter-nucleon interactions.
dmdd has the following features:
Calculation of the nuclear-recoil rates for various non-standard momentum-, velocity-, and spin-dependent scattering models.
Calculation of the appropriate nuclear response functions triggered by the chosen scattering model.
Inclusion of natural abundances of isotopes for a variety of target elements: Xe, Ge, Ar, F, I, Na.
Simple simulation of data (where data is a list of nuclear recoil energies, including Poisson noise) under different models.
Bayesian analysis (parameter estimation and model selection) of data using MultiNest.
All rate and response functions directly implement the calculations of Anand et al. (2013) and Fitzpatrick et al. (2013) (for non-relativistic operators, in rate_genNR and rate_NR), and Gresham & Zurek (2014) (for UV-motivated scattering models in rate_UV). Simulations follow the prescription from Gluscevic & Peter (2014), and Gluscevic et al. (2015).
This document demonstrates basic usage and describes inputs and outputs so you can quickly get started with dmdd. For more details, refer to the online documentation, or raise an issue on GitHub with questions or feedback.
End of explanation
%matplotlib inline
import numpy as np
import dmdd
# array of nuclear-recoil energies at which to evaluate the rate:
energies = np.linspace(1,100,5)
SI_rate = dmdd.rate_UV.dRdQ(energies, mass=50., sigma_si=70., fnfp_si=1.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon')
ED_rate = dmdd.rate_UV.dRdQ(energies, mass=50., sigma_elecdip=70.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon')
print SI_rate
print ED_rate
Explanation: Let's calculate, separately, a differential rate for a standard spin-independent interaction (with $f_n/f_p=1$), and for an electric-dipole interaction with a massive mediator, assuming a xenon target, and a WIMP mass of 50 GeV, for standard values of the velocity parameters and local DM density:
End of explanation
Rtot_SI = dmdd.rate_UV.R(dmdd.eff.efficiency_unit, mass=50.,
sigma_si=70., fnfp_si=1.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon', Qmin=5, Qmax=50)
Rtot_ED = dmdd.rate_UV.R(dmdd.eff.efficiency_unit, mass=50.,
sigma_elecdip=70.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon', Qmin=5, Qmax=50)
print 'Total spin-independent rate: {:.1e} events/sec/kg'.format(Rtot_SI)
print 'Total electric-dipole rate: {:.1e} events/sec/kg'.format(Rtot_ED)
Explanation: Get the total rate for the same scenario, in the energy window between 5 and 40 keV (assuming unit efficiency):
End of explanation
dmdd.dp.plot_spectrum('xenon',Qmin=5,Qmax=50,exposure=1000,
sigma_name='sigma_si',sigma_val=70,
fnfp_name='fnfp_si', fnfp_val=1,
mass=50, title='theory: SI',color='BlueViolet')
dmdd.dp.plot_spectrum('xenon',Qmin=5,Qmax=50,exposure=1000,
sigma_name='sigma_elecdip',sigma_val=70,
mass=50, title='theory: ED',color='DarkBlue')
Explanation: You can also plot the corresponding recoil-energy spectra; e.g. for 1000 kg-year exposure:
End of explanation
# initialize an instance of the Experiment object with a germanium target, with energy resolution,
# a lower energy threshold of 1 keV, an upper threshold of 100 keV, and 200 kg-year exposure:
ge = dmdd.Experiment('Ge','germanium',1,100,200,dmdd.eff.efficiency_unit, energy_resolution=True)
# and a similar fluorine target with no energy resolution:
flu = dmdd.Experiment('F','fluorine',1,100,200,dmdd.eff.efficiency_unit, energy_resolution=False)
print 'experiment: {} ({:.0f} kg-yr)'.format(ge.name, ge.exposure)
minimum_mx = ge.find_min_mass(v_esc=540., v_lag=220., mx_guess=1.)
# this is the minimum detectable WIMP mass,
# given the recoil-energy threshold, and escape velocity
# from the Galaxy in the lab frame = v_esc + v_lag.
print 'minimum detectable WIMP mass: {:.1f} GeV'.format(minimum_mx)
# this is how to get the projected reach for such experiment for mx=50GeV,
# for sigma_p under a given theory, in this case, the standard spin-dependent scattering,
# assuming the experiment has 4 expected background events:
sigma = ge.sigma_limit(sigma_name='sigma_sd', fnfp_name='fnfp_sd', fnfp_val=-1.1,
mass=50, Nbackground=4, sigma_guess = 1e10, mx_guess=1.,
v_esc=540., v_lag=220., v_rms=220., rho_x=0.3)
sigma_normalized = sigma * dmdd.PAR_NORMS['sigma_sd']
print 'projected exclusion for SD scattering @ 50 GeV: sigma_p = {:.2e} cm^2'.format(sigma_normalized)
Explanation: NOTES:
Values of the cross-sections passed to the rate functions are normalized with normalizations stored in PAR_NORMS dictionary in globals module; the values used in all calculations are always of this form: sigma_si * dmdd.PAR_NORMS['sigma_si']
v_rms variable is equal to 3/2 * (Maxwellian rms velocity of ~155km/sec) ~ 220 km/sec
v_esc is in the Galactic frame
II. Experiment Object
This object packages all the information that defines a single "experiment". For statistical analysis, a list of these objects is passed to initialize an instance of a MultinestRun object, or to initialize an instance of a Simulation object. It can also be used on its own to explore the capabilities of a theoretical experiment. Experiments set up here can either have perfect energy resolution in a given analysis window, or no resolution (controlled by the parameter energy_resolution, default being True).
This is how you can define and use an instance of Experiment:
End of explanation
# more general way that uses a general Model class:
# set all sigma_p to zero by default:
default_rate_parameters = dict(mass=50., sigma_si=0., sigma_sd=0., sigma_anapole=0., sigma_magdip=0., sigma_elecdip=0.,
sigma_LS=0., sigma_f1=0., sigma_f2=0., sigma_f3=0.,
sigma_si_massless=0., sigma_sd_massless=0.,
sigma_anapole_massless=0., sigma_magdip_massless=0., sigma_elecdip_massless=0.,
sigma_LS_massless=0., sigma_f1_massless=0., sigma_f2_massless=0., sigma_f3_massless=0.,
fnfp_si=1., fnfp_sd=1.,
fnfp_anapole=1., fnfp_magdip=1., fnfp_elecdip=1.,
fnfp_LS=1., fnfp_f1=1., fnfp_f2=1., fnfp_f3=1.,
fnfp_si_massless=1., fnfp_sd_massless=1.,
fnfp_anapole_massless=1., fnfp_magdip_massless=1., fnfp_elecdip_massless=1.,
fnfp_LS_massless=1., fnfp_f1_massless=1., fnfp_f2_massless=1., fnfp_f3_massless=1.,
v_lag=220., v_rms=220., v_esc=544., rho_x=0.3)
elecdip = dmdd.Model('Elec.dip.light', ['mass','sigma_elecdip'],
dmdd.rate_UV.dRdQ, dmdd.rate_UV.loglikelihood,
default_rate_parameters)
# shortcut for scattering models corresponding to rates coded in rate_UV:
elecdip = dmdd.UV_Model('Elec.dip.', ['mass','sigma_elecdip'])
print 'model: {}, parameters: {}'.format(elecdip.name, elecdip.param_names)
# if you wish to set some of the parameters to be fixed
# when this model is used to fit data, you can define a dict fixed_params, e.g.:
millicharge = dmdd.UV_Model('Millicharge', ['mass', 'sigma_si_massless'],
fixed_params={'fnfp_si_massless': 0})
print 'model: {}, parameters: {}; fixed: {}'.format(millicharge.name,
millicharge.param_names,
millicharge.fixed_params)
Explanation: NOTE: initialization of this class requires passing of the efficiency function. Flat unit efficiency is available in dmdd.dmdd_efficiency module. You may want to include in there any new specific efficiency function you'd like to use.
III. Model Object
This object facilitates handling of a "hypothesis" that describes the scattering interaction at hand (to be used either to simulate recoil spectra, or to fit to the simulated recoil events). You have an option to set any parameter to have a fixed value, which will not be varied if the model is used to fit data.
Here's how you can use a general Model object, or its sub-class UV_Model:
End of explanation
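Following the NOTE above about supplying your own efficiency function, here is a purely hypothetical sketch; it assumes, as with dmdd.eff.efficiency_unit, that an efficiency function takes an array of recoil energies in keV and returns the efficiency at each energy:
def efficiency_half(Q):
    # hypothetical flat 50% efficiency across the analysis window (assumption, not a dmdd-provided function)
    return 0.5 * np.ones_like(Q)
xe_half = dmdd.Experiment('XeHalf', 'xenon', 5, 40, 1000, efficiency_half)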
# initialize an Experiment with an iodine target, to be passed to Simulation:
iod = dmdd.Experiment('I','iodine',5,80,1000,dmdd.eff.efficiency_unit, energy_resolution=True)
# initialize a simulation with iod, for elecdip model defined above,
# for a 50 GeV WIMP, with sigma_elecdip = 70*PAR_NORMS['sigma_elecdip'] = 7e-43 cm^2:
test = dmdd.Simulation('simdemo', iod, elecdip, {'mass':50.,'sigma_elecdip':70.})
# you can easily access various attributes of this class, e.g.:
print 'simulation \'{}\' was done for experiment \'{}\', \
it had N={:.0f} events (<N>={:.0f} events), \n and \
the parameters passed to dRdQ were:\n\n {}'.format(test.name,
test.experiment.name,
test.N,
test.model_N,
test.dRdQ_params)
print '\n List of energies generated in {} is: \n\n'.format(test.name),test.Q
Explanation: IV. Simulation Object
This object handles a single simulated data set (nuclear recoil energy spectrum). It is generally initialized and used by the MultinestRun object, but can be used stand-alone.
Simulation data will only be generated if a simulation with the right parameters and name does not already exist, or if force_sim=True is provided upon Simulation initialization; if the data exist, it will just be read in. (Data is a list of nuclear recoil energies of "observed" events.) Initializing Simulation with given parameters for the first time will produce 3 files, located by default at $DMDD_PATH/simulations (or ./simulations if $DMDD_PATH not defined):
.dat file with a list of nuclear-recoil energies (keV), drawn from a Poisson distribution with $<N>$ = number of events expected at a given energy for a given underlying scattering model and given experimental parameters.
.pkl file with all relevant initialization parameters for record
.pdf plot of the simulated recoil-energy spectrum with simulated data points (with Poisson error bars) on top of the underlying model
Below is an example of Simulation.
End of explanation
# simulate and analyze data from germanium and xenon targets:
xe = dmdd.Experiment('Xe', 'xenon', 5, 40, 1000, dmdd.eff.efficiency_unit)
ge = dmdd.Experiment('Ge', 'germanium', 0.4, 100, 100, dmdd.eff.efficiency_unit)
# simulate data for anapole interaction:
simmodel = dmdd.UV_Model('Anapole', ['mass','sigma_anapole'])
# fit data with standard SI interaction
fitmodel = dmdd.UV_Model('SI', ['mass', 'sigma_si'], fixed_params={'fnfp_si': 1.})
# initialize run:
run = dmdd.MultinestRun('simdemo1', [xe,ge], simmodel,{'mass':50.,'sigma_anapole':45.},
fitmodel, prior_ranges={'mass':(1,1000), 'sigma_si':(0.001,10000)})
# now run MultiNest and visualize data:
run.fit()
run.visualize()
Explanation: V. MultinestRun Object
This is a "master" class of dmdd that makes use of all other objects. It takes in experimental parameters, particle-physics parameters, and astrophysical parameters, and then generates a simulation (if it doesn't already exist), and prepares to perform MultiNest analysis of simulated data. It has methods to do a MultiNest run (.fit() method) and to visualize outputs (.visualize() method). Model used for simulation does not have to be the same as the Model used for fitting. Simulated spectra from multiple experiments will be analyzed jointly if MultiNest run is initialized with a list of appropriate Experiment objects.
The likelihood function is an argument of the fitting model (Model object); for UV models it is set to dmdd.rate_UV.loglikelihood, and for models that would correspond to rate_genNR, dmdd.rate_genNR.loglikelihood. Both likelihood functions include the Poisson factor, and, if energy_resolution=True of the Experiment at hand, the factors that evaluate the probability of each individual event, given the fitting model.
Example usage of MultinestRun is given below:
End of explanation
print run.chainspath
Explanation: The .visualize() method produces 2 types of plots (shown above):
recoil spectra for each experiment used in the analysis, where data points, theory model, and best-fit model are all shown.
2d (marginalized) posteriors for every pair or fitting parameters, showing typically mass vs. cross-section $\sigma_p$.
Simulations are saved in $DMDD_PATH/simulations directory directly, and MultiNest chains and plots produced by the .visualize() method are saved in the appropriate chains file, in this case the following directory:
End of explanation |
13,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate test data for SASS implementation of the Gibson-Lanni PSF.
This Python algorithm has been verified against the original MATLAB code from the paper Li, J., Xue, F., & Blu, T. (2017). Fast and accurate three-dimensional point spread function computation for fluorescence microscopy. JOSA A, 34(6), 1029-1034.
Step1: Simulation setup
Define the simulation parameters
Step2: Create the coordinate systems
Step3: Step 1
Step4: Step 2
Step5: Now compute the point-spread function via
\begin{equation}
PSF \left( r, z; z_p, \mathbf{p} \right) = \left| \mathbf{R} \left( r; \mathbf{p} \right) \mathbf{c} \left( z \right) \right|^2
\end{equation}
Step6: Step 3
Step7: Compute the cumulative distribution
Step8: Interpolate the cumulative distribution
Here, we also create the Python equivalent to the getPixelSignature function.
Note that the ground truth is not symmetric about the center pixel because of the finite sampling of the CDF; it becomes more symmetric the smaller resPSF is and the larger sizeX/Y are. | Python Code:
import sys
%pylab inline
import scipy.special
from scipy.interpolate import interp1d
from scipy.interpolate import RectBivariateSpline
print('Python {}\n'.format(sys.version))
print('NumPy\t\t{}'.format(np.__version__))
print('matplotlib\t{}'.format(matplotlib.__version__))
print('SciPy\t\t{}'.format(scipy.__version__))
Explanation: Generate test data for SASS implementation of the Gibson-Lanni PSF.
This Python algorithm has been verified against the original MATLAB code from the paper Li, J., Xue, F., & Blu, T. (2017). Fast and accurate three-dimensional point spread function computation for fluorescence microscopy. JOSA A, 34(6), 1029-1034.
End of explanation
# Image properties
# Size of the PSF array, pixels
size_x = 256
size_y = 256
size_z = 1
# Precision control
num_basis = 100 # Number of rescaled Bessels that approximate the phase function
num_samples = 1000 # Number of pupil samples along radial direction
oversampling = 2 # Defines the upsampling ratio on the image space grid for computations
# Microscope parameters
NA = 1.4
wavelength = 0.610 # microns
M = 100 # magnification
ns = 1.33 # specimen refractive index (RI)
ng0 = 1.5 # coverslip RI design value
ng = 1.5 # coverslip RI experimental value
ni0 = 1.5 # immersion medium RI design value
ni = 1.5 # immersion medium RI experimental value
ti0 = 150 # microns, working distance (immersion medium thickness) design value
tg0 = 170 # microns, coverslip thickness design value
tg = 170 # microns, coverslip thickness experimental value
resPSF = 0.02 # microns (resPSF in the Java code)
resLateral = 0.1 # microns (resLateral in the Java code)
res_axial = 0.25 # microns
pZ = 2 # microns, particle distance from coverslip
z = [-2] # microns, stage displacement away from best focus
# Scaling factors for the Fourier-Bessel series expansion
min_wavelength = 0.436 # microns
scaling_factor = NA * (3 * np.arange(1, num_basis + 1) - 2) * min_wavelength / wavelength
Explanation: Simulation setup
Define the simulation parameters
End of explanation
# Place the origin at the center of the final PSF array
x0 = (size_x - 1) / 2
y0 = (size_y - 1) / 2
# Find the maximum possible radius coordinate of the PSF array by finding the distance
# from the center of the array to a corner
max_radius = round(sqrt((size_x - x0) * (size_x - x0) + (size_y - y0) * (size_y - y0))) + 1;
# Radial coordinates, image space
r = resPSF * np.arange(0, oversampling * max_radius) / oversampling
# Radial coordinates, pupil space
a = min([NA, ns, ni, ni0, ng, ng0]) / NA
rho = np.linspace(0, a, num_samples)
# Convert z to array
z = np.array(z)
Explanation: Create the coordinate systems
End of explanation
# Define the wavefront aberration
OPDs = pZ * np.sqrt(ns * ns - NA * NA * rho * rho) # OPD in the sample
OPDi = (z.reshape(-1,1) + ti0) * np.sqrt(ni * ni - NA * NA * rho * rho) - ti0 * np.sqrt(ni0 * ni0 - NA * NA * rho * rho) # OPD in the immersion medium
OPDg = tg * np.sqrt(ng * ng - NA * NA * rho * rho) - tg0 * np.sqrt(ng0 * ng0 - NA * NA * rho * rho) # OPD in the coverslip
W = 2 * np.pi / wavelength * (OPDs + OPDi + OPDg)
# Sample the phase
# Shape is (number of z samples by number of rho samples)
phase = np.cos(W) + 1j * np.sin(W)
# Define the basis of Bessel functions
# Shape is (number of basis functions by number of rho samples)
J = scipy.special.jv(0, scaling_factor.reshape(-1, 1) * rho)
# Compute the approximation to the sampled pupil phase by finding the least squares
# solution to the complex coefficients of the Fourier-Bessel expansion.
# Shape of C is (number of basis functions by number of z samples).
# Note the matrix transposes to get the dimensions correct.
C, residuals, _, _ = np.linalg.lstsq(J.T, phase.T)
Explanation: Step 1: Approximate the pupil phase with a Fourier-Bessel series
z.reshape(-1,1) flips z from a row array to a column array so that it may be broadcast across rho.
The coefficients C are found by a least-squares solution to the equation
\begin{equation}
\mathbf{\phi} \left( \rho , z \right)= \mathbf{J} \left( \rho \right) \mathbf{c} \left( z \right)
\end{equation}
\( \mathbf{c} \) has dimensions num_basis \( \times \) len(z). The J array has dimensions num_basis \( \times \) len(rho) and the phase array has dimensions len(z) \( \times \) len(rho). The J and phase arrays are therefore transposed to get the dimensions right in the call to np.linalg.lstsq.
End of explanation
b = 2 * np. pi * r.reshape(-1, 1) * NA / wavelength
# Convenience functions for J0 and J1 Bessel functions
J0 = lambda x: scipy.special.jv(0, x)
J1 = lambda x: scipy.special.jv(1, x)
# See equation 5 in Li, Xue, and Blu
denom = scaling_factor * scaling_factor - b * b
R = (scaling_factor * J1(scaling_factor * a) * J0(b * a) * a - b * J0(scaling_factor * a) * J1(b * a) * a)
R /= denom
Explanation: Step 2: Compute the PSF
Here, we use the Fourier-Bessel series expansion of the phase function and a Bessel integral identity to compute the approximate PSF. Each coefficient \( c_{m} \left( z \right) \) needs to be multiplied by
\begin{equation}
R \left(r; \mathbf{p} \right) = \frac{\sigma_m J_1 \left( \sigma_m a \right) J_0 \left( \beta a \right)a - \beta J_0 \left( \sigma_m a \right) J_1 \left( \beta a \right)a }{\sigma_m^2 - \beta^2}
\end{equation}
and the resulting products summed over the number of basis functions. \( \mathbf{p} \) is the parameter vector for the Gibson-Lanni model, \( \sigma_m \) is the scaling factor for the argument to the \( m'th \) Bessel basis function, and \( \beta = kr\text{NA} \).
b is defined such that R has dimensions of len(r) \( \times \) len(rho).
End of explanation
# The transpose places the axial direction along the first dimension of the array, i.e. rows
# This is only for convenience.
PSF_rz = (np.abs(R.dot(C))**2).T
Explanation: Now compute the point-spread function via
\begin{equation}
PSF \left( r, z; z_p, \mathbf{p} \right) = \left| \mathbf{R} \left( r; \mathbf{p} \right) \mathbf{c} \left( z \right) \right|^2
\end{equation}
End of explanation
# Create the fleshed-out xy grid of radial distances from the center
xy = np.mgrid[0:size_y, 0:size_x]
r_pixel = np.sqrt((xy[1] - x0) * (xy[1] - x0) + (xy[0] - y0) * (xy[0] - y0)) * resPSF
PSF = np.zeros((size_y, size_x, size_z))
for z_index in range(PSF.shape[2]):
# Interpolate the radial PSF function
PSF_interp = interp1d(r, PSF_rz[z_index, :])
# Evaluate the PSF at each value of r_pixel
PSF[:,:, z_index] = PSF_interp(r_pixel.ravel()).reshape(size_y, size_x)
# Normalize to the area
norm_const = np.sum(np.sum(PSF[:,:,0])) * resPSF**2
PSF /= norm_const
plt.imshow(PSF[:,:,0])
plt.show()
Explanation: Step 3: Resample the PSF onto a rotationally-symmetric Cartesian grid
Here we generate a two dimensional grid where the value at each grid point is the distance of the point from the center of the grid. These values are supplied to an interpolation function computed from PSF_rz to produce a rotationally-symmetric 2D PSF at each z-position.
End of explanation
cdf = np.cumsum(PSF[:,:,0], axis=1) * resPSF
cdf = np.cumsum(cdf, axis=0) * resPSF
print('Min: {:.4f}'.format(np.min(cdf)))
print('Max: {:.4f}'.format(np.max(cdf)))
plt.imshow(cdf)
plt.show()
Explanation: Compute the cumulative distribution
End of explanation
x = (resPSF * (xy[1] - x0))[0]
y = (resPSF * (xy[0] - y0))[:,0]
# Compute the interpolated CDF
f = RectBivariateSpline(x, y, cdf)
def generatePixelSignature(pX, pY, eX, eY, eZ):
value = f((pX - eX + 0.5) * resLateral, (pY - eY + 0.5) * resLateral) + \
f((pX - eX - 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX + 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX - 0.5) * resLateral, (pY - eY + 0.5) * resLateral)
return value
generatePixelSignature(0, 0, 0, -1, 0)
generatePixelSignature(1, 1, 1, 1, 0)
generatePixelSignature(2, 1, 1, 1, 0)
generatePixelSignature(0, 1, 1, 1, 0)
generatePixelSignature(1, 2, 1, 1, 0)
generatePixelSignature(1, 0, 1, 1, 0)
generatePixelSignature(-1, 1, 1, 1, 0)
generatePixelSignature(3, 1, 1, 1, 0)
Explanation: Interpolate the cumulative distribution
Here, we also create the Python equivalent to the getPixelSignature function.
Note that the ground truth is not symmetric about the center pixel because of the finite sampling of the CDF; it becomes more symmetric the smaller resPSF is and the larger sizeX/Y are.
End of explanation |
13,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Observer Pattern
1 Code
In the facade pattern, we mentioned the fire alarm system. Back then, the focus was on reducing code duplication through encapsulation. Today, we will implement the fire alarm system again, this time from the perspective of the business workflow.
Step1: Above is the structure of the three sensor classes from the facade pattern. Analyzing the business logic carefully, the alarm, the sprinkler, and the emergency dialer all react by "observing" the state of the smoke sensor, so all three of them are observers, while the smoke sensor is the observed subject. Based on this analysis, we extract their common behavior into a generalized "Observer" class and then construct the observed subject.
The observers are as follows:
Step2: The observer defines an update interface. If the observed subject has many states, or each concrete observer has many methods, richer control can be achieved by passing parameters through update.
Next, we construct the observed subject.
Step3: The observed subject first adds the observers to its observer list; when something happens, it notifies every observer via notifyAll.
业务代码如下: | Python Code:
class AlarmSensor:
def run(self):
print ("Alarm Ring...")
class WaterSprinker:
def run(self):
print ("Spray Water...")
class EmergencyDialer:
def run(self):
print ("Dial 119...")
Explanation: Observer Pattern
1 Code
In the facade pattern, we mentioned the fire alarm system. Back then, the focus was on reducing code duplication through encapsulation. Today, we will implement the fire alarm system again, this time from the perspective of the business workflow.
End of explanation
class Observer:
def update(self):
pass
class AlarmSensor(Observer):
def update(self,action):
print ("Alarm Got: %s" % action)
self.runAlarm()
def runAlarm(self):
print ("Alarm Ring...")
class WaterSprinker(Observer):
def update(self,action):
print ("Sprinker Got: %s" % action)
self.runSprinker()
def runSprinker(self):
print ("Spray Water...")
class EmergencyDialer(Observer):
def update(self,action):
print ("Dialer Got: %s"%action)
self.runDialer()
def runDialer(self):
print ("Dial 119...")
Explanation: Above is the structure of the three sensor classes from the facade pattern. Analyzing the business logic carefully, the alarm, the sprinkler, and the emergency dialer all react by "observing" the state of the smoke sensor, so all three of them are observers, while the smoke sensor is the observed subject. Based on this analysis, we extract their common behavior into a generalized "Observer" class and then construct the observed subject.
The observers are as follows:
End of explanation
class Observed:
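    # Note: observers is a class-level (shared) list here, so every Observed instance appends to the same list.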
observers=[]
action=""
def addObserver(self,observer):
self.observers.append(observer)
def notifyAll(self):
for obs in self.observers:
obs.update(self.action)
class smokeSensor(Observed):
def setAction(self,action):
self.action=action
def isFire(self):
return True
Explanation: The observer defines an update interface. If the observed subject has many states, or each concrete observer has many methods, richer control can be achieved by passing parameters through update.
Next, we construct the observed subject.
End of explanation
alarm=AlarmSensor()
sprinker=WaterSprinker()
dialer=EmergencyDialer()
smoke_sensor=smokeSensor()
smoke_sensor.addObserver(alarm)
smoke_sensor.addObserver(sprinker)
smoke_sensor.addObserver(dialer)
if smoke_sensor.isFire():
smoke_sensor.setAction("On Fire!")
smoke_sensor.notifyAll()
Explanation: The observed subject first adds the observers to its observer list; when something happens, it notifies every observer via notifyAll.
The business code is as follows:
End of explanation |
13,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example, we're going to actually run a short simulation with OpenMM
saving the results to disk with MDTraj's HDF5 reporter
Obviously, running this example calculation on your machine requires
having OpenMM installed. OpenMM can be downloaded and installed from
https
Step1: And a few things from OpenMM
Step2: First, let's find a PDB for alanine dipeptide, the system we'll
be simulating. We happen to have one included in the mdtraj
package for testing named "native.pdb". Under normal
circumstances, you shouldn't have much need for mdtraj.testing.get_fn
(unless you're contributing tests to mdtraj!)
Step3: Let's use the amber99sb-ildn forcefield with implicit solvent
and a Langevin integrator. This is a relatively "standard" OpenMM
protocol for setting up a system.
Step4: Set the initial positions to the "first frame" of the PDB
file (it only has one frame). Note that while the pdb is an mdtraj
trajectory passing in its positions to OpenMM is just fine.
Step5: Let's use one of the OpenMM reporters that mdtraj provides. This is
the hdf5 reporter, which saves all kinds of information, including
the topology, positions, energies, etc to disk. To visualize the h5
trajectory with a non-hdf5 enabled app like PyMol or VMD, you can
use mdconvert on the command line to easily transform it to NetCDF, DCD,
or any other format of your preference. | Python Code:
import os
import mdtraj
import mdtraj.reporters
Explanation: In this example, we're going to actually run a short simulation with OpenMM
saving the results to disk with MDTraj's HDF5 reporter
Obviously, running this example calculation on your machine requires
having OpenMM installed. OpenMM can be downloaded and installed from
https://simtk.org/home/openmm.
Let's import some things we're going to need from mdtraj
End of explanation
from simtk import unit
import simtk.openmm as mm
from simtk.openmm import app
Explanation: And a few things from OpenMM
End of explanation
import mdtraj.testing
pdb = mdtraj.load(mdtraj.testing.get_fn('native.pdb'))
topology = pdb.topology.to_openmm()
Explanation: First, let's find a PDB for alanine dipeptide, the system we'll
be simulating. We happen to have one included in the mdtraj
package for testing named "native.pdb". Under normal
circumstances, you shouldn't have much need for mdtraj.testing.get_fn
(unless you're contributing tests to mdtraj!)
End of explanation
forcefield = app.ForceField('amber99sbildn.xml', 'amber99_obc.xml')
system = forcefield.createSystem(topology, nonbondedMethod=app.CutoffNonPeriodic)
integrator = mm.LangevinIntegrator(330*unit.kelvin, 1.0/unit.picoseconds, 2.0*unit.femtoseconds)
simulation = app.Simulation(topology, system, integrator)
Explanation: Let's use the amber99sb-ildn forcefield with implicit solvent
and a Langevin integrator. This is a relatively "standard" OpenMM
protocol for setting up a system.
End of explanation
simulation.context.setPositions(pdb.xyz[0])
simulation.context.setVelocitiesToTemperature(330*unit.kelvin)
Explanation: Set the initial positions to the "first frame" of the PDB
file (it only has one frame). Note that while the pdb is an mdtraj
trajectory passing in its positions to OpenMM is just fine.
End of explanation
if not os.path.exists('ala2.h5'):
simulation.reporters.append(mdtraj.reporters.HDF5Reporter('ala2.h5', 1000))
simulation.step(100000)
Explanation: Let's use one of the OpenMM reporters that mdtraj provides. This is
the hdf5 reporter, which saves all kinds of information, including
the topology, positions, energies, etc to disk. To visualize the h5
trajectory with a non-hdf5 enabled app like PyMol or VMD, you can
use mdconvert on the command line to easily transform it to NetCDF, DCD,
or any other format of your preference.
End of explanation |
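The same conversion can also be done from Python instead of the command line (an added sketch, not part of the original notebook):
traj = mdtraj.load('ala2.h5')
traj.save_dcd('ala2.dcd')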
13,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Tutorial
Step1: We will try summarizing a small toy example; later we will use a larger piece of text. In reality, the text is too small, but it suffices as an illustrative example.
Step2: To summarize this text, we pass the <b>raw string data</b> as input to the function "summarize", and it will return a summary.
Note
Step3: Use the "split" option if you want a list of strings instead of a single string.
Step4: You can adjust how much text the summarizer outputs via the "ratio" parameter or the "word_count" parameter. Using the "ratio" parameter, you specify what fraction of sentences in the original text should be returned as output. Below we specify that we want 50% of the original text (the default is 20%).
Step5: Using the "word_count" parameter, we specify the maximum amount of words we want in the summary. Below we have specified that we want no more than 50 words.
Step6: As mentioned earlier, this module also supports <b>keyword</b> extraction. Keyword extraction works in the same way as summary generation (i.e. sentence extraction), in that the algorithm tries to find words that are important or seem representative of the entire text. They keywords are not always single words; in the case of multi-word keywords, they are typically all nouns.
Step7: <h2>Larger example</h2>
Let us try an example with a larger piece of text. We will be using a synopsis of the movie "The Matrix", which we have taken from this IMDb page.
In the code below, we read the text file directly from a web-page using "requests". Then we produce a summary and some keywords.
Step8: If you know this movie, you see that this summary is actually quite good. We also see that some of the most important characters (Neo, Morpheus, Trinity) were extracted as keywords.
<h2>Another example</h2>
Let's try an example similar to the one above. This time, we will use the IMDb synopsis of "The Big Lebowski".
Again, we download the text and produce a summary and some keywords. | Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim.summarization import summarize
Explanation: <h1>Tutorial: automatic summarization using Gensim</h1>
This module automatically summarizes the given text, by extracting one or more important sentences from the text. In a similar way, it can also extract keywords. This tutorial will teach you to use this summarization module via some examples. First, we will try a small example, then we will try two larger ones, and then we will review the performance of the summarizer in terms of speed.
This summarizer is based on the "TextRank" algorithm, from an article by Mihalcea et al. This algorithm was later improved upon by Barrios et al. in another article, by introducing something called a "BM25 ranking function".
This tutorial assumes that you are familiar with Python and have installed Gensim.
<b>Note</b>: Gensim's summarization only works for English for now, because the text is pre-processed so that stopwords are removed and the words are stemmed, and these processes are language-dependent.
<h2>Small example</h2>
First of all, we import the function "summarize".
End of explanation
text = "Thomas A. Anderson is a man living two lives. By day he is an " + \
"average computer programmer and by night a hacker known as " + \
"Neo. Neo has always questioned his reality, but the truth is " + \
"far beyond his imagination. Neo finds himself targeted by the " + \
"police when he is contacted by Morpheus, a legendary computer " + \
"hacker branded a terrorist by the government. Morpheus awakens " + \
"Neo to the real world, a ravaged wasteland where most of " + \
"humanity have been captured by a race of machines that live " + \
"off of the humans' body heat and electrochemical energy and " + \
"who imprison their minds within an artificial reality known as " + \
"the Matrix. As a rebel against the machines, Neo must return to " + \
"the Matrix and confront the agents: super-powerful computer " + \
"programs devoted to snuffing out Neo and the entire human " + \
"rebellion. "
print ('Input text:')
print (text)
Explanation: We will try summarizing a small toy example; later we will use a larger piece of text. In reality, the text is too small, but it suffices as an illustrative example.
End of explanation
print ('Summary:')
print (summarize(text))
Explanation: To summarize this text, we pass the <b>raw string data</b> as input to the function "summarize", and it will return a summary.
Note: make sure that the string does not contain any newlines where the line breaks in a sentence. A sentence with a newline in it (i.e. a carriage return, "\n") will be treated as two sentences.
End of explanation
print (summarize(text, split=True))
Explanation: Use the "split" option if you want a list of strings instead of a single string.
End of explanation
print ('Summary:')
print (summarize(text, ratio=0.5))
Explanation: You can adjust how much text the summarizer outputs via the "ratio" parameter or the "word_count" parameter. Using the "ratio" parameter, you specify what fraction of sentences in the original text should be returned as output. Below we specify that we want 50% of the original text (the default is 20%).
End of explanation
print ('Summary:')
print (summarize(text, word_count=50))
Explanation: Using the "word_count" parameter, we specify the maximum amount of words we want in the summary. Below we have specified that we want no more than 50 words.
End of explanation
from gensim.summarization import keywords
print ('Keywords:')
print (keywords(text))
Explanation: As mentioned earlier, this module also supports <b>keyword</b> extraction. Keyword extraction works in the same way as summary generation (i.e. sentence extraction), in that the algorithm tries to find words that are important or seem representative of the entire text. The keywords are not always single words; in the case of multi-word keywords, they are typically all nouns.
End of explanation
import requests
text = requests.get('http://rare-technologies.com/the_matrix_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
Explanation: <h2>Larger example</h2>
Let us try an example with a larger piece of text. We will be using a synopsis of the movie "The Matrix", which we have taken from this IMDb page.
In the code below, we read the text file directly from a web-page using "requests". Then we produce a summary and some keywords.
End of explanation
import requests
text = requests.get('http://rare-technologies.com/the_big_lebowski_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
Explanation: If you know this movie, you see that this summary is actually quite good. We also see that some of the most important characters (Neo, Morpheus, Trinity) were extracted as keywords.
<h2>Another example</h2>
Let's try an example similar to the one above. This time, we will use the IMDb synopsis of "The Big Lebowski".
Again, we download the text and produce a summary and some keywords.
End of explanation |
13,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Space from an already existing .json input file
Step1: Probe group information and get particle index
Step2: Extract positions etc from a group
Step3: Calculate center of mass for a group
Step4: This part is purely experimental and merely to illustrate how the interface could look like | Python Code:
jsoninput = mc.InputMap('../src/examples/minimal.json')
space = mc.Space(jsoninput)
print(space.info())
print 'system volume = ',space.geo.getVolume()
Explanation: Create Space from an already existing .json input file:
End of explanation
groups = space.groupList()
for group in groups:
print group.name
print len(group)
print group.range()
Explanation: Probe group information and get particle index:
End of explanation
positions = [ np.array((space.p[i].x, space.p[i].y, space.p[i].z)) for i in groups[0].range()]
charges = [ space.p[i].charge for i in groups[0].range()]
print 'position of first atom =', positions[0]
print 'total charge =', sum(charges)
Explanation: Extract positions etc from a group
End of explanation
saltgroup = space.groupList()[0]
cm = mc.massCenter(space.geo, space.p, saltgroup)
print "center of mass = ", np.array(cm)
Explanation: Calculate center of mass for a group
End of explanation
spc = mc.Space(...)
spc.addAtomType(name='OW', weight=18, charge=-0.3, epsilon=0.03, sigma=0.3)
spc.addMoleculeType(name='water', atoms=['HW HW OW'], rigid=True)
spc.addMoleculeType(name='salt', atoms=['Na Cl'], atomic=True)
spc.addMolecules(type='water', N=100)
potential = mc.potentials.Coulomb(epsr=80, cutoff=10) +
mc.potentials.LennardJones(combinationrule='LB')
Explanation: This part is purely experimental and is merely meant to illustrate what the interface could look like
End of explanation |
13,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Biosignals Processing in Python
Welcome to the course for biosignals processing using NeuroKit and python. You'll find the necessary files to run this example in the examples section.
Import Necessary Packages
Step1: Block Paradigms
Preprocessing
Step2: df contains about 5 minutes of data recorded at 1000Hz. There are 4 channels, EDA, ECG, RSP and the Photosensor used to localize events. In the present case, there is only one event, one sequence of 5 min during which the participant was instructed to do nothing.
First thing that we're gonna do is crop that data according to the photosensor channel, to keep only the sequence of interest.
Step3: find_events returns a dict containing onsets and durations of each event. Here, it correctly detected only one event. Then, we're gonna crop our data according to that event. The create_epochs function returns a list containing epochs of data corresponding to each event. As we have only one event, we're gonna select the 0th element of that list.
Step4: Processing
Biosignals processing can be done quite easily using NeuroKit with the bio_process function. Simply provide the biosignal channels and additional channels that you want to keep (for example, the photosensor). bio_process returns a dict containing a dataframe df, including raw and processed signals, as well as features relevant to each provided signal.
Step5: Bio Features Extraction
Aside from this dataframe, bio contains also several features computed signal wise.
Heart-Rate Variability (HRV)
Many indices of HRV, a finely tuned measure of heart-brain communication, are computed.
Step6: Respiratory Sinus Arrhythmia (RSA)
TO BE DONE.
Entropy
TO BE DONE.
Heart Beats
The processing functions automatically extract each individual heartbeat, synchronized by their R peak. You can plot all of them.
Step7: Heart Rate Variability (HRV)
Step8: Event-Related Analysis
This experiment consisted of 8 events (when the photosensor signal goes down), which were 2 types of images that were shown to the participant
Step9: Find Events
First, we must find event onsets within our photosensor's signal using the find_events() function. This function requires a threshold and a cut direction (should it select events that are higher or lower than the threshold).
Step10: Create Epochs
Then, we divide our dataframe into epochs, i.e. segments of data around the event. We set our epochs to start at the event start (onset=0) and to last for 5000 data points, in our case equal to 5 s (since the signal is sampled at 1000Hz).
Step11: Create Evoked-Data
We can then iterate through the epochs and store the interesting results in a new dict that will be, at the end, converted to a dataframe.
Step12: Plot Results | Python Code:
# Import packages
import neurokit as nk
import pandas as pd
import numpy as np
import matplotlib
import seaborn as sns
# Plotting preferences
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [14.0, 10.0] # Bigger figures
sns.set_style("whitegrid") # White background
sns.set_palette(sns.color_palette("colorblind")) # Better colours
Explanation: Biosignals Processing in Python
Welcome to the course for biosignals processing using NeuroKit and python. You'll find the necessary files to run this example in the examples section.
Import Necessary Packages
End of explanation
# Download resting-state data
df = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/data/bio_rest.csv", index_col=0)
# Plot it
df.plot()
Explanation: Block Paradigms
Preprocessing
End of explanation
# We want to find events on the Photosensor channel, when it goes down (hence, cut is set to lower).
events = nk.find_events(df["Photosensor"], cut="lower")
print(events)
Explanation: df contains about 5 minutes of data recorded at 1000Hz. There are 4 channels, EDA, ECG, RSP and the Photosensor used to localize events. In the present case, there is only one event, one sequence of 5 min during which the participant was instructed to do nothing.
First thing that we're gonna do is crop that data according to the photosensor channel, to keep only the sequence of interest.
End of explanation
df = nk.create_epochs(df, events["onsets"], duration=events["durations"], onset=0)
df = df[0] # Select the first (0th) element of that list.
Explanation: find_events returns a dict containing onsets and durations of each event. Here, it correctly detected only one event. Then, we're gonna crop our data according to that event. The create_epochs function returns a list containing epochs of data corresponding to each event. As we have only one event, we're gonna select the 0th element of that list.
End of explanation
bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"])
# Plot the processed dataframe
bio["df"].plot()
Explanation: Processing
Biosignals processing can be done quite easily using NeuroKit with the bio_process function. Simply provide the biosignal channels and additional channels that you want to keep (for example, the photosensor). bio_process returns a dict containing a dataframe df, including raw and processed signals, as well as features relevant to each provided signal.
End of explanation
bio["ECG"]["HRV"]
Explanation: Bio Features Extraction
Aside from this dataframe, bio contains also several features computed signal wise.
Heart-Rate Variability (HRV)
Many indices of HRV, a finely tuned measure of heart-brain communication, are computed.
End of explanation
bio["ECG"]["Heart_Beats"]
pd.DataFrame(bio["ECG"]["Heart_Beats"]).T.plot(legend=False) # Plot all the heart beats
Explanation: Respiratory Sinus Arrhythmia (RSA)
TO BE DONE.
Entropy
TO BE DONE.
Heart Beats
The processing functions automatically extract each individual heartbeat, synchronized by their R peak. You can plot all of them.
End of explanation
# Print all the HRV indices
bio["ECG_Features"]["ECG_HRV"]
Explanation: Heart Rate Variability (HRV)
End of explanation
condition_list = ["Negative", "Negative", "Neutral", "Neutral", "Neutral", "Negative", "Negative", "Neutral"]
Explanation: Event-Related Analysis
This experiment consisted of 8 events (when the photosensor signal goes down), which were 2 types of images that were shown to the participant: "Negative" vs "Neutral". The following list is the condition order.
End of explanation
events = nk.find_events(df["Photosensor"], treshold = 3, cut="lower")
events
Explanation: Find Events
First, we must find event onsets within our photosensor's signal using the find_events() function. This function requires a threshold and a cut direction (should it select events that are higher or lower than the threshold).
End of explanation
epochs = nk.create_epochs(bio["Bio"], events["onsets"], duration=5000, onset=0)
Explanation: Create Epochs
Then, we divide our dataframe into epochs, i.e. segments of data around the event. We set our epochs to start at the event start (onset=0) and to last for 5000 data points, in our case equal to 5 s (since the signal is sampled at 1000Hz).
End of explanation
evoked = {} # Initialize an empty dict
for epoch in epochs:
evoked[epoch] = {} # Initialize an empty dict for the current epoch
evoked[epoch]["Heart_Rate"] = epochs[epoch]["Heart_Rate"].mean() # Heart Rate mean
evoked[epoch]["RSP_Rate"] = epochs[epoch]["RSP_Rate"].mean() # Respiration Rate mean
evoked[epoch]["EDA_Filtered"] = epochs[epoch]["EDA_Filtered"].mean() # EDA mean
evoked[epoch]["EDA_Max"] = max(epochs[epoch]["EDA_Filtered"]) # Max EDA value
    # SCR_Peaks are scored np.nan (NaN values) in the absence of a peak. We want to change it to 0
if np.isnan(epochs[epoch]["SCR_Peaks"].mean()):
evoked[epoch]["SCR_Peaks"] = 0
else:
evoked[epoch]["SCR_Peaks"] = epochs[epoch]["SCR_Peaks"].mean()
evoked = pd.DataFrame.from_dict(evoked, orient="index") # Convert to a dataframe
evoked["Condition"] = condition_list # Add the conditions
evoked # Print
Explanation: Create Evoked-Data
We can then iterate through the epochs and store the interesting results in a new dict that will, at the end, be converted to a dataframe.
End of explanation
sns.boxplot(x="Condition", y="Heart_Rate", data=evoked)
sns.boxplot(x="Condition", y="RSP_Rate", data=evoked)
sns.boxplot(x="Condition", y="EDA_Filtered", data=evoked)
sns.boxplot(x="Condition", y="EDA_Max", data=evoked)
sns.boxplot(x="Condition", y="SCR_Peaks", data=evoked)
Explanation: Plot Results
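For illustration only (with just 8 epochs this is not a serious analysis), a quick two-sample comparison of one of the columns built above could be sketched with scipy, assuming it is installed:
from scipy import stats
negative = evoked[evoked["Condition"] == "Negative"]["EDA_Max"]
neutral = evoked[evoked["Condition"] == "Neutral"]["EDA_Max"]
print(stats.ttest_ind(negative, neutral))  # independent-samples t-test on max EDA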
End of explanation |
13,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy random snippets
Most comments are taken from the Numpy documentation.
Import directive
Step1: Tool functions
Step2: Discrete distributions
Bernoulli distribution
Step3: Binomial distribution
Samples are drawn from a binomial distribution with specified parameters,
$n$ trials and $p$ probability of success where $n \in \mathbb{N}$
and $p$ is in the interval $[0,1]$.
The probability density for the binomial distribution is
$$P(N) = \binom{n}{N}p^N(1-p)^{n-N}$$
where $n$ is the number of trials, $p$ is the probability
of success, and $N$ is the number of successes.
When estimating the standard error of a proportion in a population by
using a random sample, the normal distribution works well unless the
product $pn <=5$, where $p$ = population proportion estimate, and n =
number of samples, in which case the binomial distribution is used
instead. For example, a sample of 15 people shows 4 who are left
handed, and 11 who are right handed. Then $p = 4/15 = 27\%$ and $0.27 \times 15 = 4$,
so the binomial distribution should be used in this case.
See https
Step4: Hypergeometric distribution
Samples are drawn from a hypergeometric distribution with specified
parameters, ngood (ways to make a good selection), nbad (ways to make
a bad selection), and nsample = number of items sampled, which is less
than or equal to the sum ngood + nbad.
ngood
Step5: Poisson distribution
The Poisson distribution is the limit of the binomial distribution for large N
Step6: Lambda=2
Step7: Lambda=3
Step8: Lambda=4
Step9: Lambda=5
Step10: Geometric distribution
Bernoulli trials are experiments with one of two outcomes
Step11: Pascal distribution (negative binomial distribution)
Samples are drawn from a negative binomial distribution with specified
parameters, $n$ trials and $p$ probability of success where $n$ is an
integer > 0 and $p$ is in the interval $[0, 1]$.
The probability density for the negative binomial distribution is
$$P(N;n,p) = \binom{N+n-1}{n-1}p^{n}(1-p)^{N},$$
where $n-1$ is the number of successes, $p$ is the
probability of success, and $N+n-1$ is the number of trials.
The negative binomial distribution gives the probability of $n-1$
successes and $N$ failures in $N+n-1$ trials, and success on the $(N+n)$th
trial.
If one throws a die repeatedly until the third time a "1" appears,
then the probability distribution of the number of non-"1"s that
appear before the third "1" is a negative binomial distribution.
Step12: Uniform distribution
Step13: Miscellaneous
Step14: Continuous distribution
Uniform distribution
Step15: Normal distribution
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution.
The probability density for the Gaussian distribution is
$$p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },$$
where $\mu$ is the mean and $\sigma$ the standard
deviation. The square of the standard deviation, $\sigma^2$,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
$x + \sigma$ and $x - \sigma$).
This implies that numpy.random.normal is more likely to return samples
lying close to the mean, rather than those far away.
Step16: Log normal distribution
Draw samples from a log-normal distribution with specified mean,
standard deviation, and array shape. Note that the mean and standard
deviation are not the values for the distribution itself, but of the
underlying normal distribution it is derived from.
A variable $x$ has a log-normal distribution if $log(x)$ is normally
distributed. The probability density function for the log-normal
distribution is
Step17: Power distribution
Draws samples in $[0, 1]$ from a power distribution with positive exponent $a - 1$ (with $a > 0$).
Also known as the power function distribution.
The probability density function is
$$P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.$$
The power function distribution is just the inverse of the Pareto
distribution. It may also be seen as a special case of the Beta
distribution.
It is used, for example, in modeling the over-reporting of insurance
claims.
Step18: Beta distribution
Exponential distribution
Its probability density function is
$$f\left( x; \frac{1}{\beta} \right) = \frac{1}{\beta} \exp \left( \frac{-x}{\beta} \right)$$
for $x > 0$ and 0 elsewhere.
$\beta$ is the scale parameter, which is the inverse of the rate parameter $\lambda = 1/\beta$.
The rate parameter is an alternative, widely used parameterization of the exponential distribution.
The exponential distribution is a continuous analogue of the
geometric distribution. It describes many common situations, such as
the size of raindrops measured over many rainstorms, or the time
between page requests to Wikipedia.
The scale parameter, $\beta = 1/\lambda$.
Step19: Chi-square distribution
When df independent random variables, each with standard normal
distributions (mean=0, variance=1), are squared and summed, the
resulting distribution is chi-square.
This distribution is often used in hypothesis testing.
The variable obtained by summing the squares of df independent,
standard normally distributed random variables | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Numpy random snippets
Most comments are taken from the Numpy documentation.
Import directive
End of explanation
def plot(data, bins=30):
plt.hist(data, bins)
plt.show()
Explanation: Tool functions
End of explanation
def bernoulli(p=None, size=1):
return np.random.binomial(n=1, p=p, size=size)
bernoulli(p=0.5, size=100)
Explanation: Discrete distributions
Bernoulli distribution
End of explanation
np.random.binomial(n=10, p=0.5, size=100)
data = np.random.binomial(n=10, p=0.25, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=10, p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=10, p=0.75, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.binomial(n=25, p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Binomial distribution
Samples are drawn from a binomial distribution with specified parameters,
$n$ trials and $p$ probability of success where $n \in \mathbb{N}$
and $p$ is in the interval $[0,1]$.
The probability density for the binomial distribution is
$$P(N) = \binom{n}{N}p^N(1-p)^{n-N}$$
where $n$ is the number of trials, $p$ is the probability
of success, and $N$ is the number of successes.
When estimating the standard error of a proportion in a population by
using a random sample, the normal distribution works well unless the
product $pn <=5$, where $p$ = population proportion estimate, and n =
number of samples, in which case the binomial distribution is used
instead. For example, a sample of 15 people shows 4 who are left
handed, and 11 who are right handed. Then $p = 4/15 = 27\%$ and $0.27 \times 15 = 4$,
so the binomial distribution should be used in this case.
See https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.binomial.html
End of explanation
np.random.hypergeometric(ngood=15, nbad=15, nsample=15, size=100)
data = np.random.hypergeometric(ngood=15, nbad=15, nsample=15, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Hypergeometric distribution
Samples are drawn from a hypergeometric distribution with specified
parameters, ngood (ways to make a good selection), nbad (ways to make
a bad selection), and nsample = number of items sampled, which is less
than or equal to the sum ngood + nbad.
ngood : Number of ways to make a good selection. Must be nonnegative.
nbad : Number of ways to make a bad selection. Must be nonnegative.
nsample : Number of items sampled. Must be at least 1 and at most ngood + nbad.
The probability density for the Hypergeometric distribution is
$$P(x) = \frac{\binom{m}{n}\binom{N-m}{n-x}}{\binom{N}{n}},$$
where $0 \le x \le m$ and $n+m-N \le x \le n$
for $P(x)$ the probability of x successes, n = ngood, m = nbad, and
N = number of samples.
Consider an urn with black and white marbles in it, ngood of them
black and nbad are white. If you draw nsample balls without
replacement, then the hypergeometric distribution describes the
distribution of black balls in the drawn sample.
Note that this distribution is very similar to the binomial
distribution, except that in this case, samples are drawn without
replacement, whereas in the Binomial case samples are drawn with
replacement (or the sample space is infinite). As the sample space
becomes large, this distribution approaches the binomial.
Suppose you have an urn with 15 white and 15 black marbles.
If you pull 15 marbles at random, how likely is it that
12 or more of them are one color ?
>>> s = np.random.hypergeometric(15, 15, 15, 100000)
>>> sum(s>=12)/100000. + sum(s<=3)/100000.
# answer = 0.003 ... pretty unlikely!
End of explanation
np.random.poisson(lam=1, size=100)
data = np.random.poisson(lam=1, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Poisson distribution
The Poisson distribution is the limit of the binomial distribution for large N:
$$f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}$$
For events with an expected separation $\lambda$ the Poisson distribution $f(k; \lambda)$ describes the probability of $k$ events occurring within the observed interval $\lambda$.
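As a rough illustration of that limit (a sketch, with arbitrary values of lambda and n): drawing from a binomial with large n and p = lambda/n should give samples that look like Poisson(lambda) samples.
lam, n = 3.0, 10000
binom_approx = np.random.binomial(n=n, p=lam / n, size=100000)  # binomial with small p, large n
poisson_draw = np.random.poisson(lam=lam, size=100000)
print(binom_approx.mean(), binom_approx.var())  # both close to lam
print(poisson_draw.mean(), poisson_draw.var())  # both close to lam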
Because the output is limited to the range of the C long type, a ValueError is raised when lam is within 10 sigma of the maximum representable value.
Lambda=1
End of explanation
data = np.random.poisson(lam=2, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Lambda=2
End of explanation
data = np.random.poisson(lam=3, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Lambda=3
End of explanation
data = np.random.poisson(lam=4, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Lambda=4
End of explanation
data = np.random.poisson(lam=5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Lambda=5
End of explanation
np.random.geometric(p=0.5, size=100)
data = np.random.geometric(p=0.5, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Geometric distribution
Bernoulli trials are experiments with one of two outcomes:
success or failure (an example of such an experiment is flipping
a coin). The geometric distribution models the number of trials
that must be run in order to achieve success. It is therefore
supported on the positive integers, $k = 1, 2, \dots$
The probability mass function of the geometric distribution is
$$f(k) = (1 - p)^{k - 1} p$$
where $p$ is the probability of success of an individual trial.
End of explanation
np.random.negative_binomial(n=1, p=0.1, size=100)
data = np.random.negative_binomial(n=1, p=0.1, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Pascal distribution (negative binomial distribution)
Samples are drawn from a negative binomial distribution with specified
parameters, $n$ trials and $p$ probability of success where $n$ is an
integer > 0 and $p$ is in the interval $[0, 1]$.
The probability density for the negative binomial distribution is
$$P(N;n,p) = \binom{N+n-1}{n-1}p^{n}(1-p)^{N},$$
where $n-1$ is the number of successes, $p$ is the
probability of success, and $N+n-1$ is the number of trials.
The negative binomial distribution gives the probability of $n-1$
successes and $N$ failures in $N+n-1$ trials, and success on the $(N+n)$th
trial.
If one throws a die repeatedly until the third time a "1" appears,
then the probability distribution of the number of non-"1"s that
appear before the third "1" is a negative binomial distribution.
End of explanation
np.random.choice(range(10), size=100, replace=True, p=None)
data = np.random.choice(range(25), size=100000, replace=True, p=None)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Uniform distribution
End of explanation
np.random.choice(range(10), size=10, replace=False, p=None)
np.random.choice([1, 2, 3], size=100, replace=True, p=[0.8, 0.1, 0.1])
Explanation: Miscellaneous
End of explanation
np.random.uniform(low=0.0, high=1.0, size=50)
Explanation: Continuous distribution
Uniform distribution
End of explanation
np.random.normal(loc=0.0, scale=1.0, size=50)
data = np.random.normal(loc=0.0, scale=1.0, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.normal(loc=2.0, scale=1.0, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.normal(loc=0.0, scale=1.5, size=100000)
plot(data, bins=np.arange(-5, 6, 0.2))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Normal distribution
The probability density function of the normal distribution, first
derived by De Moivre and 200 years later by both Gauss and Laplace
independently, is often called the bell curve because of
its characteristic shape (see the example below).
The normal distributions occurs often in nature. For example, it
describes the commonly occurring distribution of samples influenced
by a large number of tiny, random disturbances, each with its own
unique distribution.
The probability density for the Gaussian distribution is
$$p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },$$
where $\mu$ is the mean and $\sigma$ the standard
deviation. The square of the standard deviation, $\sigma^2$,
is called the variance.
The function has its peak at the mean, and its "spread" increases with
the standard deviation (the function reaches 0.607 times its maximum at
$x + \sigma$ and $x - \sigma$).
This implies that numpy.random.normal is more likely to return samples
lying close to the mean, rather than those far away.
End of explanation
np.random.lognormal(mean=0.0, sigma=1.0, size=50)
data = np.random.lognormal(mean=0.0, sigma=1.0, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.lognormal(mean=2.0, sigma=1.0, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.lognormal(mean=0.0, sigma=1.5, size=100000)
plot(data, bins=np.arange(0, 10, 0.2))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Log normal distribution
Draw samples from a log-normal distribution with specified mean,
standard deviation, and array shape. Note that the mean and standard
deviation are not the values for the distribution itself, but of the
underlying normal distribution it is derived from.
A variable $x$ has a log-normal distribution if $log(x)$ is normally
distributed. The probability density function for the log-normal
distribution is:
$$p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}$$
where $\mu$ is the mean and $\sigma$ is the standard
deviation of the normally distributed logarithm of the variable.
A log-normal distribution results if a random variable is the product
of a large number of independent, identically-distributed variables in
the same way that a normal distribution results if the variable is the
sum of a large number of independent, identically-distributed
variables.
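A quick sanity check of that definition (sketch): taking the log of log-normal samples should produce a roughly Gaussian histogram with the mean and sigma used above.
logged = np.log(np.random.lognormal(mean=0.0, sigma=1.0, size=100000))
plot(logged, bins=np.arange(-5, 5, 0.2))
print(logged.mean(), logged.std())  # close to 0 and 1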
End of explanation
np.random.power(a=1.0, size=50)
data = np.random.power(a=0.25, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=0.5, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=1.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=2.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.power(a=5.0, size=100000)
plot(data, bins=np.arange(0, 2, 0.05))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Power distribution
Draws samples in $[0, 1]$ from a power distribution with positive exponent $a - 1$ (with $a > 0$).
Also known as the power function distribution.
The probability density function is
$$P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.$$
The power function distribution is just the inverse of the Pareto
distribution. It may also be seen as a special case of the Beta
distribution.
It is used, for example, in modeling the over-reporting of insurance
claims.
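To illustrate the Pareto relationship mentioned above (a sketch, with a = 5 chosen arbitrarily): samples of 1/(1 + Pareto(a)) should be distributed like power(a) samples.
a = 5.0
via_pareto = 1.0 / (1.0 + np.random.pareto(a, 100000))
plot(via_pareto, bins=np.arange(0, 2, 0.05))  # compare with the power(a=5) histogram above
print("mean:", via_pareto.mean())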
End of explanation
np.random.exponential(scale=1.0, size=50)
data = np.random.exponential(scale=1.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=2.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=5.0, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.exponential(scale=0.5, size=100000)
plot(data, bins=np.arange(0, 30, 0.5))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Beta distribution
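No code accompanies this heading yet; a minimal sketch using numpy's beta sampler (shape parameters a and b chosen arbitrarily here) could be:
data = np.random.beta(a=2.0, b=5.0, size=100000)
plot(data, bins=np.arange(0, 1.02, 0.02))
print("mean:", data.mean())
print("std:", data.std())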
Exponential distribution
Its probability density function is
$$f\left( x; \frac{1}{\beta} \right) = \frac{1}{\beta} \exp \left( \frac{-x}{\beta} \right)$$
for $x > 0$ and 0 elsewhere.
$\beta$ is the scale parameter, which is the inverse of the rate parameter $\lambda = 1/\beta$.
The rate parameter is an alternative, widely used parameterization of the exponential distribution.
The exponential distribution is a continuous analogue of the
geometric distribution. It describes many common situations, such as
the size of raindrops measured over many rainstorms, or the time
between page requests to Wikipedia.
The scale parameter, $\beta = 1/\lambda$.
End of explanation
np.random.chisquare(df=1.0, size=50)
data = np.random.chisquare(df=1.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.chisquare(df=2.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
data = np.random.chisquare(df=5.0, size=10000)
plot(data, bins=range(30))
print("mean:", data.mean())
print("std:", data.std())
Explanation: Chi-square distribution
When df independent random variables, each with standard normal
distributions (mean=0, variance=1), are squared and summed, the
resulting distribution is chi-square.
This distribution is often used in hypothesis testing.
The variable obtained by summing the squares of df independent,
standard normally distributed random variables:
$$Q = \sum_{i=0}^{\mathtt{df}} X^2_i$$
is chi-square distributed, denoted
$$Q \sim \chi^2_k.$$
The probability density function of the chi-squared distribution is
$$p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},$$
where $\Gamma$ is the gamma function,
$$\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.$$
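A quick empirical check of the sum-of-squares definition above (sketch):
df = 5
manual = (np.random.normal(size=(100000, df)) ** 2).sum(axis=1)  # sum of df squared standard normals
plot(manual, bins=range(30))
print(manual.mean())  # close to df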
End of explanation |
13,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom training and online prediction
<table align="left">
<td>
<a href="https
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the pillow library for loading images.
Step3: Install the numpy library for manipulation of image data.
Step4: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
Step5: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Otherwise, set your project ID here.
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
Step13: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step14: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
Step15: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard
Step16: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image
Step17: Training script
In the next cell, you will write the contents of the training script, task.py. In summary
Step18: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters
Step19: Deploy the model
Before you use your model to make predictions, you need to deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource. This will do two things
Step20: Make an online prediction request
Send an online prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset
Step21: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.
x_test
Step22: Send the prediction request
Now that you have test images, you can use them to send a prediction request. Use the Endpoint object's predict function, which takes the following parameters
Step23: Undeploy the model
To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter
Step24: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Custom training and online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom image classification model for online prediction.
Dataset
The dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.
The steps performed include:
Create a Vertex AI custom job for training a model.
Train a TensorFlow model.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest (preview) version of Vertex SDK for Python.
End of explanation
! pip install {USER_FLAG} --upgrade google-cloud-storage
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip install {USER_FLAG} --upgrade pillow
Explanation: Install the pillow library for loading images.
End of explanation
! pip install {USER_FLAG} --upgrade numpy
Explanation: Install the numpy library for manipulation of image data.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
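If you prefer the command line, an equivalent setup could look roughly like this (a sketch; the service-account name my-vertex-sa is arbitrary, and the roles mirror the console roles listed above):
! gcloud iam service-accounts create my-vertex-sa
! gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:my-vertex-sa@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/aiplatform.admin"
! gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:my-vertex-sa@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/storage.objectAdmin"
! gcloud iam service-accounts keys create key.json --iam-account="my-vertex-sa@$PROJECT_ID.iam.gserviceaccount.com"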
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
End of explanation
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
See the locations where accelerators are available.
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.
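If no GPU is available, the CPU-only option mentioned above would be set like this (sketch; the container images chosen in the next step would then also need to be CPU variants):
TRAIN_GPU, TRAIN_NGPU = (None, None)
DEPLOY_GPU, DEPLOY_NGPU = (None, None)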
End of explanation
TRAIN_VERSION = "tf-gpu.2-1"
DEPLOY_VERSION = "tf2-gpu.2-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
End of explanation
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
Explanation: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Define the command args for the training script
Prepare the command-line arguments to pass to your training script.
- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
%%writefile task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
MODEL_DIR = os.getenv("AIP_MODEL_DIR")
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(MODEL_DIR)
Explanation: Training script
In the next cell, you will write the contents of the training script, task.py. In summary:
Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(MODEL_DIR)) to the specified model directory.
End of explanation
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path="task.py",
container_uri=TRAIN_IMAGE,
requirements=["tensorflow_datasets==1.3.0"],
model_serving_container_image_uri=DEPLOY_IMAGE,
)
MODEL_DISPLAY_NAME = "cifar10-" + TIMESTAMP
# Start the training
if TRAIN_GPU:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
)
else:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_count=0,
)
Explanation: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.
Use the run function to start training, which takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.
End of explanation
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU.name,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Before you use your model to make predictions, you need to deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource. This will do two things:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
The function takes the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The minimum number of compute instances to provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
Traffic split
The traffic_split parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set the percentage of traffic that goes to each instance.
You can use a traffic split to introduce a new model gradually into production. For example, if you had one existing model in production with 100% of the traffic, you could deploy a new model to the same endpoint, direct 10% of traffic to it, and reduce the original model's traffic to 90%. This allows you to monitor the new model's performance while minimizing the disruption to the majority of users.
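For instance, splitting traffic between this new model and an already-deployed one might look like the following (the existing model ID shown is purely hypothetical):
TRAFFIC_SPLIT = {"0": 10, "1234567890123456789": 90}  # hypothetical ID of a model already on the endpoint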
Compute instance scaling
You can specify a single instance (or node) to serve your online prediction requests. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.
If you want to use multiple nodes to serve your online prediction requests, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
Endpoint
The method will block until the model is deployed and eventually return an Endpoint object. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
# Download the images
! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .
Explanation: Make an online prediction request
Send an online prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset:
End of explanation
import numpy as np
from PIL import Image
# Load image data
IMAGE_DIRECTORY = "cifar_test_images"
image_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(".jpg")]
# Decode JPEG images into numpy arrays
image_data = [
np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files
]
# Scale and convert to expected format
x_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]
# Extract labels from image name
y_test = [int(file.split("_")[1]) for file in image_files]
Explanation: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.
x_test:
Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:
You can extract the labels from the image filenames. Each image's filename format is "image_{LABEL}_{IMAGE_NUMBER}.jpg"
End of explanation
predictions = endpoint.predict(instances=x_test)
y_predicted = np.argmax(predictions.predictions, axis=1)
correct = sum(y_predicted == np.array(y_test))
total = len(y_predicted)
print(
    f"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}"
)
Explanation: Send the prediction request
Now that you have test images, you can use them to send a prediction request. Use the Endpoint object's predict function, which takes the following parameters:
instances: A list of image instances. According to your custom model, each image instance should be a 3-dimensional matrix of floats. This was prepared in the previous step.
The predict function returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
Confidence level for the prediction (predictions), between 0 and 1, for each of the ten classes.
You can then run a quick evaluation on the prediction results:
1. np.argmax: Convert each list of confidence levels to a label
2. Compare the predicted labels to the actual labels
3. Calculate accuracy as correct/total
End of explanation
deployed_model_id = endpoint.list_models()[0].id
endpoint.undeploy(deployed_model_id=deployed_model_id)
Explanation: Undeploy the model
To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed. You can retrieve the deployed models using the endpoint's deployed_models property.
Since this is the only deployed model on the Endpoint resource, you can omit traffic_split.
End of explanation
delete_training_job = True
delete_model = True
delete_endpoint = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
if delete_training_job:
    job.delete()
# Delete the model
if delete_model:
    model.delete()
# Delete the endpoint
if delete_endpoint:
    endpoint.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Endpoint
Cloud Storage Bucket
End of explanation |
13,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recurrent Neural Networks in Theano
Credits
Step1: We now define a class that uses scan to initialize an RNN and apply it to a sequence of data vectors. The constructor initializes the shared variables after which the instance can be called on a symbolic variable to construct an RNN graph. Note that this class only handles the computation of the hidden layer activations. We'll define a set of output weights later.
Step2: For visualization purposes and to keep the optimization time managable, we will train the RNN on a short synthetic chaotic time series. Let's first have a look at the data
Step3: To train an RNN model on this sequences, we need to generate a theano graph that computes the cost and its gradient. In this case, the task will be to predict the next time step and the error objective will be the mean squared error (MSE). We also need to define shared variables for the output weights. Finally, we also add a regularization term to the cost.
Step4: We now compile the function that will update the parameters of the model using gradient descent.
Step5: We can now train the network by supplying this function with our data and calling it repeatedly.
Step6: Since we're only looking at a very small toy problem here, the model probably already memorized the train data quite well. Let's find out by plotting the predictions of the network
Step7: Small scale optimizations of this type often benefit from more advanced second order methods. The following block defines some functions that allow you to experiment with off-the-shelf optimization routines. In this case we used BFGS.
Step8: Generating sequences
Predicting a single step ahead is a relatively easy task. It would be more interesting to see if the network actually learned how to generate multiple time steps such that it can continue the sequence.
Write code that generates the next 1000 examples after processing the train sequence. | Python Code:
%matplotlib inline
from synthetic import mackey_glass
import matplotlib.pyplot as plt
import theano
import theano.tensor as T
import numpy
floatX = theano.config.floatX
Explanation: Recurrent Neural Networks in Theano
Credits: Forked from summerschool2015 by mila-udem
First, we import some dependencies:
End of explanation
class SimpleRNN(object):
def __init__(self, input_dim, recurrent_dim):
w_xh = numpy.random.normal(0, .01, (input_dim, recurrent_dim))
w_hh = numpy.random.normal(0, .02, (recurrent_dim, recurrent_dim))
self.w_xh = theano.shared(numpy.asarray(w_xh, dtype=floatX), name='w_xh')
self.w_hh = theano.shared(numpy.asarray(w_hh, dtype=floatX), name='w_hh')
self.b_h = theano.shared(numpy.zeros((recurrent_dim,), dtype=floatX), name='b_h')
self.parameters = [self.w_xh, self.w_hh, self.b_h]
def _step(self, input_t, previous):
return T.tanh(T.dot(previous, self.w_hh) + input_t)
def __call__(self, x):
x_w_xh = T.dot(x, self.w_xh) + self.b_h
result, updates = theano.scan(self._step,
sequences=[x_w_xh],
outputs_info=[T.zeros_like(self.b_h)])
return result
Explanation: We now define a class that uses scan to initialize an RNN and apply it to a sequence of data vectors. The constructor initializes the shared variables after which the instance can be called on a symbolic variable to construct an RNN graph. Note that this class only handles the computation of the hidden layer activations. We'll define a set of output weights later.
End of explanation
data = numpy.asarray(mackey_glass(2000)[0], dtype=floatX)
plt.plot(data)
plt.show()
data_train = data[:1500]
data_val = data[1500:]
Explanation: For visualization purposes and to keep the optimization time managable, we will train the RNN on a short synthetic chaotic time series. Let's first have a look at the data:
End of explanation
w_ho_np = numpy.random.normal(0, .01, (15, 1))
w_ho = theano.shared(numpy.asarray(w_ho_np, dtype=floatX), name='w_ho')
b_o = theano.shared(numpy.zeros((1,), dtype=floatX), name='b_o')
x = T.matrix('x')
my_rnn = SimpleRNN(1, 15)
hidden = my_rnn(x)
prediction = T.dot(hidden, w_ho) + b_o
parameters = my_rnn.parameters + [w_ho, b_o]
l2 = sum((p**2).sum() for p in parameters)
mse = T.mean((prediction[:-1] - x[1:])**2)
cost = mse + .0001 * l2
gradient = T.grad(cost, wrt=parameters)
Explanation: To train an RNN model on this sequences, we need to generate a theano graph that computes the cost and its gradient. In this case, the task will be to predict the next time step and the error objective will be the mean squared error (MSE). We also need to define shared variables for the output weights. Finally, we also add a regularization term to the cost.
End of explanation
lr = .3
updates = [(par, par - lr * gra) for par, gra in zip(parameters, gradient)]
update_model = theano.function([x], cost, updates=updates)
get_cost = theano.function([x], mse)
predict = theano.function([x], prediction)
get_hidden = theano.function([x], hidden)
get_gradient = theano.function([x], gradient)
Explanation: We now compile the function that will update the parameters of the model using gradient descent.
End of explanation
for i in range(1001):
mse_train = update_model(data_train)
if i % 100 == 0:
mse_val = get_cost(data_val)
        print('Epoch {}: train mse: {} validation mse: {}'.format(i, mse_train, mse_val))
Explanation: We can now train the network by supplying this function with our data and calling it repeatedly.
End of explanation
predict = theano.function([x], prediction)
prediction_np = predict(data)
plt.plot(data[1:], label='data')
plt.plot(prediction_np, label='prediction')
plt.legend()
plt.show()
Explanation: Since we're only looking at a very small toy problem here, the model probably already memorized the train data quite well. Let's find out by plotting the predictions of the network:
End of explanation
def vector_to_params(v):
return_list = []
offset = 0
# note the global variable here
for par in parameters:
par_size = numpy.product(par.get_value().shape)
return_list.append(v[offset:offset+par_size].reshape(par.get_value().shape))
offset += par_size
return return_list
def set_params(values):
for parameter, value in zip(parameters, values):
parameter.set_value(numpy.asarray(value, dtype=floatX))
def f_obj(x):
values = vector_to_params(x)
set_params(values)
return get_cost(data_train)
def f_prime(x):
values = vector_to_params(x)
set_params(values)
grad = get_gradient(data_train)
return numpy.asarray(numpy.concatenate([var.flatten() for var in grad]), dtype='float64')
from scipy.optimize import fmin_bfgs
x0 = numpy.asarray(numpy.concatenate([p.get_value().flatten() for p in parameters]), dtype='float64')
result = fmin_bfgs(f_obj, x0, f_prime)
print('train mse: {} validation mse: {}'.format(get_cost(data_train), get_cost(data_val)))
Explanation: Small scale optimizations of this type often benefit from more advanced second order methods. The following block defines some functions that allow you to experiment with off-the-shelf optimization routines. In this case we used BFGS.
End of explanation
x_t = T.vector()
h_p = T.vector()
preactivation = T.dot(x_t, my_rnn.w_xh) + my_rnn.b_h
h_t = my_rnn._step(preactivation, h_p)
o_t = T.dot(h_t, w_ho) + b_o
single_step = theano.function([x_t, h_p], [o_t, h_t])
def generate(single_step, x_t, h_p, n_steps):
output = numpy.zeros((n_steps, 1))
for output_t in output:
x_t, h_p = single_step(x_t, h_p)
output_t[:] = x_t
return output
output = predict(data_train)
hidden = get_hidden(data_train)
output = generate(single_step, output[-1], hidden[-1], n_steps=200)
plt.plot(output)
plt.plot(data_val[:200])
plt.show()
Explanation: Generating sequences
Predicting a single step ahead is a relatively easy task. It would be more interesting to see whether the network actually learned how to generate multiple time steps so that it can continue the sequence.
Write code that generates the next 1000 examples after processing the train sequence.
End of explanation |
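# Possible solution sketch for the exercise above (not part of the original notebook):
# reuse the generate helper with n_steps=1000, seeded with the last prediction and
# hidden state obtained from the train sequence.
last_output = predict(data_train)[-1]
last_hidden = get_hidden(data_train)[-1]
continuation = generate(single_step, last_output, last_hidden, n_steps=1000)
plt.plot(continuation)
plt.plot(data_val)   # only ~500 validation points exist here, shown for comparison
plt.show()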
13,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
This notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding them in notebooks). It is of course no substitute for the thorough official Matplotlib documentation.
Initialization
We need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook. Otherwise they would generate objects instead of being displayed in the interface; objects that we can later write to a file or display explicitly with plt.show().
This is done by the following declaration
Step1: Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how should it be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
Step2: Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available
Step3: Simple plots
Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
Step4: And we plot it
Step5: We can extensively alter the aspect of the plot. For instance, we can add markers and change color
Step6: Matplotlib syntax
Matplotlib commands have two variants | Python Code:
%matplotlib inline
Explanation: Matplotlib
This notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding them in notebooks). It is of course no substitute for the thorough official Matplotlib documentation.
Initialization
We need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook. Otherwise they would generate objects instead of being displayed in the interface; objects that we can later write to a file or display explicitly with plt.show().
This is done by the following declaration:
End of explanation
import matplotlib.pyplot as plt
Explanation: Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how should it be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
End of explanation
from __future__ import print_function
print(plt.style.available)
# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes
plt.style.use('bmh')
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['figure.figsize'] = (15, 5)
Explanation: Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available:
End of explanation
import numpy as np
x = np.arange( -10, 11 )
y = x*x
Explanation: Simple plots
Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
End of explanation
plt.plot(x,y)
plt.xlabel('x');
plt.ylabel('x square');
Explanation: And we plot it
End of explanation
plt.plot(x,y,'ro-');
Explanation: We can extensively alter the aspect of the plot. For instance, we can add markers and change color:
End of explanation
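# The same styling can also be written with explicit keyword arguments
# (an illustrative alternative, not in the original notebook):
plt.plot(x, y, color="red", marker="o", linestyle="-");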
# Create a figure object
fig = plt.figure()
# Add a graph to the figure. We get an axes object
ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)
# Create two vectors: x, y
x = np.linspace(0, 10, 1000)
y = np.sin(x)
# Plot those vectors on the axes we have
ax.plot(x, y)
# Add another plot to the same axes
y2 = np.cos(x)
ax.plot(x, y2)
# Modify the axes
ax.set_ylim(-1.5, 1.5)
# Add labels
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
ax.set_title("Sinusoids")
# Add a legend
ax.legend(['sine', 'cosine']);
Explanation: Matplotlib syntax
Matplotlib commands have two variants:
* A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above.
* An object-oriented syntax, more complicated but somehow more powerful
The next cell shows an example of the object-oriented syntax
End of explanation |
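# Illustrative extra (not in the original notebook): the introduction mentions sending
# figures to a file; with the object-oriented interface above this is a one-liner.
# The file name below is just a placeholder.
fig.savefig("sinusoids.png", dpi=150, bbox_inches="tight")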
13,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
As always, we load things from files so we don't have to set them up again.
Step1: The flux_pairs variable is a list of 2-tuples, where the first element is the state we're calculating the flux out of, and the second element is the interface we're calculating the flux through.
Step2: Set up the simulation and run it!
Step3: Now we move on to the analysis. | Python Code:
old = paths.Storage("mistis.nc", 'r')
engine = old.engines[0]
network = old.networks[0]
states = set(network.initial_states + network.final_states)
# must ensure that the diskcache is disabled in order to save,
# otherwise it looks for things that aren't there!
cvs = old.cvs[:]
for cv in cvs:
cv.disable_diskcache()
Explanation: As always, we load things from files so we don't have to set them up again.
End of explanation
flux_pairs = [(t.stateA, t.interfaces[0]) for t in network.transitions.values()]
Explanation: The flux_pairs variable is a list of 2-tuples, where the first element is the state we're calculating the flux out of, and the second element is the interface we're calculating the flux through.
End of explanation
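# Illustrative peek (not in the original notebook): names of the states and interfaces
# in the pairs just built.
for state, interface in flux_pairs:
    print(state.name, interface.name)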
sim = paths.DirectSimulation(
storage=None,
engine=engine,
states=states,
flux_pairs=flux_pairs,
initial_snapshot=old.snapshots[0]
)
%%time
sim.run(150000) # 30 sec
#sim.run(1500000) # 6 min
#sim.run(15000000) # 60 min
#sim.run(150000000) # 10 hr
#sim.run(800000000) # 2 days
Explanation: Set up the simulation and run it!
End of explanation
sim.rate_matrix
sim.n_transitions
fluxes = sim.fluxes
for f in fluxes:
    print(f[0].name, f[1].name, fluxes[f])
sim.n_flux_events
sim.results
output = paths.Storage("direct_simulation.nc", 'w')
output.save(old.snapshots[0])
output.save(sim)
output.tag['direct_results'] = sim.results
output.close()
Explanation: Now we move on to the analysis.
End of explanation |
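# Possible follow-up (a sketch, not in the original notebook): the tagged results saved
# above can presumably be read back from direct_simulation.nc in a later session.
readback = paths.Storage("direct_simulation.nc", 'r')
results = readback.tag['direct_results']   # assumed symmetric to the tag assignment above
readback.close()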
13,795 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how to delete a particular row in dataframe using python
| Python Code::
dataFrame = dataFrame.drop(columns)
|
13,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates pymatgen's functionality in terms of creating and editing molecules, as well as its integration with OpenBabel. For the latter, please note that you will need to have openbabel with python bindings installed. Please refer to pymatgen's documentation for installation details.
Molecules
Step1: Openbabel interface
This section demonstrates pymatgen's integration with openbabel.
Step2: Input/Output
Pymatgen has built-in support for the XYZ and Gaussian, NWchem file formats. It also has support for most other file formats if you have openbabel with Python bindings installed.
Step3: For more fine-grained control over output, you can use the underlying IO classes Gaussian and Nwchem, two commonly used computational chemistry programs. | Python Code:
from pymatgen import Molecule
# Create a methane molecule.
coords = [[0.000000, 0.000000, 0.000000],
[0.000000, 0.000000, 1.089000],
[1.026719, 0.000000, -0.363000],
[-0.513360, -0.889165, -0.363000],
[-0.513360, 0.889165, -0.363000]]
mol = Molecule(["C", "H", "H", "H", "H"], coords)
print(mol)
# A Molecule is simply a list of Sites.
print(mol[0])
print(mol[1])
# Break a Molecule into two by breaking a bond.
for frag in mol.break_bond(0, 1):
print(frag)
# Getting neighbors that are within 3 angstroms from C atom.
print(mol.get_neighbors(mol[0], 3))
#Detecting bonds
print(mol.get_covalent_bonds())
# If you need to run the molecule in a box with a periodic boundary condition
# code, you can generate the boxed structure as follows (in a 10Ax10Ax10A box)
structure = mol.get_boxed_structure(10, 10, 10)
print(structure)
# Writing to XYZ files (easy to open with many molecule file viewers)
from pymatgen.io.xyz import XYZ
xyz = XYZ(mol)
xyz.write_file("methane.xyz")
Explanation: Introduction
This notebook demonstrates pymatgen's functionality in terms of creating and editing molecules, as well as its integration with OpenBabel. For the latter, please note that you will need to have openbabel with python bindings installed. Please refer to pymatgen's documentation for installation details.
Molecules
End of explanation
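# Illustrative extra step (not in the original notebook): the boxed structure built above
# can also be written out for a periodic-boundary code; the file names are placeholders.
structure.to(filename="methane_box.cif")         # format inferred from the extension
structure.to(fmt="poscar", filename="POSCAR")    # VASP-style output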
from pymatgen.io.babel import BabelMolAdaptor
import pybel as pb
a = BabelMolAdaptor(mol)
# Create a pybel.Molecule, which simplifies a lot of access
pm = pb.Molecule(a.openbabel_mol)
# Print canonical SMILES representation (unique and comparable).
print("Canonical SMILES = {}".format(pm.write("can")))
# Print Inchi representation
print("Inchi= {}".format(pm.write("inchi")))
# pb.outformats provides a listing of available formats.
# Let's do a write to the commonly used PDB file.
pm.write("pdb", filename="methane.pdb", overwrite=True)
# Generating ethylene carbonate (SMILES obtained from Wikipedia)
# And displaying the svg.
ec = pb.readstring("smi", "C1COC(=O)O1")
ec.make3D()
from IPython.core.display import SVG, display_svg
svg = SVG(ec.write("svg"))
display_svg(svg)
Explanation: Openbabel interface
This section demonstrates pymatgen's integration with openbabel.
End of explanation
print(mol.to(fmt="xyz"))
print(mol.to(fmt="g09"))
print(mol.to(fmt="pdb")) #Needs Openbabel.
mol.to(filename="methane.xyz")
mol.to(filename="methane.pdb") #Needs Openbabel.
print(Molecule.from_file("methane.pdb"))
Explanation: Input/Output
Pymatgen has built-in support for the XYZ and Gaussian, NWchem file formats. It also has support for most other file formats if you have openbabel with Python bindings installed.
End of explanation
from pymatgen.io.gaussian import GaussianInput
gau = GaussianInput(mol, charge=0, spin_multiplicity=1, title="methane",
functional="B3LYP", basis_set="6-31G(d)",
route_parameters={'Opt': "", "SCF": "Tight"},
link0_parameters={"%mem": "1000MW"})
print(gau)
# A standard relaxation + SCF energy nwchem calculation input file for methane.
from pymatgen.io.nwchem import NwTask, NwInput
tasks = [
NwTask.dft_task(mol, operation="optimize", xc="b3lyp",
basis_set="6-31G"),
NwTask.dft_task(mol, operation="freq", xc="b3lyp",
basis_set="6-31G"),
NwTask.dft_task(mol, operation="energy", xc="b3lyp",
basis_set="6-311G"),
]
nwi = NwInput(mol, tasks, geometry_options=["units", "angstroms"])
print(nwi)
Explanation: For more fine-grained control over output, you can use the underlying IO classes Gaussian and Nwchem, two commonly used computational chemistry programs.
End of explanation |
13,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9S
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
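# Illustrative only: the name and email below are placeholders, not real document authors.
DOC.set_author("Jane Doe", "jane.doe@example.org")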
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
13,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploration of a problem interpreting binary test results
Copyright 2015 Allen Downey
MIT License
Step1: p is the prevalence of a condition
s is the sensitivity of the test
The false positive rate is known to be either t1 (with probability q) or t2 (with probability 1-q)
Step2: I'll use a through h for each of the 8 possible conditions.
Step3: And here are the probabilities of the conditions.
Step4: pmf_t represents the distribution of t
Step5: I'll consider two sets of parameters, d1 and d2, which have the same mean value of t.
Step6: prob takes two numbers that represent odds in favor and returns the corresponding probability.
Step7: Scenario A
In the first scenario, there are two kinds of people in the world, or two kinds of tests, so there are four outcomes that yield positive tests
Step8: In this scenario, the two parameter sets yield the same answer
Step9: Scenario B
Now suppose instead of two kinds of people, or two kinds of tests, the distribution of t represents our uncertainty about t. That is, we are only considering one test, and we think the false positive rate is the same for everyone, but we don't know what it is.
In this scenario, we need to think about the sampling process that brings patients to see doctors. There are three possibilities
Step10: Scenario B2
If all patients see a doctor, the doctor can learn about t based on the number of positive and negative tests.
The likelihood of a positive test given t1 is (a+c)/q
The likelihood of a positive test given t2 is (e+g)/(1-q)
update takes a pmf and updates it with these likelihoods
Step11: post is what we should believe about p after seeing one patient with a positive test
Step12: When q is 0.5, the posterior mean is p
Step13: But other distributions of t yield different values.
Step14: Let's see what we get after seeing two patients
Step15: Positive tests are more likely under t2 than t1, so each positive test makes it more likely that t=t2. So the expected value of p converges on p2. | Python Code:
from __future__ import print_function, division
import thinkbayes2
from sympy import symbols
Explanation: Exploration of a problem interpreting binary test results
Copyright 2015 Allen Downey
MIT License
End of explanation
p, q, s, t1, t2 = symbols('p q s t1 t2')
Explanation: p is the prevalence of a condition
s is the sensitivity of the test
The false positive rate is known to be either t1 (with probability q) or t2 (with probability 1-q)
End of explanation
a, b, c, d, e, f, g, h = symbols('a b c d e f g h')
Explanation: I'll use a through h for each of the 8 possible conditions.
End of explanation
a = q * p * s
b = q * p * (1-s)
c = q * (1-p) * t1
d = q * (1-p) * (1-t1)
e = (1-q) * p * s
f = (1-q) * p * (1-s)
g = (1-q) * (1-p) * t2
h = (1-q) * (1-p) * (1-t2)
pmf1 = thinkbayes2.Pmf()
pmf1['sick'] = p*s
pmf1['notsick'] = (1-p)*t1
pmf1
nc1 = pmf1.Normalize()
nc1.simplify()
pmf2 = thinkbayes2.Pmf()
pmf2['sick'] = p*s
pmf2['notsick'] = (1-p)*t2
pmf2
nc2 = pmf2.Normalize()
nc2.simplify()
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t[t1] *= nc1
pmf_t[t2] *= nc2
pmf_t.Normalize()
pmf_t.Mean().simplify()
d1 = dict(q=0.5, p=0.1, s=0.5, t1=0.2, t2=0.8)
pmf_t.Mean().evalf(subs=d1)
d2 = dict(q=0.75, p=0.1, s=0.5, t1=0.4, t2=0.8)
pmf_t.Mean().evalf(subs=d2)
pmf_t[t1].evalf(subs=d2)
x = pmf_t[t1] * pmf1['sick'] + pmf_t[t2] * pmf2['sick']
x.simplify()
x.evalf(subs=d1)
x.evalf(subs=d2)
t = q * t1 + (1-q) * t2
pmf = thinkbayes2.Pmf()
pmf['sick'] = p*s
pmf['notsick'] = (1-p)*t
pmf
pmf.Normalize()
pmf['sick'].simplify()
pmf['sick'].evalf(subs=d1)
pmf['sick'].evalf(subs=d2)
gold = thinkbayes2.Pmf()
gold['0 sick t1'] = q * (1-p)**2 * t1**2
gold['1 sick t1'] = q * 2*p*(1-p) * s * t1
gold['2 sick t1'] = q * p**2 * s**2
gold['0 sick t2'] = (1-q) * (1-p)**2 * t2**2
gold['1 sick t2'] = (1-q) * 2*p*(1-p) * s * t2
gold['2 sick t2'] = (1-q) * p**2 * s**2
gold.Normalize()
p0 = gold['0 sick t1'] + gold['0 sick t2']
p0.evalf(subs=d1)
p0.evalf(subs=d2)
t = q * t1 + (1-q) * t2
collapsed = thinkbayes2.Pmf()
collapsed['0 sick'] = (1-p)**2 * t**2
collapsed['1 sick'] = 2*p*(1-p) * s * t
collapsed['2 sick'] = p**2 * s**2
collapsed.Normalize()
collapsed['0 sick'].evalf(subs=d1)
collapsed['0 sick'].evalf(subs=d2)
pmf1 = thinkbayes2.Pmf()
pmf1['0 sick'] = (1-p)**2 * t1**2
pmf1['1 sick'] = 2*p*(1-p) * s * t1
pmf1['2 sick'] = p**2 * s**2
nc1 = pmf1.Normalize()
pmf2 = thinkbayes2.Pmf()
pmf2['0 sick'] = (1-p)**2 * t2**2
pmf2['1 sick'] = 2*p*(1-p) * s * t2
pmf2['2 sick'] = p**2 * s**2
nc2 = pmf2.Normalize()
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t[t1] *= nc1
pmf_t[t2] *= nc2
pmf_t.Normalize()
x = pmf_t[t1] * pmf1['0 sick'] + pmf_t[t2] * pmf2['0 sick']
x.simplify()
x.evalf(subs=d1), p0.evalf(subs=d1)
x.evalf(subs=d2), p0.evalf(subs=d2)
Explanation: And here are the probabilities of the conditions.
End of explanation
pmf_t = thinkbayes2.Pmf({t1:q, t2:1-q})
pmf_t.Mean().simplify()
Explanation: pmf_t represents the distribution of t
End of explanation
d1 = dict(q=0.5, p=0.1, s=0.5, t1=0.2, t2=0.8)
pmf_t.Mean().evalf(subs=d1)
d2 = dict(q=0.75, p=0.1, s=0.5, t1=0.4, t2=0.8)
pmf_t.Mean().evalf(subs=d2)
Explanation: I'll consider two sets of parameters, d1 and d2, which have the same mean value of t.
End of explanation
def prob(yes, no):
return yes / (yes + no)
Explanation: prob takes two numbers that represent odds in favor and returns the corresponding probability.
End of explanation
res = prob(a+e, c+g)
res.simplify()
Explanation: Scenario A
In the first scenario, there are two kinds of people in the world, or two kinds of tests, so there are four outcomes that yield positive tests: two true positives (a and d) and two false positives (c and g).
We can compute the probability of a true positive given a positive test:
End of explanation
res.evalf(subs=d1)
res.evalf(subs=d2)
Explanation: In this scenario, the two parameter sets yield the same answer:
End of explanation
p1 = prob(a, c)
p1.simplify()
p1.evalf(subs=d1)
p2 = prob(e, g)
p2.simplify()
p2.evalf(subs=d1)
pmf_p = thinkbayes2.Pmf([p1, p2])
pmf_p.Mean().simplify()
pmf_p.Mean().evalf(subs=d1)
p1.evalf(subs=d2), p2.evalf(subs=d2), pmf_p.Mean().evalf(subs=d2)
Explanation: Scenario B
Now suppose instead of two kinds of people, or two kinds of tests, the distribution of t represents our uncertainty about t. That is, we are only considering one test, and we think the false positive rate is the same for everyone, but we don't know what it is.
In this scenario, we need to think about the sampling process that brings patients to see doctors. There are three possibilities:
B1. Only patients who test positive see a doctor.
B2. All patients see a doctor with equal probability, regardless of test results and regardless of whether they are sick or not.
B3. Patients are more or less likely to see a doctor, depending on the test results and whether they are sick or not.
Scenario B1
If patients only see a doctor after testing positive, the doctor doesn't learn anything about t just because a patient tests positive. In that case, the doctor should compute the conditional probabilities:
p1 is the probability the patient is sick given a positive test and t1
p2 is the probability the patient is sick given a positive test and t2
We can compute p1 and p2, form pmf_p, and compute its mean:
End of explanation
def update(pmf):
    post = pmf.Copy()
    post[p1] *= (a + c) / q
    post[p2] *= (e + g) / (1-q)
    post.Normalize()
    return post
Explanation: Scenario B2
If all patients see a doctor, the doctor can learn about t based on the number of positive and negative tests.
The likelihood of a positive test given t1 is (a+c)/q
The likelihood of a positive test given t2 is (e+g)/(1-q)
update takes a pmf and updates it with these likelihoods
End of explanation
post = update(pmf_p)
post[p1].simplify()
post.Mean().simplify()
Explanation: post is what we should believe about p after seeing one patient with a positive test:
End of explanation
post.Mean().evalf(subs=d1)
Explanation: When q is 0.5, the posterior mean is p:
End of explanation
post.Mean().evalf(subs=d2)
Explanation: But other distributions of t yield different values.
End of explanation
post2 = update(post)
post2.Mean().simplify()
Explanation: Let's see what we get after seeing two patients
End of explanation
post2.Mean().evalf(subs=d1)
post2.Mean().evalf(subs=d2)
post3 = update(post2)
post3.Mean().evalf(subs=d1)
post3.Mean().evalf(subs=d2)
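# Optional illustration (an addition, not from the original notebook): keep
# updating with further positive tests and watch the posterior mean move
# toward p2 under d2.
post_n = post3
for _ in range(3):
    post_n = update(post_n)
post_n.Mean().evalf(subs=d2), p2.evalf(subs=d2)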
Explanation: Positive tests are more likely under t2 than t1, so each positive test makes it more likely that t=t2. So the expected value of p converges on p2.
End of explanation |
13,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data retrieval
This notebook gives a few code examples for retrieving the data used by other notebooks. The actuariat_python module is implemented with Python 3. Python 2.7 users can simply copy the code of each function into the notebook (follow the links inserted in the notebook).
Step1: French population, January 2017
The data are available on the INSEE website, "Pyramide des âges au 1er janvier" (population pyramid on 1 January). They are provided in Excel format. The format is not the simplest and it is sometimes unreadable with pandas. The easiest workaround is to convert the file to text format with Excel.
Step2: Retrieving these data is implemented in the population_france_year function
Step3: According to this table, there are more people aged 110 than aged 109. This is because the last row aggregates everyone aged 110 and over.
Mortality table 2000-2002 (France)
We use a few shortcuts to avoid spending too much time on this. The data are provided in Excel format at the address
Step4: Fertility rate (France)
We proceed in the same way for this table with the fecondite_france function. Source
Step5: Extended mortality table 1960-2010
Mortality table from 1960 to 2010, retrieved with the table_mortalite_euro_stat function. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# the following code is not required; it automatically generates a menu
# in the notebook
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Data retrieval
This notebook gives a few code examples for retrieving the data used by other notebooks. The actuariat_python module is implemented with Python 3. Python 2.7 users can simply copy the code of each function into the notebook (follow the links inserted in the notebook).
End of explanation
url = "https://www.insee.fr/fr/statistiques/fichier/1892086/pop-totale-france.xls"
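# the local text export (made with Excel, as described below) is used instead
# of the original xls file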
url = "pop-totale-france.txt"
import pandas
df=pandas.read_csv(url, sep="\t", encoding="latin-1")
df.head(n=5)
df=pandas.read_csv(url, sep="\t", encoding="latin-1", skiprows=3)
df.head(n=5)
df.tail(n=5)
Explanation: French population, January 2017
The data are available on the INSEE website, "Pyramide des âges au 1er janvier" (population pyramid on 1 January). They are provided in Excel format. The format is not the simplest and it is sometimes unreadable with pandas. The easiest workaround is to convert the file to text format with Excel.
End of explanation
from actuariat_python.data import population_france_year
df = population_france_year()
df.head(n=3)
df.tail(n=3)
Explanation: Retrieving these data is implemented in the population_france_year function:
End of explanation
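# Optional check (an addition, not from the original notebook): shape and
# column names of the frame returned by population_france_year.
df.shape, list(df.columns)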
from actuariat_python.data import table_mortalite_france_00_02
df=table_mortalite_france_00_02()
df.head()
df.plot(x="Age",y=["Homme", "Femme"],xlim=[0,100])
Explanation: According to this table, there are more people aged 110 than aged 109. This is because the last row aggregates everyone aged 110 and over.
Mortality table 2000-2002 (France)
We use a few shortcuts to avoid spending too much time on this. The data are provided in Excel format at: http://www.institutdesactuaires.com/gene/main.php?base=314. The table_mortalite_france_00_02 function retrieves them.
End of explanation
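# Possible follow-up (a sketch, not from the original notebook): summary
# statistics of the two columns plotted above.
df[["Homme", "Femme"]].describe()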
from actuariat_python.data import fecondite_france
df=fecondite_france()
df.head()
df.plot(x="age", y=["2005","2015"])
Explanation: Fertility rate (France)
We proceed in the same way for this table with the fecondite_france function. Source: INSEE, "Fécondité selon l'âge détaillé de la mère" (fertility by detailed age of the mother).
End of explanation
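# Rough check (a sketch; it assumes the yearly columns hold age-specific
# fertility rates, so their sum over ages is proportional to the total
# fertility rate for that year).
df[["2005", "2015"]].sum()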
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import os
os.stat("mortalite.txt")
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
df[((df.age == "Y60") | (df.age == "Y61")) & (df.annee == 2000) & (df.pays == "FR") & (df.genre == "F")]
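# Possible follow-up (a sketch, not from the original notebook): keep a single
# series, e.g. French women aged 60, across all years; check df.columns for
# the name of the value column before plotting it.
df[(df.pays == "FR") & (df.genre == "F") & (df.age == "Y60")].head()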
Explanation: Extended mortality table 1960-2010
Mortality table from 1960 to 2010, retrieved with the table_mortalite_euro_stat function.
End of explanation |