Unnamed: 0 | text_prompt | code_prompt
---|---|---
11,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ISCpy
ISCpy is a robust ISC config file parser. It has virtually unlimited
possibilities for depth and quantity of ISC config files. ISC config files include BIND and DHCP config files, among a few others.
The example below shows how to parse a canonical Bind configuration as generated by running named-checkconf -p.
Step1: The config dictionary is keyed by the different sections of the Bind configuration.
Step2: The sections are dictionaries again. Note that lists are converted into dictionaries with values set to True. | Python Code:
import iscpy
with open('named.conf') as fp:
s = fp.read()
config = iscpy.ParseISCString(s)
type(config)
Explanation: ISCpy
ISCpy is a robust ISC config file parser. It has virtually unlimited
possibilities for depth and quantity of ISC config files. ISC config files include BIND and DHCP config files, among a few others.
The example below shows how to parse a canonical Bind configuration as generated by running named-checkconf -p.
End of explanation
config.keys()[0]
set([key.split()[0] for key in config.keys()]) # 'view' missing in this example
acls = {key: value for key,value in config.items() if key.startswith('acl')}
zones = {key: value for key,value in config.items() if key.startswith('zone')}
# etc.
acls.keys()
Explanation: The config dictionary is keyed by the different sections of the Bind configuration.
End of explanation
config['zone "86.168.192.in-addr.arpa"']
config['zone "16.172.in-addr.arpa"']
Explanation: The sections are dictionaries again. Note that lists are converted into dictionaries with values set to True.
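For illustration, a clause such as allow-transfer { 10.0.0.1; 10.0.0.2; }; would come back with the braced list turned into a dictionary of True values (hypothetical values shown below, not output from a real named.conf):
```python
example_value = {'10.0.0.1': True, '10.0.0.2': True}
sorted(example_value.keys())
```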
End of explanation |
11,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Automatic Differentiation in JAX
Authors
Step1: The higher-order derivatives of $f$ are
Step2: Evaluating the above in $x=1$ would give us
Step3: In the multivariable case, higher-order derivatives are more complicated. The second-order derivative of a function is represented by its Hessian matrix, defined according to
$$(\mathbf{H}f)_{i,j} = \frac{\partial^2 f}{\partial_i\partial_j}.$$
The Hessian of a real-valued function of several variables, $f
Step4: Let's double check this is correct on the dot-product $f
Step5: Often, however, we aren't interested in computing the full Hessian itself, and doing so can be very inefficient. The Autodiff Cookbook explains some tricks, like the Hessian-vector product, that allow you to use it without materialising the whole matrix.
If you plan to work with higher-order derivatives in JAX, we strongly recommend reading the Autodiff Cookbook.
Higher order optimization
Some meta-learning techniques, such as Model-Agnostic Meta-Learning (MAML), require differentiating through gradient updates. In other frameworks this can be quite cumbersome, but in JAX it's much easier
Step7: Consider a transition from a state $s_{t-1}$ to a state $s_t$ during which we observed the reward $r_t$
Step8: The TD(0) update to the network parameters is
Step9: But td_update will not compute a TD(0) update, because the gradient computation will include the dependency of target on $\theta$.
We can use jax.lax.stop_gradient to force JAX to ignore the dependency of the target on $\theta$
Step10: This will treat target as if it did not depend on the parameters $\theta$ and compute the correct update to the parameters.
The jax.lax.stop_gradient may also be useful in other settings, for instance if you want the gradient from some loss to only affect a subset of the parameters of the neural network (because, for instance, the other parameters are trained using a different loss).
Straight-through estimator using stop_gradient
The straight-through estimator is a trick for defining a 'gradient' of a function that is otherwise non-differentiable. Given a non-differentiable function $f
Step11: Per-example gradients
While most ML systems compute gradients and updates from batches of data, for reasons of computational efficiency and/or variance reduction, it is sometimes necessary to have access to the gradient/update associated with each specific sample in the batch.
For instance, this is needed to prioritise data based on gradient magnitude, or to apply clipping / normalisations on a sample by sample basis.
In many frameworks (PyTorch, TF, Theano) it is often not trivial to compute per-example gradients, because the library directly accumulates the gradient over the batch. Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients are typically very inefficient.
In JAX we can define the code to compute the gradient per-sample in an easy but efficient way.
Just combine the jit, vmap and grad transformations together
Step12: Let's walk through this one transformation at a time.
First, we apply jax.grad to td_loss to obtain a function that computes the gradient of the loss w.r.t. the parameters on single (unbatched) inputs
Step13: This function computes one row of the array above.
Then, we vectorise this function using jax.vmap. This adds a batch dimension to all inputs and outputs. Now, given a batch of inputs, we produce a batch of outputs -- each output in the batch corresponds to the gradient for the corresponding member of the input batch.
Step14: This isn't quite what we want, because we have to manually feed this function a batch of thetas, whereas we actually want to use a single theta. We fix this by adding in_axes to the jax.vmap, specifying theta as None, and the other args as 0. This makes the resulting function add an extra axis only to the other arguments, leaving theta unbatched, as we want
Step15: Almost there! This does what we want, but is slower than it has to be. Now, we wrap the whole thing in a jax.jit to get the compiled, efficient version of the same function | Python Code:
import jax
f = lambda x: x**3 + 2*x**2 - 3*x + 1
dfdx = jax.grad(f)
Explanation: Advanced Automatic Differentiation in JAX
Authors: Vladimir Mikulik & Matteo Hessel
Computing gradients is a critical part of modern machine learning methods. This section considers a few advanced topics in the areas of automatic differentiation as it relates to modern machine learning.
While understanding how automatic differentiation works under the hood isn't crucial for using JAX in most contexts, we encourage the reader to check out this quite accessible video to get a deeper sense of what's going on.
The Autodiff Cookbook is a more advanced and more detailed explanation of how these ideas are implemented in the JAX backend. It's not necessary to understand this to do most things in JAX. However, some features (like defining custom derivatives) depend on understanding this, so it's worth knowing this explanation exists if you ever need to use them.
Higher-order derivatives
JAX's autodiff makes it easy to compute higher-order derivatives, because the functions that compute derivatives are themselves differentiable. Thus, higher-order derivatives are as easy as stacking transformations.
We illustrate this in the single-variable case:
The derivative of $f(x) = x^3 + 2x^2 - 3x + 1$ can be computed as:
End of explanation
d2fdx = jax.grad(dfdx)
d3fdx = jax.grad(d2fdx)
d4fdx = jax.grad(d3fdx)
Explanation: The higher-order derivatives of $f$ are:
$$
\begin{array}{l}
f'(x) = 3x^2 + 4x - 3\\
f''(x) = 6x + 4\\
f'''(x) = 6\\
f^{iv}(x) = 0
\end{array}
$$
Computing any of these in JAX is as easy as chaining the grad function:
End of explanation
print(dfdx(1.))
print(d2fdx(1.))
print(d3fdx(1.))
print(d4fdx(1.))
Explanation: Evaluating the above in $x=1$ would give us:
$$
\begin{array}{l}
f'(1) = 4\\
f''(1) = 10\\
f'''(1) = 6\\
f^{iv}(1) = 0
\end{array}
$$
Using JAX:
End of explanation
def hessian(f):
return jax.jacfwd(jax.grad(f))
Explanation: In the multivariable case, higher-order derivatives are more complicated. The second-order derivative of a function is represented by its Hessian matrix, defined according to
$$(\mathbf{H}f)_{i,j} = \frac{\partial^2 f}{\partial_i\partial_j}.$$
The Hessian of a real-valued function of several variables, $f: \mathbb R^n\to\mathbb R$, can be identified with the Jacobian of its gradient. JAX provides two transformations for computing the Jacobian of a function, jax.jacfwd and jax.jacrev, corresponding to forward- and reverse-mode autodiff. They give the same answer, but one can be more efficient than the other in different circumstances – see the video about autodiff linked above for an explanation.
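For illustration, here is a minimal sketch of the reverse-over-reverse alternative (using the jax imported in this notebook); it produces the same matrix, and which variant is faster depends on the shapes involved:
```python
def hessian_rev(f):
    # Jacobian of the gradient via reverse-mode; same result as jax.jacfwd(jax.grad(f))
    return jax.jacrev(jax.grad(f))
```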
End of explanation
import jax.numpy as jnp
def f(x):
return jnp.dot(x, x)
hessian(f)(jnp.array([1., 2., 3.]))
Explanation: Let's double check this is correct on the dot-product $f: \mathbf{x} \mapsto \mathbf{x} ^\top \mathbf{x}$.
if $i=j$, $\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 2$. Otherwise, $\frac{\partial^2 f}{\partial_i\partial_j}(\mathbf{x}) = 0$.
End of explanation
# Value function and initial parameters
value_fn = lambda theta, state: jnp.dot(theta, state)
theta = jnp.array([0.1, -0.1, 0.])
Explanation: Often, however, we aren't interested in computing the full Hessian itself, and doing so can be very inefficient. The Autodiff Cookbook explains some tricks, like the Hessian-vector product, that allow you to use it without materialising the whole matrix.
If you plan to work with higher-order derivatives in JAX, we strongly recommend reading the Autodiff Cookbook.
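As a taste of the kind of trick described there, here is a minimal Hessian-vector product sketch (our own illustration, not a quote from the Cookbook); it differentiates through jax.grad and never materialises the full Hessian:
```python
def hvp(f, x, v):
    # push the tangent vector v through grad(f): this evaluates H(x) @ v directly
    return jax.jvp(jax.grad(f), (x,), (v,))[1]
```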
Higher order optimization
Some meta-learning techniques, such as Model-Agnostic Meta-Learning (MAML), require differentiating through gradient updates. In other frameworks this can be quite cumbersome, but in JAX it's much easier:
```python
def meta_loss_fn(params, data):
    """Computes the loss after one step of SGD."""
    grads = jax.grad(loss_fn)(params, data)
    return loss_fn(params - lr * grads, data)

meta_grads = jax.grad(meta_loss_fn)(params, data)
```
Stopping gradients
Auto-diff enables automatic computation of the gradient of a function with respect to its inputs. Sometimes, however, we might want some additional control: for instance, we might want to avoid back-propagating gradients through some subset of the computational graph.
Consider for instance the TD(0) (temporal difference) reinforcement learning update. This is used to learn to estimate the value of a state in an environment from experience of interacting with the environment. Let's assume the value estimate $v_{\theta}(s_{t-1})$ in a state $s_{t-1}$ is parameterised by a linear function.
End of explanation
# An example transition.
s_tm1 = jnp.array([1., 2., -1.])
r_t = jnp.array(1.)
s_t = jnp.array([2., 1., 0.])
Explanation: Consider a transition from a state $s_{t-1}$ to a state $s_t$ during which we observed the reward $r_t$
End of explanation
def td_loss(theta, s_tm1, r_t, s_t):
v_tm1 = value_fn(theta, s_tm1)
target = r_t + value_fn(theta, s_t)
return (target - v_tm1) ** 2
td_update = jax.grad(td_loss)
delta_theta = td_update(theta, s_tm1, r_t, s_t)
delta_theta
Explanation: The TD(0) update to the network parameters is:
$$
\Delta \theta = (r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})) \nabla v_{\theta}(s_{t-1})
$$
This update is not the gradient of any loss function.
However, it can be written as the gradient of the pseudo loss function
$$
L(\theta) = [r_t + v_{\theta}(s_t) - v_{\theta}(s_{t-1})]^2
$$
if the dependency of the target $r_t + v_{\theta}(s_t)$ on the parameter $\theta$ is ignored.
How can we implement this in JAX? If we write the pseudo loss naively we get:
End of explanation
def td_loss(theta, s_tm1, r_t, s_t):
v_tm1 = value_fn(theta, s_tm1)
target = r_t + value_fn(theta, s_t)
return (jax.lax.stop_gradient(target) - v_tm1) ** 2
td_update = jax.grad(td_loss)
delta_theta = td_update(theta, s_tm1, r_t, s_t)
delta_theta
Explanation: But td_update will not compute a TD(0) update, because the gradient computation will include the dependency of target on $\theta$.
We can use jax.lax.stop_gradient to force JAX to ignore the dependency of the target on $\theta$:
End of explanation
def f(x):
return jnp.round(x) # non-differentiable
def straight_through_f(x):
# Create an exactly-zero expression with Sterbenz lemma that has
# an exactly-one gradient.
zero = x - jax.lax.stop_gradient(x)
return zero + jax.lax.stop_gradient(f(x))
print("f(x): ", f(3.2))
print("straight_through_f(x):", straight_through_f(3.2))
print("grad(f)(x):", jax.grad(f)(3.2))
print("grad(straight_through_f)(x):", jax.grad(straight_through_f)(3.2))
Explanation: This will treat target as if it did not depend on the parameters $\theta$ and compute the correct update to the parameters.
The jax.lax.stop_gradient may also be useful in other settings, for instance if you want the gradient from some loss to only affect a subset of the parameters of the neural network (because, for instance, the other parameters are trained using a different loss).
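A minimal sketch of that pattern (illustrative names only, reusing the jax and jnp imported in this notebook): the loss below only sends gradient to w, while b is treated as a constant.
```python
def partial_loss(w, b, x):
    # b is wrapped in stop_gradient, so this loss contributes no gradient to it
    return jnp.sum((w * x + jax.lax.stop_gradient(b)) ** 2)

grad_w, grad_b = jax.grad(partial_loss, argnums=(0, 1))(2.0, 1.0, jnp.arange(3.))
# grad_b comes back as zero; grad_w is the usual gradient
```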
Straight-through estimator using stop_gradient
The straight-through estimator is a trick for defining a 'gradient' of a function that is otherwise non-differentiable. Given a non-differentiable function $f : \mathbb{R}^n \to \mathbb{R}^n$ that is used as part of a larger function that we wish to find a gradient of, we simply pretend during the backward pass that $f$ is the identity function. This can be implemented neatly using jax.lax.stop_gradient:
End of explanation
perex_grads = jax.jit(jax.vmap(jax.grad(td_loss), in_axes=(None, 0, 0, 0)))
# Test it:
batched_s_tm1 = jnp.stack([s_tm1, s_tm1])
batched_r_t = jnp.stack([r_t, r_t])
batched_s_t = jnp.stack([s_t, s_t])
perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
Explanation: Per-example gradients
While most ML systems compute gradients and updates from batches of data, for reasons of computational efficiency and/or variance reduction, it is sometimes necessary to have access to the gradient/update associated with each specific sample in the batch.
For instance, this is needed to prioritise data based on gradient magnitude, or to apply clipping / normalisations on a sample by sample basis.
In many frameworks (PyTorch, TF, Theano) it is often not trivial to compute per-example gradients, because the library directly accumulates the gradient over the batch. Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients are typically very inefficient.
In JAX we can define the code to compute the gradient per-sample in an easy but efficient way.
Just combine the jit, vmap and grad transformations together:
End of explanation
dtdloss_dtheta = jax.grad(td_loss)
dtdloss_dtheta(theta, s_tm1, r_t, s_t)
Explanation: Let's walk through this one transformation at a time.
First, we apply jax.grad to td_loss to obtain a function that computes the gradient of the loss w.r.t. the parameters on single (unbatched) inputs:
End of explanation
almost_perex_grads = jax.vmap(dtdloss_dtheta)
batched_theta = jnp.stack([theta, theta])
almost_perex_grads(batched_theta, batched_s_tm1, batched_r_t, batched_s_t)
Explanation: This function computes one row of the array above.
Then, we vectorise this function using jax.vmap. This adds a batch dimension to all inputs and outputs. Now, given a batch of inputs, we produce a batch of outputs -- each output in the batch corresponds to the gradient for the corresponding member of the input batch.
End of explanation
inefficient_perex_grads = jax.vmap(dtdloss_dtheta, in_axes=(None, 0, 0, 0))
inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
Explanation: This isn't quite what we want, because we have to manually feed this function a batch of thetas, whereas we actually want to use a single theta. We fix this by adding in_axes to the jax.vmap, specifying theta as None, and the other args as 0. This makes the resulting function add an extra axis only to the other arguments, leaving theta unbatched, as we want:
End of explanation
perex_grads = jax.jit(inefficient_perex_grads)
perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t)
%timeit inefficient_perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready()
%timeit perex_grads(theta, batched_s_tm1, batched_r_t, batched_s_t).block_until_ready()
Explanation: Almost there! This does what we want, but is slower than it has to be. Now, we wrap the whole thing in a jax.jit to get the compiled, efficient version of the same function:
End of explanation |
11,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The curse of hunting rare things
What are the chances of intersecting features with a grid of cross-sections?
I'd like to know the probability of intersecting features with a grid of cross-sections. These sections or transects might be 2D seismic lines, or outcrops.
This notebook goes with the Agile blog post — The curse of hunting rare things — from May 2015.
Linear sampling theory
I got this approach from some lab notes by Dave Oleyar for an ecology class at the University of Idaho. He in turn refers to the following publication
Step1: Just for fun, we can — using the original formula in the reference — calculate the population size from an observation
Step2: It's a linear relationship, let's plot it.
Step3: <hr />
Geometric reasoning
If we think of a 2D line, we can reason that if a feature lies more than its radius from the line then it is not intersected. Here's the situation for a grid
Step4: Orthogonal grid of 2D lines
Step5: We're going to need a binomial distribution, scipy.stats.binom.
Step6: We can use the distribution to estimate the probability of seeing no features. Then we can use the survival function (or, equivalently, 1 - the cumulative distribution function), sf(x, n, p), to tell us the probability of drawing more than x in n trials, given a success probability p
Step7: <hr />
Interpretation accuracy
We can apply Bayes' theorem to update the prior probability (above) with the reliability of our interpretation (due to lack of resolution, data quality, or skill).
Step8: We can use a pandas DataFrame to show a quick table
Step9: We can compute the probability of a given feature being correctly interpreted | Python Code:
area = 120000.0 # km^2, area covered by transects
population = 120 # Total number of features (guess)
no_lines = 250 # Total number of transects
line_length = 150 # km, mean length of a transect
feature_width = 0.5 # km, width of features
density = population / area
length = no_lines * line_length
observed = 2 * density * length * feature_width
print "Expected number of features intersected:", observed
Explanation: The curse of hunting rare things
What are the chances of intersecting features with a grid of cross-sections?
I'd like to know the probability of intersecting features with a grid of cross-sections. These sections or transects might be 2D seismic lines, or outcrops.
This notebook goes with the Agile blog post — The curse of hunting rare things — from May 2015.
Linear sampling theory
I got this approach from some lab notes by Dave Oleyar for an ecology class at the University of Idaho. He in turn refers to the following publication:
Buckland, S. T., Anderson, D. R., Burnham, K. P., and Laake, J. L. 1993. Distance sampling: estimating abundance of biological populations. Chapman and Hall, London.
The equation in the notes expresses density in terms of observations, but I was interested in the inverse problem: expected intersections given some population. Of course, we don't know the population, but we can tune our intuition with some modeling.
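Written out, the relationship implemented in the cell above (and inverted in the next one) is

$$\hat{n} = 2\,D\,L\,w, \qquad\text{so}\qquad \hat{D} = \frac{n}{2\,L\,w},$$

where $D$ is the areal density of features (population / area), $L$ is the total transect length, and $w$ is the feature width.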
End of explanation
observed = 37.5
population = (observed * area) / (2. * length * feature_width)
print "Population:", population
Explanation: Just for fun, we can — using the original formula in the reference — calculate the population size from an observation:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make that last expression into a quick function
def pop(obs, area, length, width):
return (obs * area) / (2. * length * width)
# Pass in an array of values
obs = np.arange(50)
pop = pop(obs, area, length, feature_width)
plt.plot(obs, pop)
plt.xlabel('observed')
plt.ylabel('population')
plt.show()
Explanation: It's a linear relationship, let's plot it.
End of explanation
line_spacing = 3.0 # km, the width of the gap
# 'Invisible' means 'not intersected'
width_invisible = line_spacing - feature_width
prob_invisible = width_invisible / line_spacing
prob_visible = 1 - prob_invisible
print "Probability of intersecting a given feature:", prob_visible
Explanation: <hr />
Geometric reasoning
If we think of a 2D line, we can reason that if a feature lies more than its radius from the line then it is not intersected. Here's the situation for a grid:
<img src="https://dl.dropboxusercontent.com/u/14965965/2D-grid.png">
If there's a set of lines, then the problem is symmetric across the gaps between lines. The width of the 'invisible strip' (grey in the figure, which shows a grid rather than a swath) is the size of the gap minus the width of the feature. If we divide the width of the invisible strip by the width of the gap, we get a probability of randomly distributed features falling into the invisible strip.
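In symbols, for parallel lines with spacing $s$ and feature width $w$ (the quantity computed in the cell above):

$$P(\text{intersected}) = 1 - \frac{s - w}{s} = \frac{w}{s}.$$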
Parallel 2D lines
End of explanation
x_spacing = 3.0 # km
y_spacing = 3.0 # km
# Think of the quadrilaterals between lines as 'units'
area_of_unit = x_spacing * y_spacing
area_invisible = (x_spacing - feature_width) * (y_spacing - feature_width)
area_visible = area_of_unit - area_invisible
prob_visible = area_visible / area_of_unit
print "Probability of intersecting a given feature:", prob_visible
Explanation: Orthogonal grid of 2D lines
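The same reasoning in symbols, for grid spacings $s_x$ and $s_y$ and feature width $w$:

$$P(\text{intersected}) = 1 - \frac{(s_x - w)(s_y - w)}{s_x\, s_y}.$$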
End of explanation
import scipy.stats
Explanation: We're going to need a binomial distribution, scipy.stats.binom.
End of explanation
p = "Probability of intersecting"
print p, "no features:", scipy.stats.binom.pmf(0, population, prob_visible)
print p, "at least one:", scipy.stats.binom.sf(0, population, prob_visible)
print p, "at least two:", scipy.stats.binom.sf(1, population, prob_visible)
print p, "all features:", scipy.stats.binom.sf(population-1, population, prob_visible)
Explanation: We can use the distribution to estimate the probability of seeing no features. Then we can use the survival function (or, equivalently, 1 - the cumulative distribution function), sf(x, n, p), to tell us the probability of drawing more than x in n trials, given a success probability p:
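For a population of $n$ features, each intersected independently with probability $p$, the survival function evaluates

$$P(X > x) = 1 - \sum_{k=0}^{x} \binom{n}{k} p^{k} (1-p)^{n-k},$$

which is exactly what scipy.stats.binom.sf(x, n, p) returns.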
End of explanation
reliability = 0.75
trials = 120
intersect_interpret = prob_visible * reliability * trials
intersect_xinterpret = prob_visible * (1 - reliability) * trials
xintersect_interpret = (1 - prob_visible) * (1 - reliability) * trials
xintersect_xinterpret = (1 - prob_visible) * reliability * trials
t = [[intersect_interpret, intersect_xinterpret], [xintersect_interpret, xintersect_xinterpret]]
Explanation: <hr />
Interpretation accuracy
We can apply Bayes' theorem to update the prior probability (above) with the reliability of our interpretation (due to lack of resolution, data quality, or skill).
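Spelled out, with prior $P(\text{intersected})$ from the geometry above and interpretation reliability $r$, the update evaluated below (prob_correct) is

$$P(\text{intersected}\mid\text{interpreted}) = \frac{r\,P(\text{intersected})}{r\,P(\text{intersected}) + (1-r)\,\big(1-P(\text{intersected})\big)}.$$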
End of explanation
from pandas import DataFrame
df = DataFrame(t, index=['Intersected', 'Not intersected'], columns=['Interpreted','Not interpreted'])
df
Explanation: We can use a pandas DataFrame to show a quick table:
End of explanation
prob_correct = intersect_interpret / (intersect_interpret + xintersect_interpret)
print "Probability of a feature existing if interpreted:", prob_correct
Explanation: We can compute the probability of a given feature being correctly interpreted:
End of explanation |
11,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple interactive background jobs with IPython
We start by loading the backgroundjobs library and defining a few trivial functions to illustrate things with.
Step1: Now, we can create a job manager (called simply jobs) and use it to submit new jobs.
Run the cell below, it will show when the jobs start. Wait a few seconds until you see the 'all done' completion message
Step2: You can check the status of your jobs at any time
Step3: For any completed job, you can get its result easily
Step4: Errors and tracebacks
The jobs manager tries to help you with debugging
Step5: You can get the traceback of any dead job. Run the line
below again interactively until it prints a traceback (check the status
of the job)
Step6: This will print all tracebacks for all dead jobs
Step7: The job manager can be flushed of all completed jobs at any time
Step8: After that, the status is simply empty
Step9: Jobs have a .join method that lets you wait on their thread for completion | Python Code:
from IPython.lib import backgroundjobs as bg
import sys
import time
def sleepfunc(interval=2, *a, **kw):
args = dict(interval=interval,
args=a,
kwargs=kw)
time.sleep(interval)
return args
def diefunc(interval=2, *a, **kw):
time.sleep(interval)
raise Exception("Dead job with interval %s" % interval)
def printfunc(interval=1, reps=5):
for n in range(reps):
time.sleep(interval)
print('In the background... %i' % n)
sys.stdout.flush()
print('All done!')
sys.stdout.flush()
Explanation: Simple interactive background jobs with IPython
We start by loading the backgroundjobs library and defining a few trivial functions to illustrate things with.
End of explanation
jobs = bg.BackgroundJobManager()
# Start a few jobs, the first one will have ID # 0
jobs.new(sleepfunc, 4)
jobs.new(sleepfunc, kw={'reps':2})
jobs.new('printfunc(1,3)')
Explanation: Now, we can create a job manager (called simply jobs) and use it to submit new jobs.
Run the cell below, it will show when the jobs start. Wait a few seconds until you see the 'all done' completion message:
End of explanation
jobs.status()
Explanation: You can check the status of your jobs at any time:
End of explanation
jobs[0].result
Explanation: For any completed job, you can get its result easily:
End of explanation
# This makes a couple of jobs which will die. Let's keep a reference to
# them for easier traceback reporting later
diejob1 = jobs.new(diefunc, 1)
diejob2 = jobs.new(diefunc, 2)
Explanation: Errors and tracebacks
The jobs manager tries to help you with debugging:
End of explanation
print("Status of diejob1: %s" % diejob1.status)
diejob1.traceback() # jobs.traceback(4) would also work here, with the job number
Explanation: You can get the traceback of any dead job. Run the line
below again interactively until it prints a traceback (check the status
of the job):
End of explanation
jobs.traceback()
Explanation: This will print all tracebacks for all dead jobs:
End of explanation
jobs.flush()
Explanation: The job manager can be flushed of all completed jobs at any time:
End of explanation
jobs.status()
Explanation: After that, the status is simply empty:
End of explanation
j = jobs.new(sleepfunc, 2)
j.join?
Explanation: Jobs have a .join method that lets you wait on their thread for completion:
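For instance (a short illustrative sketch reusing the objects above), after joining, the result is guaranteed to be available:
```python
j2 = jobs.new(sleepfunc, 2)
j2.join()         # block until the background thread finishes
print(j2.result)  # safe to read once join() returns
```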
End of explanation |
11,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Presented below are two plots in a subplot configuration. The upper plot is a sum of the cusp crossings. The lower plot shows the cusp latitude and the spacecraft latitude/longitude plotted against time. Also included is the thought process that led to this result.
I used the Tsyganenko model of the cusp position. This model describes the cusp position in the noon-midnight meridian using a cylindrical coordinate system. The $r$ and $\theta$ components lie in the xz plane; the neutral axis is therefore the $y$ axis.
Tsyganenko provides the following equation for a simplified cusp model
\begin{equation}
\phi_{c} = arcsin(\frac{\sqrt{\rho}}{\sqrt{\rho + \sin^{-2}(\phi_{1}) -1}}) + \psi(t)
\end{equation}
Here, $\rho$ is the radial distance of the satellite from the center of the Earth, $\phi_{c}$ is the colatitude of the cusp position, and $\psi$ is the dipole tilt angle. $\phi_{1}$ is given by the equation $\phi_{c0} - (\alpha_{1} \psi + \alpha_{2}\psi^{2})$. As in (Tsyganenko), the values of $\phi_{c0}$, $\alpha_{1}$, and $\alpha_{2}$ are 0.24, 0.1287, and 0.0314 respectively. ((Note that that last sentence is almost a literal copy-paste of the Tsyganenko paper.)) Note that $\psi(t)$ is my own addition and that the original notation is $\psi$; I want to make it extremely clear that $\psi$ varies with time, requiring that the tsyganenko library be a function of time as well.
The dataset used here depicts the path of a satellite on a single day of its orbit 01-Jan-2019, at a $65^{\circ}$ inclined orbit.
Step1: Note that below, there are many ways that we could set up the subplot, but I just chose (programmatically) to do it a certain way provided in http
Step2: The next thing to do is compute the cusp latitude and longitude using tsyganenko. The cusp angular position is a function of radial distance from the earth as given in the Tsyganenko equation. This means that different "tracks" will be given as we vary $R_{e}$. Since I am using a very circular orbit, with $e \sim 0.02$, I will plot the cusp latitude and longitude below as a function of the spacecraft altitude in my GMAT file. The semi major axis is given as $7191 km$, $\sim 1.127 R_{e}$.
Step3: So I suppose this does show that, for a region of the stationary cusp, the satellite potentially crosses it. However, the cusp location moves throughout the day, so it's conceivable that the cusp avoids the orbit of the satellite, and I need a way to test for that. The other big question is how to integrate dipole tilt into this at the same time. The first question I need to answer is whether or not the dipole tilt matters in SM.
One other approach I could take is to come up with the cusp position as $f(\rho,t)$, then compare that latitude and longitude to the spacecraft latitude and longitude. That's really what I need to do.
Step by Step what I will attempt to do
Step4: The biggest issue with this plot is that we aren't able to see where the cusp is at a given time; it's just a ground track. Therefore, we also need to put in the plots of lat/lon vs. time.
Close to solving this. I think my cusp actually behaves somewhat correctly. The above plot was actually really promising, but for some reason my cusp latitude doesn't seem to change. Now I need to use the Tsyganenko equation as a function of the satellite's location. It really doesn't make much sense to me why the cusp wouldn't seem to change in latitude. My guess is that there are tiny circles there if you look closely at the data.
I'll deal with what I suspect are inaccuracies later; I feel like I've done okay thus far. Still, I "feel" like the equations are wrong.
Open Questions at this point | Python Code:
import tsyganenko as tsyg
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from spacepy import coordinates as coord
import spacepy.time as spt
from spacepy.time import Ticktock
import datetime as dt
from mpl_toolkits.mplot3d import Axes3D
import sys
#adding the year data here so I don't have to crush my github repo
pathname = '../../data-se3-path-planner/yearData/cuspCrossings2019/'
sys.path.append(pathname)
# find phi for a stationary cusp
r = np.linspace(0,10, 1000)
phi_c = tsyg.getPhi_c(r)
plt.plot(r,phi_c)
plt.ylabel('phi_c (radians)')
plt.xlabel('earth radii ($R_{e}$)')
plt.title('Tsyganenko Cusp Position')
plt.show()
Explanation: Presented below are two plots in a subplot configuration. The upper plot is a sum of the cusp crossings. The lower plot shows the cusp latitude and the spacecraft latitude/longitude plotted against time. Also included is the thought process that led to this result.
I used the Tsyganenko model of the cusp position. This model describes the cusp position in the noon-midnight meridian using a cylindrical coordinate system. The $r$ and $\theta$ components lie in the xz plane; the neutral axis is therefore the $y$ axis.
Tsyganenko provides the following equation for a simplified cusp model
\begin{equation}
\phi_{c} = arcsin(\frac{\sqrt{\rho}}{\sqrt{\rho + \sin^{-2}(\phi_{1}) -1}}) + \psi(t)
\end{equation}
Here, $\rho$ is the radial distance of the satellite from the center of the Earth, $\phi_{c}$ is the colatitude of the cusp position, and $\psi$ is the dipole tilt angle. $\phi_{1}$ is given by the equation $\phi_{c0} - (\alpha_{1} \psi + \alpha_{2}\psi^{2})$. As in (Tsyganenko), the values of $\phi_{c0}$, $\alpha_{1}$, and $\alpha_{2}$ are 0.24, 0.1287, and 0.0314 respectively. ((Note that that last sentence is almost a literal copy-paste of the Tsyganenko paper.)) Note that $\psi(t)$ is my own addition and that the original notation is $\psi$; I want to make it extremely clear that $\psi$ varies with time, requiring that the tsyganenko library be a function of time as well.
The dataset used here depicts the path of a satellite on a single day of its orbit 01-Jan-2019, at a $65^{\circ}$ inclined orbit.
End of explanation
# the orbit is fairly easy to get
df = pd.read_csv('singleday.csv')
# df = pd.read_csv('5day.csv')
t = df['DefaultSC.A1ModJulian'] + 29999.5
x = df['DefaultSC.gse.X']
y = df['DefaultSC.gse.Y']
z = df['DefaultSC.gse.Z']
# set the "ticks"
cvals = coord.Coords([[i,j,k] for i,j,k in zip(x,y,z)], 'GSE', 'car')
cvals.ticks = Ticktock(t,'MJD')
# originally SM
sm = cvals.convert('SM', 'sph')
# t = np.asarray(t)
# plotting stuff
f, axarr = plt.subplots(2, sharex=True)
# axarr[0].plot(t, y)
axarr[0].set_title('Spacecraft Latitude')
axarr[0].set_ylabel('latitude (sm deg)')
axarr[0].plot(sm.ticks.MJD, sm.lati)
axarr[1].set_title('Spacecraft Longitude')
axarr[1].set_ylabel('longitude (sm deg)')
axarr[1].plot(sm.ticks.MJD, sm.long)
plt.xlabel('time (unit)')
plt.show()
Explanation: Note that below, there are many ways that we could set up the subplot, but I just chose (programmatically) to do it a certain way provided in http://matplotlib.org/examples/pylab_examples/subplots_demo.html
Note: The units for this system are solar magnetic degrees. Some plots I did gave me that kind of bias to think that SM would be the best coordinate system for this. (That's probably not true, but there are many ways to solve this problem).
End of explanation
rs = 1.127 # average of the radius of the orbit
# rs = np.array([1,2,3])
phic = tsyg.getPhi_c(rs)
x,y,z= tsyg.tsygCyl2Car(phic,rs)
# print("x,y,z",xt,yt,zt)
print("x",x)
print("y",y)
print("z",z)
# next lets put these three coordinates into a spacepy coordinates object
singletrack = coord.Coords([[x,y,z]]*len(t), 'SM', 'car')
singletrack.ticks = Ticktock(t,'MJD')
singletrack = singletrack.convert('GEI','sph')
f, axarr2 = plt.subplots(2, sharex=True)
# axarr[0].plot(t, y)
axarr2[0].set_title('Cusp Longitude and Latitude')
axarr2[0].set_ylabel('latitude (sm deg)')
axarr2[0].scatter(singletrack.ticks.MJD, singletrack.lati)
axarr2[1].set_title('Cusp and Spacecraft Latitude')
axarr2[1].set_ylabel('longitude (sm deg)')
axarr2[1].scatter(singletrack.ticks.MJD, singletrack.long)
plt.xlabel('time (unit)')
f,axarr3 = plt.subplots(1)
axarr3.set_title('Lat vs Lon')
axarr3.set_ylabel('latitude')
axarr3.set_xlabel('longitude')
axarr3.plot(singletrack.long,singletrack.lati, label='r = 1.127 Re')
axarr3.scatter(sm.long,sm.lati,label='spacecraft orbit actual')
axarr3.legend(loc='lower left')
plt.show()
Explanation: The next thing to do is compute the cusp latitude and longitude using tsyganenko. The cusp angular position is a function of radial distance from the earth as given in the Tsyganenko equation. This means that different "tracks" will be given as we vary $R_{e}$. Since I am using a very circular orbit, with $e \sim 0.02$, I will plot the cusp latitude and longitude below as a function of the spacecraft altitude in my GMAT file. The semi major axis is given as $7191 km$, $\sim 1.127 R_{e}$.
End of explanation
# the orbit is fairly easy to get
#df = pd.read_csv(pathname+'zero.csv')
df = pd.read_csv('5day.csv')
t = df['DefaultSC.A1ModJulian'] + 29999.5
x = df['DefaultSC.gse.X']
y = df['DefaultSC.gse.Y']
z = df['DefaultSC.gse.Z']
# set the "ticks"
cvals = coord.Coords([[i,j,k] for i,j,k in zip(x,y,z)], 'GSM', 'car')
cvals.ticks = Ticktock(t,'MJD')
sm = cvals.convert('SM','sph')
gsm = cvals.convert('GEI', 'sph')
# t = np.asarray(t)
# plotting stuff
f, axarr = plt.subplots(2, sharex=True)
# axarr[0].plot(t, y)
axarr[0].set_title('Spacecraft Latitude')
axarr[0].set_ylabel('latitude (sm deg)')
axarr[0].plot(sm.ticks.MJD, sm.lati)
axarr[1].set_title('Spacecraft Longitude')
axarr[1].set_ylabel('longitude (sm deg)')
axarr[1].plot(sm.ticks.MJD, sm.long)
plt.xlabel('time (unit)')
plt.show()
f, a = plt.subplots(1)
plt.title('Solar Magnetic Orbit')
plt.xlabel('Solar Magnetic Longitude')
plt.ylabel('Solar Magnetic Latitude')
a.plot(sm.long,sm.lati)
plt.show()
psi = tsyg.getTilt(t)
plt.plot(t,psi)
plt.title('Dipole Tilt')
plt.xlabel('MJD')
plt.ylabel('Dipole Tilt (gsm deg)')
plt.show()
# here the OUTPUT, psi is in degrees
# output cusp position with the dipole tilt
r = 1.127
psi = np.deg2rad(psi)
phi_c = tsyg.getPhi_c(r,psi)
phi_c = np.rad2deg(phi_c)
plt.figure()
plt.plot(t,phi_c)
plt.title('Cusp location over time for $r = 1.127R_{e}$')
plt.xlabel('MJD')
plt.ylabel('Cusp Location (gsm deg)')
plt.show()
# next lets get the spherical coordinates and plot the latitude and longitudes, which SHOULD
# allow us to get the cusp crossings.
xc,yc,zc = tsyg.tsygCyl2Car(phi_c,r)
#originally it was GSM
cusp_location = coord.Coords([[i,j,k] for i,j,k in zip(xc,yc,zc)], 'GSM', 'car')
cusp_location.ticks = Ticktock(t,'MJD')
cusp_location = cusp_location.convert('GEI','sph')
plt.scatter(cusp_location.long, cusp_location.lati)
plt.scatter(gsm.long,gsm.lati,color='green')
plt.xlabel('gei longitude (deg)')
plt.ylabel('gei latitude (deg)')
plt.title('Cusp Latitude vs. Longitude (gei)')
plt.show()
Explanation: So I suppose this does show that, for a region of the stationary cusp, the satellite potentially crosses it. However, the cusp location moves throughout the day, so it's conceivable that the cusp avoids the orbit of the satellite, and I need a way to test for that. The other big question is how to integrate dipole tilt into this at the same time. The first question I need to answer is whether or not the dipole tilt matters in SM.
One other approach I could take is to come up with the cusp position as $f(\rho,t)$, then compare that latitude and longitude to the spacecraft latitude and longitude. That's really what I need to do.
Step by Step what I will attempt to do:
Move the cusp and prove that it is moving. I'll use the r = 1.127Re track to do this and one year's worth of data. x
Once I have the cusp latitude and longitude, compare the satellite latitude and longitude.
Do the cusp crossings count.
Plot the lat/lon of the cusp/spacecraft and the cusp crossings subplots together.
From here on we will be using the data from a year's flight around the earth, because these trends are periodic over a year.
End of explanation
# plot of the latitude and the longitude of the spacecraft at each timestep,
f,ax = plt.subplots(2)
# i can probably use a list comprehension here for the cusp location
# just feel really unmotivated rn
# xk,yk,zk = tsyg.tsygCyl2Car()
# longitude vs. time
ax[0].set_title('Cusp and Satellite Longitude')
ax[1].set_title('Cusp and Satellite Latitude')
ax[0].plot(gsm.ticks.MJD, gsm.long)
ax[1].plot(gsm.ticks.MJD, gsm.lati)
plt.show()
# quick test of tsyg.orbitalCuspLocation, which should give the cusp location along the spacecraft orbit
x, y, z = tsyg.orbitalCuspLocation(cvals, t)
fig, ax = plt.subplots()
ax.plot(t, phi_c)  # sanity check: cusp colatitude over time
plt.show()
Explanation: The biggest issue with this plot is that we aren't able to see where the cusp is at a given time; it's just a ground track. Therefore, we also need to put in the plots of lat/lon vs. time.
Close to solving this. I think my cusp actually behaves somewhat correctly. The above plot was actually really promising, but for some reason my cusp latitude doesn't seem to change. Now I need to use the Tsyganenko equation as a function of the satellite's location. It really doesn't make much sense to me why the cusp wouldn't seem to change in latitude. My guess is that there are tiny circles there if you look closely at the data.
I'll deal with what I suspect are inaccuracies later; I feel like I've done okay thus far. Still, I "feel" like the equations are wrong.
Open Questions at this point:
1. should the latitude go so low?
2.
End of explanation |
11,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objective
Show approximations of a given function via least-squares (Gauss) approximation, i.e., projection onto an orthonormal basis
Additionally, apply Gram-Schmidt for "orthonormalizing" a set of functions
Step1: definitions
Step2: Define Gram-Schmidt
Step3: now approximate a function | Python Code:
# importing
import numpy as np
import scipy.signal
import scipy as sp
import sympy as sym
from sympy.plotting import plot
Explanation: Content and Objective
Show approximations of a given function via least-squares (Gauss) approximation, i.e., projection onto an orthonormal basis
Additionally, apply Gram-Schmidt for "orthonormalizing" a set of functions
End of explanation
# define symbol
x = sym.Symbol('x')
# function to be approximated
f = sym.cos( x )
f = sym.exp( x )
#f = sym.sqrt( x )
# define lower and upper bound for L[a,b]
# -> might be relevant to be changed if you are adapting the function to be approximated
a = -1
b = 1
Explanation: definitions
End of explanation
# basis and their number of functions
M = [ x**c for c in range( 0, 4 ) ]
n = len( M )
print(M)
# apply Gram-Schmidt for user-defined set M
# init ONB
ONB = [ ]
# loop for new functions and apply Gram-Schmidt
for _n in range( n ):
# get function
f_temp = M[ _n ]
# subtract influence of past ONB functions
if _n >= 1:
for _k in range( _n ):
f_temp -= sym.integrate( M[ _n ] * ONB[ _k ], (x,a,b) ) * ONB[ _k ]
# get norm
norm = float( sym.integrate( f_temp * f_temp , (x,a,b) ) )
# return normalized function
ONB.append( f_temp / np.sqrt( norm) )
print(ONB)
# opt in if you like to see the correlation matrix
if 0:
corr_matrix = np.zeros( ( n, n ) )
for _m in range( n ):
for _n in range( n ):
corr_matrix[ _m, _n ] = float( sym.integrate( ONB[_m] * ONB[_n], (x,a,b) ) )
np.set_printoptions(precision=2)
corr_matrix[ np.isclose( corr_matrix, 0 ) ] = 0
print( corr_matrix )
# opt in if you like to see figures of the base functions
# NOTE: Become unhandy if it's too many of them
if 0:
for _n in range( n):
p = plot( M[_n], (x,a,b), show=False )
p.extend( plot( ONB[_n], (x,a,b), line_color='r', show=False ) )
p.show()
Explanation: Define Gram-Schmidt
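In formulas, the loop in the cell above performs classical Gram-Schmidt on $L^2[a,b]$ with inner product $\langle f,g\rangle = \int_a^b f(x)\,g(x)\,dx$: for each monomial $m_n$ it computes

$$u_n = m_n - \sum_{k<n} \langle m_n, e_k\rangle\, e_k, \qquad e_n = \frac{u_n}{\sqrt{\langle u_n, u_n\rangle}},$$

where the $e_n$ are the orthonormal functions collected in ONB.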
End of explanation
# init approx and extend successively
approx = 0
# add next ONB function with according coefficient
for _n in range( n ):
coeff = sym.integrate( f * ONB[ _n ], (x,a,b) )
approx += coeff * ONB[ _n ]
# if you like to see the function
print( approx )
p = plot( f, (x,a,b), show=False)
p.extend( plot( approx, (x,a,b), line_color='r', show=False) )
p.show()
plot( f - approx, (x,a,b) )
Explanation: now approximate a function
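The loop in the cell above builds the orthogonal projection

$$\hat f = \sum_{n} \langle f, e_n\rangle\, e_n,$$

which is the best approximation of $f$ within the span of the ONB with respect to the $L^2[a,b]$ norm.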
End of explanation |
11,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS228 Python Tutorial
Adapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson (http
Step1: Python versions
There are currently two different supported versions of Python, 2.7 and 3.4. Somewhat confusingly, Python 3.0 introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.4 and vice versa. For this class all code will use Python 2.7.
You can check your Python version at the command line by running python --version.
Basic data types
Numbers
Integers and floats work as you would expect from other languages
Step2: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.)
Step3: Now we let's look at the operations
Step4: Strings
Step5: String objects have a bunch of useful methods; for example
Step6: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types
Step7: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing
Step8: Loops
You can loop over the elements of a list like this
Step9: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step10: List comprehensions
Step11: You can make this code simpler using a list comprehension
Step12: List comprehensions can also contain conditions
Step13: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this
Step14: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary
Step15: If you want access to keys and their corresponding values, use the iteritems method
Step16: Dictionary comprehensions
Step17: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following
Step18: Loops
Step19: Set comprehensions
Step20: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example
Step21: Functions
Python functions are defined using the def keyword. For example
Step22: We will often define functions to take optional keyword arguments, like this
Step23: Classes
The syntax for defining classes in Python is straightforward
Step24: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package
Step25: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets
Step26: Numpy also provides many functions to create arrays
Step27: Array indexing
Numpy offers several ways to index into arrays.
Slicing
Step28: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step29: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step30: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array
Step31: Integer array indexing
Step32: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step33: Boolean array indexing
Step34: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example
Step35: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module
Step36: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step37: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step38: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object
Step39: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this
Step40: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step41: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step42: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules
Step43: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
Step44: By running this special iPython command, we will be displaying plots inline
Step45: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example
Step46: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step47: Subplots
You can plot different things in the same figure using the subplot function. Here is an example | Python Code:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) / 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print quicksort([3,6,8,10,1,2,1])
Explanation: CS228 Python Tutorial
Adapted by Volodymyr Kuleshov and Isaac Caswell from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).
Introduction
Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.
Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).
In this tutorial, we will cover:
Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes
Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting
Matplotlib: Plotting, Subplots, Images
IPython: Creating notebooks, Typical workflows
Basics of Python
Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
End of explanation
x = 3
print x, type(x)
print x + 1 # Addition;
print x - 1 # Subtraction;
print x * 2 # Multiplication;
print x ** 2 # Exponentiation;
x += 1
print x # Prints "4"
x *= 2
print x # Prints "8"
y = 2.5
print type(y) # Prints "<type 'float'>"
print y, y + 1, y * 2, y ** 2 # Prints "2.5 3.5 5.0 6.25"
Explanation: Python versions
There are currently two different supported versions of Python, 2.7 and 3.4. Somewhat confusingly, Python 3.0 introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.4 and vice versa. For this class all code will use Python 2.7.
You can check your Python version at the command line by running python --version.
Basic data types
Numbers
Integers and floats work as you would expect from other languages:
End of explanation
t, f = True, False
print type(t) # Prints "<type 'bool'>"
Explanation: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
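For instance (Python 2 syntax, a quick illustration):
```python
big = 10 ** 100   # automatically promoted to a long in Python 2
z = 2 + 3j        # a complex number
print type(big), type(z), z.real, z.imag
```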
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):
End of explanation
print t and f # Logical AND;
print t or f # Logical OR;
print not t # Logical NOT;
print t != f # Logical XOR;
Explanation: Now we let's look at the operations:
End of explanation
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print hello, len(hello)
hw = hello + ' ' + world # String concatenation
print hw # prints "hello world"
hw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting
print hw12 # prints "hello world 12"
Explanation: Strings
End of explanation
s = "hello"
print s.capitalize() # Capitalize a string; prints "Hello"
print s.upper() # Convert a string to uppercase; prints "HELLO"
print s.rjust(7) # Right-justify a string, padding with spaces; prints " hello"
print s.center(7) # Center a string, padding with spaces; prints " hello "
print s.replace('l', '(ell)') # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print ' world '.strip() # Strip leading and trailing whitespace; prints "world"
Explanation: String objects have a bunch of useful methods; for example:
End of explanation
xs = [3, 1, 2] # Create a list
print xs, xs[2]
print xs[-1] # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print xs
xs.append('bar') # Add a new element to the end of the list
print xs
x = xs.pop() # Remove and return the last element of the list
print x, xs
Explanation: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
End of explanation
nums = range(5) # range is a built-in function that creates a list of integers
print nums # Prints "[0, 1, 2, 3, 4]"
print nums[2:4] # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print nums[2:] # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print nums[:2] # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print nums[:]     # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print nums[:-1]   # Slice indices can be negative; prints "[0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print nums        # Prints "[0, 1, 8, 9, 4]"
Explanation: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
End of explanation
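# Added illustration (not part of the original tutorial): slices can also take a step.
nums = range(5)
print nums[::2]    # Every second element; prints "[0, 2, 4]"
print nums[1::2]   # Every second element, starting at index 1; prints "[1, 3]"
print nums[::-1]   # A reversed copy of the list; prints "[4, 3, 2, 1, 0]"
Explanation: This added example shows the optional third slice argument (the step), which the slicing description above does not demonstrate; a negative step walks the list backwards.
End of explanation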
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print animal
Explanation: Loops
You can loop over the elements of a list like this:
End of explanation
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print '#%d: %s' % (idx + 1, animal)
Explanation: If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print squares
Explanation: List comprehensions:
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print squares
Explanation: You can make this code simpler using a list comprehension:
End of explanation
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print even_squares
Explanation: List comprehensions can also contain conditions:
End of explanation
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print d['cat'] # Get an entry from a dictionary; prints "cute"
print 'cat' in d # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print d['fish'] # Prints "wet"
print d['monkey'] # KeyError: 'monkey' not a key of d
print d.get('monkey', 'N/A') # Get an element with a default; prints "N/A"
print d.get('fish', 'N/A') # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print d.get('fish', 'N/A') # "fish" is no longer a key; prints "N/A"
Explanation: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print 'A %s has %d legs' % (animal, legs)
Explanation: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.iteritems():
print 'A %s has %d legs' % (animal, legs)
Explanation: If you want access to keys and their corresponding values, use the iteritems method:
End of explanation
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print even_num_to_square
Explanation: Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
End of explanation
animals = {'cat', 'dog'}
print 'cat' in animals # Check if an element is in a set; prints "True"
print 'fish' in animals # prints "False"
animals.add('fish') # Add an element to a set
print 'fish' in animals
print len(animals) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print len(animals)
animals.remove('cat') # Remove an element from a set
print len(animals)
Explanation: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following:
End of explanation
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print '#%d: %s' % (idx + 1, animal)
# Prints "#1: fish", "#2: dog", "#3: cat"
Explanation: Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
End of explanation
from math import sqrt
print {int(sqrt(x)) for x in range(30)}
Explanation: Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:
End of explanation
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print type(t)
print d[t]
print d[(1, 2)]
t[0] = 1 # Raises a TypeError: tuples are immutable, so item assignment is not allowed
Explanation: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:
End of explanation
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print sign(x)
Explanation: Functions
Python functions are defined using the def keyword. For example:
End of explanation
def hello(name, loud=False):
if loud:
print 'HELLO, %s' % name.upper()
else:
print 'Hello, %s!' % name
hello('Bob')
hello('Fred', loud=True)
Explanation: We will often define functions to take optional keyword arguments, like this:
End of explanation
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print 'HELLO, %s!' % self.name.upper()
else:
print 'Hello, %s' % self.name
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
Explanation: Classes
The syntax for defining classes in Python is straightforward:
End of explanation
import numpy as np
Explanation: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package:
End of explanation
a = np.array([1, 2, 3]) # Create a rank 1 array
print type(a), a.shape, a[0], a[1], a[2]
a[0] = 5 # Change an element of the array
print a
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print b
print b.shape
print b[0, 0], b[0, 1], b[1, 0]
Explanation: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
End of explanation
a = np.zeros((2,2)) # Create an array of all zeros
print a
b = np.ones((1,2)) # Create an array of all ones
print b
c = np.full((2,2), 7) # Create a constant array
print c
d = np.eye(2) # Create a 2x2 identity matrix
print d
e = np.random.random((2,2)) # Create an array filled with random values
print e
Explanation: Numpy also provides many functions to create arrays:
End of explanation
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print b
Explanation: Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
End of explanation
print a[0, 1]
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print a[0, 1]
Explanation: A slice of an array is a view into the same data, so modifying it will modify the original array.
End of explanation
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print a
Explanation: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:
End of explanation
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print row_r1, row_r1.shape
print row_r2, row_r2.shape
print row_r3, row_r3.shape
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print col_r1, col_r1.shape
print
print col_r2, col_r2.shape
Explanation: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array:
End of explanation
a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,); prints "[1 4 5]"
print a[[0, 1, 2], [0, 1, 0]]
# The above example of integer array indexing is equivalent to this:
print np.array([a[0, 0], a[1, 1], a[2, 0]])
# When using integer array indexing, you can reuse the same
# element from the source array:
print a[[0, 0], [1, 1]]
# Equivalent to the previous integer array indexing example
print np.array([a[0, 1], a[0, 1]])
Explanation: Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
End of explanation
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print a
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print a[np.arange(4), b] # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print a
Explanation: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
End of explanation
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print bool_idx
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print a[bool_idx]
# We can do all of the above in a single concise statement:
print a[a > 2]
Explanation: Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
End of explanation
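# Added illustration (not part of the original tutorial): boolean masks can also assign.
a = np.array([[1,2], [3, 4], [5, 6]])
a[a > 2] = 0      # Set every element greater than 2 to zero, in place
print a           # Prints "[[1 2]
                  #          [0 0]
                  #          [0 0]]"
Explanation: A small added example: the same boolean mask used for selection above can also appear on the left-hand side of an assignment to modify the matching elements in place.
End of explanation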
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print x.dtype, y.dtype, z.dtype
Explanation: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
End of explanation
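# Added illustration (not part of the original tutorial): converting between datatypes.
x = np.array([1, 2], dtype=np.int64)
y = x.astype(np.float64)   # astype returns a copy with the requested datatype
print x.dtype, y.dtype     # Prints "int64 float64"
Explanation: This added snippet shows astype, the usual way to convert an existing array to another datatype; it complements the dtype arguments demonstrated above.
End of explanation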
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print x + y
print np.add(x, y)
# Elementwise difference; both produce the array
print x - y
print np.subtract(x, y)
# Elementwise product; both produce the array
print x * y
print np.multiply(x, y)
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print x / y
print np.divide(x, y)
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print np.sqrt(x)
Explanation: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
End of explanation
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print v.dot(w)
print np.dot(v, w)
# Matrix / vector product; both produce the rank 1 array [29 67]
print x.dot(v)
print np.dot(x, v)
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print x.dot(y)
print np.dot(x, y)
Explanation: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
End of explanation
x = np.array([[1,2],[3,4]])
print np.sum(x) # Compute sum of all elements; prints "10"
print np.sum(x, axis=0) # Compute sum of each column; prints "[4 6]"
print np.sum(x, axis=1) # Compute sum of each row; prints "[3 7]"
Explanation: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:
End of explanation
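# Added illustration (not part of the original tutorial): other reductions work like sum.
x = np.array([[1,2],[3,4]])
print np.mean(x)          # Mean of all elements; prints "2.5"
print np.max(x, axis=0)   # Max of each column; prints "[3 4]"
print np.min(x, axis=1)   # Min of each row; prints "[1 3]"
Explanation: An added example showing that other reductions such as mean, max and min accept the same axis argument as sum.
End of explanation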
print x
print x.T
v = np.array([1,2,3])
print v
print v.T
Explanation: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:
End of explanation
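# Added illustration (not part of the original tutorial): reshaping an array.
a = np.arange(6)            # Shape (6,)
b = np.reshape(a, (2, 3))   # Same data viewed as a 2x3 array
print b
# Prints "[[0 1 2]
#          [3 4 5]]"
print b.reshape(-1)         # -1 lets numpy infer the size; prints "[0 1 2 3 4 5]"
Explanation: Since reshaping is mentioned above but only transposition is demonstrated, this added snippet shows np.reshape and the -1 shortcut that lets numpy infer a dimension.
End of explanation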
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print y
Explanation: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
End of explanation
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print vv # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print y
Explanation: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:
End of explanation
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print y
Explanation: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
End of explanation
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print np.reshape(v, (3, 1)) * w
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print x + v
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print (x.T + w).T
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print x + np.reshape(w, (2, 1))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print x * 2
Explanation: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules:
If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
The arrays can be broadcast together if they are compatible in all dimensions.
After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.
In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension
If this explanation does not make sense, try reading the explanation from the documentation or this explanation.
Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.
Here are some applications of broadcasting:
End of explanation
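# Added illustration (not part of the original tutorial): tracing the broadcasting rules.
a = np.ones((4, 3))   # Shape (4, 3)
b = np.arange(3)      # Shape (3,)
# The first rule pads b to shape (1, 3); the last rule then stretches it along
# the first dimension, so b behaves as if it had shape (4, 3).
print (a + b).shape                  # Prints "(4, 3)"
print (a + b.reshape(1, 3)).shape    # Explicit padding gives the same result
Explanation: This added example walks the broadcasting rules listed above on a concrete pair of shapes, (4, 3) and (3,), so the padding and stretching steps can be seen explicitly.
End of explanation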
import matplotlib.pyplot as plt
Explanation: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
End of explanation
%matplotlib inline
Explanation: By running this special iPython command, we will be displaying plots inline:
End of explanation
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
Explanation: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
End of explanation
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
Explanation: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation |
11,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A method to use the present_load to balance the leg of poppy
Step1: A trick to switch from real time to simulated time using a wrapper time class (because my V-REP does not run in real time; it is about 3 times slower)
Step2: The present load of hip and ankle
Step3: The present load as a function of time
Step4: Combining several present_load readings to obtain a more stable value.
Step5: We need a primitive to record the load
Step6: Load of ankle and hip after an ankle movement.
Step7: Load of ankle and hip during a knee movement
Step8: On the previous graph, you can see the two forces in action during a movement. First, an inertia force due to the acceleration of the body: it briefly decreases the motor load and acts in the direction opposite to the move. Then the load increases because poppy's center of gravity has shifted backward (toward poppy's back). So, when poppy is balanced with its center of gravity just above its feet, the load of the hip and ankle motors should be close to zero.
Now, we are going to apply a very simple correction to maintain the load of the ankle close to zero.
Step9: The system is oscillating for two reasons
Step10: The system stabilises after a few oscillations. What is possible now is to introduce a derivative term.
Step11: It is possible to add one more correction, as a function of the integral of the difference between the measured load and the goal load.
Step12: The correction is now more efficient. The correction is a kind of PID controller; if you like the theory and mathematics, there are more explanations here.
Now, we want to correct the balance while the knees are moving (and not afterwards, like in the previous examples).
Step13: We can control several motors with the same method; for example, we can add the hip.
Step14: Another try, with control during both the descent and the rise, and with different PID parameters.
from poppy.creatures import PoppyHumanoid
poppy = PoppyHumanoid(simulator='vrep')
%pylab inline
#import time
Explanation: A method to use the present_load to balance the leg of poppy
End of explanation
import time as real_time
class time:
def __init__(self,robot):
self.robot=robot
def time(self):
t_simu = self.robot.current_simulation_time
return t_simu
def sleep(self,t):
t0 = self.robot.current_simulation_time
while (self.robot.current_simulation_time - t0) < t-0.01:
real_time.sleep(0.001)
time = time(poppy)
print time.time()
time.sleep(0.025) #0.025 is the minimum step according to the V-REP defined dt
print time.time()
Explanation: A trick to switch from real time to simulated time using a wrapper time class (because my V-REP does not run in real time; it is about 3 times slower)
End of explanation
print poppy.l_ankle_y.present_load
print poppy.r_ankle_y.present_load
print poppy.l_hip_y.present_load
print poppy.r_hip_y.present_load
Explanation: The present load of hip and ankle
End of explanation
load_r = []
load_l = []
load1_r = []
load1_l = []
t = []
t0 = time.time()
while time.time()-t0 <5:
t_simu = poppy.current_simulation_time
time.sleep(0.01)
if poppy.current_simulation_time != t_simu:
load_r.append(poppy.r_ankle_y.present_load)
load_l.append(poppy.l_ankle_y.present_load)
load1_r.append(poppy.r_hip_y.present_load)
load1_l.append(poppy.l_hip_y.present_load)
t.append(poppy.current_simulation_time)
poppy.l_ankle_y.goto_position(-5, 1, wait=False)
poppy.r_ankle_y.goto_position(-5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
t_simu = poppy.current_simulation_time
time.sleep(0.01)
if poppy.current_simulation_time != t_simu:
load_r.append(poppy.r_ankle_y.present_load)
load_l.append(poppy.l_ankle_y.present_load)
load1_r.append(poppy.r_hip_y.present_load)
load1_l.append(poppy.l_hip_y.present_load)
t.append(poppy.current_simulation_time)
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=True)
figure(1)
plot(t,load_r)
plot(t,load_l)
figure(2)
plot(t,load1_r)
plot(t,load1_l)
Explanation: The present load as a function of time
End of explanation
class load:
def __init__(self,nb_record=10,goal=0):
self.nb_record = nb_record
self.goal = goal
self.record_pos=[0 for i in range(nb_record)]
self.filter_load=[[0,0] for i in range(nb_record*10)]
def add(self,l):
self.record_pos.append(l-self.goal)
del self.record_pos[0]
self.filter_load.append([time.time(),sum(self.record_pos)/len(self.record_pos)])
del self.filter_load[0]
def integrate(self,nb_values=10):
x=[i[0] for i in self.filter_load]
y=[i[1] for i in self.filter_load]
return np.trapz(y[-nb_values-1:-1],x[-nb_values-1:-1])
def derivative(self):
return (self.filter_load[-1][1]-self.filter_load[-3][1])/(self.filter_load[-1][0]-self.filter_load[-3][0])
def last(self):
return self.filter_load[-1][1]
Explanation: Combining several present_load readings to obtain a more stable value.
End of explanation
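# Added illustration (not in the original notebook): quick check of the load filter above.
test_load = load(nb_record=5)
for v in [10, 10, 10, 10, 10]:
    test_load.add(v)
print test_load.last()   # Ramps up to 10 as the initial zeros leave the averaging window
Explanation: A small added sanity check of the load class defined above: feeding it a constant reading shows how the moving average ramps toward that value as the window fills. It reuses the simulated-time time object, so it assumes the V-REP connection is still open.
End of explanation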
from pypot.primitive import Primitive
class graph_primitive(Primitive):
def setup(self):
self.load_ankle = []
self.load_hip = []
self.t=[]
self.correction = []
self.position_ankle = []
def run(self):
while not self.should_stop():
self.load_ankle.append(load_ankle.last())
self.load_hip.append(load_hip.last())
self.t.append(time.time())
self.correction.append(correction_value)
self.position_ankle.append(poppy.r_ankle_y.present_position)
time.sleep(0.02)
Explanation: We need a primitive to record the load:
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_ankle_y.goto_position(-5, 1, wait=False)
poppy.r_ankle_y.goto_position(-5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
graph.stop()
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=True)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
Explanation: Load of ankle and hip after an ankle movement.
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_knee_y.goto_position(5, 1, wait=False)
poppy.r_knee_y.goto_position(5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
graph.stop()
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=False)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
Explanation: Load of ankle and hip during a knee movement
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_knee_y.goto_position(5, 1, wait=False)
poppy.r_knee_y.goto_position(5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
t0 = time.time()
while time.time()-t0 <15:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
if load_ankle.last() > 2:
while load_ankle.last() > 2:
correction_value = -0.15
r_ankle_y_pos += correction_value
l_ankle_y_pos += correction_value
#print "load:", load_ankle.last()
#print "negative correction:", r_ankle_y_pos, l_ankle_y_pos
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.05) # waiting for the movement to finish - according to the dt minimum define in VREP
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
elif load_ankle.last() < -2:
while load_ankle.last() < -2:
correction_value = 0.15
r_ankle_y_pos += correction_value
l_ankle_y_pos += correction_value
#print "load:", load_ankle.last()
#print "positive correction:", r_ankle_y_pos, l_ankle_y_pos
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.05) # waiting for the movement to finish - according to the dt minimum define in VREP
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
else :
#print "poppy is balanced"
time.sleep(0.025)
graph.stop()
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=False)
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=True)
figure(1)
plot(graph.t,graph.load_ankle,"b-")
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
Explanation: On the previous graph, you can see the two forces in action during a movement. First, an inertia force due to the acceleration of the body: it briefly decreases the motor load and acts in the direction opposite to the move. Then the load increases because poppy's center of gravity has shifted backward (toward poppy's back). So, when poppy is balanced with its center of gravity just above its feet, the load of the hip and ankle motors should be close to zero.
Now, we are going to apply a very simple correction to maintain the load of the ankle close to zero.
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_knee_y.goto_position(5, 1, wait=False)
poppy.r_knee_y.goto_position(5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
t0 = time.time()
while time.time()-t0 <15:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.01*load_ankle.last()
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
graph.stop()
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=False)
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=True)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
Explanation: The system is oscillating for two reasons:
* The correction has an effect on the sensor. The correction movement creates an inertia force that stops when the movement stops, so the correction overshoots once the movement ends and the motion restarts in the other direction.
* There is a delay between the action of the correction and its effect on the sensor.
What can we do to avoid the oscillation? A proportional correction is better because the closer you are to the goal, the less correction you need.
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_knee_y.goto_position(5, 1, wait=False)
poppy.r_knee_y.goto_position(5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
t0 = time.time()
while time.time()-t0 <15:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.01*load_ankle.last()+0.003*load_ankle.derivative()
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
graph.stop()
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=False)
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=True)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
Explanation: The system stabilises after a few oscillations. What is possible now is to introduce a derivative term.
End of explanation
graph = graph_primitive(poppy)
load_ankle = load(40)
load_hip = load(40)
correction_value = 0
graph.start()
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
poppy.l_knee_y.goto_position(5, 1, wait=False)
poppy.r_knee_y.goto_position(5, 1, wait=False)
t0 = time.time()
while time.time()-t0 <5:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
time.sleep(0.025)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
t0 = time.time()
while time.time()-t0 <15:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.01*load_ankle.last()+0.003*load_ankle.derivative()+0.002*load_ankle.integrate(30)
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
graph.stop()
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=False)
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=True)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
figure(3)
plot(graph.t,graph.position_ankle)
xlabel('time seconds')
ylabel('degree')
title ('position of ankle')
Explanation: It is possible to add one more correction, as a function of the integral of the difference between the measured load and the goal load.
End of explanation
time.sleep(1)
graph = graph_primitive(poppy)
load_ankle = load(nb_record=40,goal=0)
load_hip = load(40)
correction_value = 0
poppy.l_knee_y.goto_position(70, 6, wait=False)
poppy.r_knee_y.goto_position(70, 6, wait=False)
graph.start()
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
t0 = time.time()
while time.time()-t0 <10:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.015*load_ankle.last()+0.004*load_ankle.derivative()+0.005*load_ankle.integrate(80)
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
graph.stop()
poppy.l_ankle_y.goto_position(0, 1.5, wait=False)
poppy.r_ankle_y.goto_position(0, 1.5, wait=False)
poppy.l_knee_y.goto_position(0, 1.5, wait=False)
poppy.r_knee_y.goto_position(0, 1.5, wait=True)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
figure(3)
plot(graph.t,graph.position_ankle)
xlabel('time seconds')
ylabel('degree')
title ('position of ankle')
Explanation: The correction is now more efficient. The correction is a kind of PID controller; if you like the theory and mathematics, there are more explanations here.
Now, we want to correct the balance while the knees are moving (and not afterwards, like in the previous examples).
End of explanation
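# Added illustration (not in the original notebook): the inline correction as a helper.
def pid_correction(load_sensor, kp, kd, ki, nb_values=30):
    # Proportional + derivative + integral terms, exactly as written inline above.
    return (kp * load_sensor.last()
            + kd * load_sensor.derivative()
            + ki * load_sensor.integrate(nb_values))

# Example with the gains used earlier for the ankle:
# correction_value = pid_correction(load_ankle, 0.01, 0.003, 0.002)
Explanation: This added helper simply packages the proportional, derivative and integral terms computed inline in the cells above into one function, to make the PID structure of the correction explicit. The gain values are the same hand-tuned numbers used earlier; nothing new is introduced.
End of explanation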
time.sleep(1)
graph = graph_primitive(poppy)
load_ankle = load(nb_record=40,goal=-8)
load_hip = load(40)
correction_value = 0
correction_value_hip = 0
graph.start()
i=0
while i<2 :
poppy.l_knee_y.goto_position(80, 9, wait=False)
poppy.r_knee_y.goto_position(80, 9, wait=False)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
r_hip_y_pos = poppy.r_hip_y.present_position
l_hip_y_pos = poppy.l_hip_y.present_position
t0 = time.time()
while time.time()-t0 <10:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.008*load_ankle.last()+0.002*load_ankle.derivative()+0.002*load_ankle.integrate(30)
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
correction_value_hip = 0.04*load_hip.last()+0.006*load_hip.derivative()+0.008*load_hip.integrate(30)
r_hip_y_pos -= correction_value_hip
l_hip_y_pos -= correction_value_hip
poppy.r_hip_y.goal_position = r_hip_y_pos
poppy.l_hip_y.goal_position = l_hip_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
poppy.l_ankle_y.goto_position(0, 1, wait=False)
poppy.r_ankle_y.goto_position(0, 1, wait=False)
poppy.l_hip_y.goto_position(0, 1, wait=False)
poppy.r_hip_y.goto_position(0, 1, wait=False)
poppy.l_knee_y.goto_position(0, 1, wait=False)
poppy.r_knee_y.goto_position(0, 1, wait=True)
i+=1
graph.stop()
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
figure(3)
plot(graph.t,graph.position_ankle)
xlabel('time seconds')
ylabel('degree')
title ('position of ankle')
Explanation: We can control several motors with the same method; for example, we can add the hip.
End of explanation
time.sleep(1)
graph = graph_primitive(poppy)
load_ankle = load(nb_record=40,goal=-4)
load_hip = load(40,goal=-2)
correction_value = 0
correction_value_hip = 0
graph.start()
i=0
while i<2 :
poppy.l_knee_y.goto_position(80, 8, wait=False)
poppy.r_knee_y.goto_position(80, 8, wait=False)
r_ankle_y_pos = poppy.r_ankle_y.present_position
l_ankle_y_pos = poppy.l_ankle_y.present_position
r_hip_y_pos = poppy.r_hip_y.present_position
l_hip_y_pos = poppy.l_hip_y.present_position
t0 = time.time()
while time.time()-t0 <10:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.008*load_ankle.last()+0.003*load_ankle.derivative()+0.005*load_ankle.integrate(80)
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
correction_value_hip = 0.04*load_hip.last()+0.006*load_hip.derivative()+0.008*load_hip.integrate(50)
r_hip_y_pos -= correction_value_hip
l_hip_y_pos -= correction_value_hip
poppy.r_hip_y.goal_position = r_hip_y_pos
poppy.l_hip_y.goal_position = l_hip_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
poppy.l_knee_y.goto_position(0, 6, wait=False)
poppy.r_knee_y.goto_position(0, 6, wait=False)
t0 = time.time()
while time.time()-t0 <10:
load_ankle.add((poppy.r_ankle_y.present_load+poppy.l_ankle_y.present_load)/2)
load_hip.add((poppy.r_hip_y.present_load+poppy.l_hip_y.present_load)/2)
correction_value = 0.008*load_ankle.last()+0.003*load_ankle.derivative()+0.005*load_ankle.integrate(80)
r_ankle_y_pos -= correction_value
l_ankle_y_pos -= correction_value
poppy.r_ankle_y.goal_position = r_ankle_y_pos
poppy.l_ankle_y.goal_position = l_ankle_y_pos
correction_value_hip = 0.04*load_hip.last()+0.006*load_hip.derivative()+0.008*load_hip.integrate(50)
r_hip_y_pos -= correction_value_hip
l_hip_y_pos -= correction_value_hip
poppy.r_hip_y.goal_position = r_hip_y_pos
poppy.l_hip_y.goal_position = l_hip_y_pos
time.sleep(0.025) # waiting for the movement to finish - according to the dt minimum define in VREP
i+=1
graph.stop()
poppy.l_ankle_y.goto_position(0, 2, wait=False)
poppy.r_ankle_y.goto_position(0, 2, wait=False)
poppy.l_hip_y.goto_position(0, 2, wait=False)
poppy.r_hip_y.goto_position(0, 2, wait=False)
poppy.l_knee_y.goto_position(0, 2, wait=False)
poppy.r_knee_y.goto_position(0, 2, wait=False)
figure(1)
plot(graph.t,graph.load_ankle)
xlabel('time seconds')
ylabel('load')
twinx()
plot(graph.t,graph.correction,"r-")
ylabel('correction')
title ('Load of ankle')
figure(2)
plot(graph.t,graph.load_hip)
xlabel('time seconds')
ylabel('load')
title ('Load of hip')
figure(3)
plot(graph.t,graph.position_ankle)
xlabel('time seconds')
ylabel('degree')
title ('position of ankle')
from pypot.primitive import Primitive
class load_primitive(Primitive):
    def __init__(self, robot, motors_name, nb_records, goal):
        # If you give more than one motor, the load for the motors will be combined;
        # if you want independent loads you have to instantiate several primitives.
        self.robot = robot
        self.motors = [getattr(self.robot, name) for name in motors_name]
        self.nb_records = nb_records
        self.goal = goal
        self.correction_value = 0
        Primitive.__init__(self, robot)

    def run(self):
        # Keep the filter on self and avoid shadowing the `load` class defined above.
        self.load = load(nb_record=self.nb_records, goal=self.goal)
        self.correction_value = 0
        # The PID correction loop itself is left as an unfinished draft in this notebook.
Explanation: Another try, with control during both the descent and the rise, and with different PID parameters.
End of explanation |
11,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework
Step1: Task
Replace the previous model with an equivalent one in prettytensor or tf.slim
Try to make your code as compact as possible
Step2: You can play generated sample using any midi player
Under linux I prefer timidity | Python Code:
import numpy as np
from music21 import stream, midi, tempo, note
from grammar import unparse_grammar
from preprocess import get_musical_data, get_corpus_data
from qa import prune_grammar, prune_notes, clean_up_notes
from generator import __sample, __generate_grammar, __predict
import tflearn
N_epochs = 128 # default
data_fn = 'midi/' + 'original_metheny.mid' # 'And Then I Knew' by Pat Metheny
out_fn = 'midi/' + 'deepjazz_on_metheny...' + str(N_epochs) + '_epochs.midi'  # matches the filename played back at the end
max_len = 20
max_tries = 1000
diversity = 0.5
# musical settings
bpm = 130
# get data
chords, abstract_grammars = get_musical_data(data_fn)
corpus, values, val_indices, indices_val = get_corpus_data(abstract_grammars)
print('corpus length:', len(corpus))
print('total # of values:', len(values))
chords[0]
# imports needed for the original Keras model
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

def get_keras_model(max_len, N_values):
    # build a 2-layer stacked LSTM (the original Keras model that we will replace below)
model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(max_len, N_values)))
model.add(Dropout(0.2))
model.add(LSTM(128, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(N_values))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
return model
Explanation: Homework: Deep Jazz
End of explanation
from tflearn import input_data, lstm, fully_connected, regression, dropout, activation
def get_tflearn_model(max_len, N_values):
# Network building
net = input_data([None, max_len, N_values])
net = lstm(net, 128, return_seq=True)
net = dropout(net, 0.2)
net = tflearn.lstm(net, 128, return_seq=False)
net = dropout(net, 0.2)
net = fully_connected(net, N_values)
net = activation(net, activation='softmax')
net = tflearn.regression(net, loss='categorical_crossentropy', optimizer='rmsprop')
# Training
model = tflearn.DNN(net, tensorboard_dir='tflearn_log')
return model
get_model = get_tflearn_model
import numpy as np
''' Build a 2-layer LSTM from a training corpus '''
def build_model(corpus, val_indices, max_len, N_epochs=128):
# number of different values or words in corpus
N_values = len(set(corpus))
# cut the corpus into semi-redundant sequences of max_len values
step = 3
sentences = []
next_values = []
for i in range(0, len(corpus) - max_len, step):
sentences.append(corpus[i: i + max_len])
next_values.append(corpus[i + max_len])
print('nb sequences:', len(sentences))
# transform data into binary matrices
X = np.zeros((len(sentences), max_len, N_values), dtype=np.bool)
y = np.zeros((len(sentences), N_values), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, val in enumerate(sentence):
X[i, t, val_indices[val]] = 1
y[i, val_indices[next_values[i]]] = 1
model = get_model(max_len, N_values)
model.fit(X, y, batch_size=128, n_epoch=N_epochs)
return model
import tensorflow as tf
tf.reset_default_graph()
# build model
model = build_model(corpus=corpus, val_indices=val_indices,
max_len=max_len, N_epochs=N_epochs)
# set up audio stream
out_stream = stream.Stream()
# generation loop
curr_offset = 0.0
loopEnd = len(chords)
for loopIndex in range(1, loopEnd):
# get chords from file
curr_chords = stream.Voice()
for j in chords[loopIndex]:
curr_chords.insert((j.offset % 4), j)
# generate grammar
curr_grammar = __generate_grammar(model=model, corpus=corpus,
abstract_grammars=abstract_grammars,
values=values, val_indices=val_indices,
indices_val=indices_val,
max_len=max_len, max_tries=max_tries,
diversity=diversity)
curr_grammar = curr_grammar.replace(' A', ' C').replace(' X', ' C')
# Pruning #1: smoothing measure
curr_grammar = prune_grammar(curr_grammar)
# Get notes from grammar and chords
curr_notes = unparse_grammar(curr_grammar, curr_chords)
# Pruning #2: removing repeated and too close together notes
curr_notes = prune_notes(curr_notes)
# quality assurance: clean up notes
curr_notes = clean_up_notes(curr_notes)
# print # of notes in curr_notes
print('After pruning: %s notes' % (len([i for i in curr_notes
if isinstance(i, note.Note)])))
# insert into the output stream
for m in curr_notes:
out_stream.insert(curr_offset + m.offset, m)
for mc in curr_chords:
out_stream.insert(curr_offset + mc.offset, mc)
curr_offset += 4.0
out_stream.insert(0.0, tempo.MetronomeMark(number=bpm))
# Play the final stream through output (see 'play' lambda function above)
# play = lambda x: midi.realtime.StreamPlayer(x).play()
# play(out_stream)
# save stream
mf = midi.translate.streamToMidiFile(out_stream)
mf.open(out_fn, 'wb')
mf.write()
mf.close()
Explanation: Task
Replace the previous model with an equivalent one in prettytensor or tf.slim
Try to make your code as compact as possible
End of explanation
!! timidity midi/deepjazz_on_metheny...128_epochs.midi
Explanation: You can play generated sample using any midi player
Under linux I prefer timidity
End of explanation |
11,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Today's Objectives
0. Cloning LectureNotes
1. Opening & Navigating the Jupyter Notebook
2. Data type basics
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas & matplotlib
0. Cloning Lecture Notes
The course materials are maintained on github. The next lecture will discuss github in detail. Today, you'll get minimal instructions to get access to today's lecture materials.
Open a terminal session
Type 'git clone https
Step1: 2.2.2 Floats
Step2: 2.2.3 Strings
Step3: 2.3 Tuples
A tuple is an ordered sequence of objects. Tuples cannot be changed; they are immutable.
Step4: 2.4 Lists
A list is an ordered sequence of objects that can be changed.
Step5: 2.5 Dictionaries
A dictionary associates a key with a value. A value can be any object, even another dictionary.
Step6: 2.7 A Shakespearean Detour
Step7: Key insight
Step8: Insights on python name resolution
* Names are assigned within a context.
* Context changes with the function and module.
* Assigning a name in a function creates a new name.
* Referencing an unassigned name in function uses an existing name.
2.7 Object Essentials
Objects are a "packaging" of data and code. Almost all python entities are objects. | Python Code:
# Integer arithmetic
1 + 1
# Integer division versus floating point division
print (6 // 4, 6/ 4)
Explanation: Today's Objectives
0. Cloning LectureNotes
1. Opening & Navigating the Jupyter Notebook
2. Data type basics
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas & matplotlib
0. Cloning Lecture Notes
The course materials are maintained on github. The next lecture will discuss github in detail. Today, you'll get minimal instructions to get access to today's lecture materials.
Open a terminal session
Type 'git clone https://github.com/UWSEDS/LectureNotes.git'
Wait until the download is complete
cd LectureNotes
cd 02_Procedural_Python
1. Opening and Navigating the IPython Notebook
We will start today with the interactive environment that we will be using often through the course: the Jupyter Notebook.
We will walk through the following steps together:
Download miniconda (be sure to get Version 3.6) and install it on your system (hopefully you have done this before coming to class)
Use the conda command-line tool to update your package listing and install the IPython notebook:
Update conda's listing of packages for your system:
$ conda update conda
Install IPython notebook and all its requirements
$ conda install jupyter notebook
Navigate to the directory containing the course material. For example:
$ cd LectureNotes/02_Procedural_Python
You should see a number of files in the directory, including these:
```
$ ls
```
Type jupyter notebook in the terminal to start the notebook
$ jupyter notebook
If everything has worked correctly, it should automatically launch your default browser
Click on Lecture-Python-And-Data.ipynb to open the notebook containing the content for this lecture.
With that, you're set up to use the Jupyter notebook!
2. Data Types Basics
2.1 Data type theory
Components with the same capabilities are of the same type.
For example, the numbers 2 and 200 are both integers.
A type is defined recursively. Some examples.
A list is a collection of objects that can be indexed by position.
A list of integers contains an integer at each position.
A type has a set of supported operations. For example:
Integers can be added
Strings can be concatented
A table can find the name of its columns
What type is returned from the operation?
In python, members (components and operations) are indicated by a '.'
If a is a list, the a.append(1) adds 1 to the list.
2.2 Primitive types
The primitive types are integers, floats, strings, booleans.
2.2.1 Integers
End of explanation
# Have the full set of "calculator functions" but need the numpy package
import numpy as np
print (6.0 * 3, np.sin(2*np.pi))
# Floats can have a special value called nan ("not a number")
a = np.nan
3*a
Explanation: 2.2.2 Floats
End of explanation
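# Added illustration (not part of the original lecture): testing for nan.
a = np.nan
print(a == a)        # nan is not equal to anything, even itself; prints "False"
print(np.isnan(a))   # Use np.isnan to test for nan; prints "True"
Explanation: A short added example: because nan compares unequal even to itself, np.isnan is the reliable way to test for it.
End of explanation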
# Can concatenate, substring, find, count, ...
a = "The lazy"
b = "brown fox"
print ("Concatenation: ", a + b)
print ("First three letters: " + a[0:3])
print ("Index of 'z': " + str(a.find('z')))
Explanation: 2.2.3 Strings
End of explanation
a_tuple = (1, 'ab', (1,2))
a_tuple
a_tuple[2]
Explanation: 2.3 Tuples
A tuple is an ordered sequence of objects. Tuples cannot be changed; they are immutable.
End of explanation
a_list = [1, 'a', [1,2]]
a_list[0]
a_list.append(2)
a_list
a_list
dir(a_list)
help (a_list)
a_list.count(1)
Explanation: 2.4 Lists
A list is an ordered sequence of objects that can be changed.
End of explanation
dessert_dict = {} # Empty dictionary
dessert_dict['Dave'] = "Cake"
dessert_dict["Joe"] = ["Cake", "Pie"]
print (dessert_dict)
dessert_dict["Dave"]
# Add a new key whose value is an (initially empty) dictionary
dessert_dict["Bernease"] = {}
dessert_dict
dessert_dict["Bernease"] = {"Favorite": ["sorbet", "cobbler"], "Dislike": "Brownies"}
Explanation: 2.5 Dictionaries
A dictionary associates a key with a value. A value can be any object, even another dictionary.
End of explanation
# A first name shell game
first_int = 1
second_int = first_int
second_int += 1
second_int
# What is first_int?
first_int
# A second name shell game
a_list = ['a', 'aa', 'aaa']
b_list = a_list
b_list.append('bb')
b_list
# What is a_list?
a_list
# Create a deep copy
import copy
# A second name shell game
a_list = ['a', 'aa', 'aaa']
b_list = copy.deepcopy(a_list)
b_list.append('bb')
print("b_list = %s" % str(b_list))
print("a_list = %s" % str(a_list))
Explanation: 2.7 A Shakespearean Detour: "What's in a Name?"
Deep vs. Shallow Copies
A deep copy can be manipulated separately. A shallow copy is a pointer to the same data as the original.
End of explanation
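# Added illustration (not part of the original lecture): shallow vs. deep copy of nested data.
import copy
a_list = [[1, 2], [3, 4]]
shallow = copy.copy(a_list)      # New outer list, but the inner lists are shared
deep = copy.deepcopy(a_list)     # Fully independent copy
a_list[0].append(99)
print("shallow = %s" % str(shallow))  # Shows the change: the inner list is shared
print("deep = %s" % str(deep))        # Unaffected by the change
Explanation: An added example contrasting copy.copy and copy.deepcopy on nested lists: the shallow copy still points at the same inner lists, which is exactly the "pointer to the same data" behaviour described above.
End of explanation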
# Example 1 of name resolution in python
var = 10
def func(val):
var = val + 1
return val
# What is returned?
print("func(2) = %d" % func(2))
# What is var?
print("var = %d" % var)
# Example 2 of name resolution in python
var = 10
def func(val):
return val + var
# What is returned?
print("func(2) = %d" % func(2))
# What is var?
print("var = %d" % var)
Explanation: Key insight: Deep vs. Shallow Copies
* A deep copy can be manipulated separately from the original.
* A shallow copy cannot.
* Assigning a python immutable creates a deep copy. Non-immutables are shallow copies.
Name Resolution
The most common errors that you'll see in your python codes are:
* NameError
* AttributeError
A common error when using the bash shell is command not found.
Name resolution: Associating a name with code or data.
Resolving a name in the bash shell is done by searching the directories in the PATH environment variable. The first executable with the name is run.
End of explanation
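# Added illustration (not part of the original lecture): the two common errors named above.
try:
    print(not_defined_yet)   # Referencing a name that was never assigned
except NameError as e:
    print("NameError: %s" % e)
try:
    "abc".apend("d")         # Misspelled method name on a str object
except AttributeError as e:
    print("AttributeError: %s" % e)
Explanation: A small added demonstration of the NameError and AttributeError mentioned above, caught with try/except so the cell still runs to completion.
End of explanation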
# A list and a dict are objects.
# dict has been implemented so that you see its values when you type
# the instance name.
# This is done with many python objects, like list.
a_dict = {'a': [1, 2], 'b': [3, 4, 5]}
a_dict
# You access the data and methods (codes) associated with an object by
# using the "." operator. These are referred to collectively
# as attributes. Methods are followed by parentheses;
# values (properties) are not.
a_dict.keys()
# You can discover the attributes of an object using "dir"
dir(a_dict)
Explanation: Insights on python name resolution
* Names are assigned within a context.
* Context changes with the function and module.
* Assigning a name in a function creates a new name.
* Referencing an unassigned name in a function uses an existing name from an enclosing scope (e.g. the module's global scope).
2.7 Object Essentials
Objects are a "packaging" of data and code. Almost all python entities are objects.
End of explanation |
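# Extra sketch (not in the original notebook): even "simple" values are objects,
# so they carry attributes and methods just like lists and dicts do.
an_int = 5
print(type(an_int))             # <class 'int'>
print(an_int.bit_length())      # ints have methods too
print(isinstance(len, object))  # functions are objects as well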
11,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 3b
Step1: Verify tables exist
Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
Step2: Lab Task #1
Step3: Create two SQL statements to evaluate the model.
Step4: Lab Task #2
Step5: Create three SQL statements to EVALUATE the model.
Let's now retrieve the training statistics and evaluate the model.
Step6: We now evaluate our model on our eval dataset
Step7: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
Step8: Lab Task #3
Step9: Let's retrieve the training statistics
Step10: We now evaluate our model on our eval dataset
Step11: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse. | Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
Explanation: LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.
Learning Objectives
Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS
Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE
Create and evaluate linear model with ML.TRANSFORM
Introduction
In this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering using BigQuery ML. If you need a refresher, you can go back and look how we made a baseline model in the previous notebook BQML Baseline Model.
We will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
Explanation: Verify tables exist
Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_1
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
ML.FEATURE_CROSS(
# TODO: Cross categorical features
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is highest degree of all combinations.
Create model with feature cross.
End of explanation
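# Rough local analogy (not the lab solution, and not BigQuery syntax): a feature cross of two
# categorical columns is just every observed combination of their values. The toy pandas frame
# below only borrows the lab's column names for illustration.
import pandas as pd
toy = pd.DataFrame({"is_male": ["true", "false", "true"],
                    "plurality": ["Single(1)", "Twins(2)", "Single(1)"]})
toy["gender_plurality_cross"] = toy["is_male"] + "_x_" + toy["plurality"]
print(toy)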
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
# TODO: Select just the calculated RMSE
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
Explanation: Create two SQL statements to evaluate the model.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_2
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
# TODO: Bucketize mother_age
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
# TODO: Bucketize gestation_weeks
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #2: Model 2: Apply the BUCKETIZE Function
Bucketize is a pre-processing function that creates "buckets" (i.e., bins): it turns a continuous numerical feature into a string feature with bucket names as the values, with syntax ML.BUCKETIZE(feature, split_points), where split_points is an array of numerical points that determine the bucket bounds.
Apply the BUCKETIZE function within FEATURE_CROSS.
Hint: Create a model_2.
End of explanation
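# Rough local analogy (not the lab solution): ML.BUCKETIZE(feature, split_points) maps a numeric
# column to named buckets; pandas.cut does the same kind of binning in Python. The split points
# below are made up for illustration only.
import pandas as pd
ages = pd.Series([16, 24, 31, 38, 45], name="mother_age")
print(pd.cut(ages, bins=[0, 18, 25, 30, 35, 45, 99]).astype(str))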
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
Explanation: Create three SQL statements to EVALUATE the model.
Let's now retrieve the training statistics and evaluate the model.
End of explanation
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
Explanation: We now evaluate our model on our eval dataset:
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
Explanation: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_3
TRANSFORM(
# TODO: Add base features and label as you would in select
# TODO: Add transformed features as you would in select
)
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #3: Model 3: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the model_3 and run the query.
End of explanation
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
Explanation: Let's retrieve the training statistics:
End of explanation
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
Explanation: We now evaluate our model on our eval dataset:
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
Explanation: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
End of explanation |
11,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Tutorial - How to work with the OpenEnergy Platform (OEP)
<br>
<div class="alert alert-block alert-danger">
This is important information!
</div>
<div class="alert alert-block alert-info">
This is general information!
</div>
<div class="alert alert-block alert-success">
This is your task!
</div>
This tutorial gives you an overview of the OpenEnergy Platform and how you can work with the REST-full-HTTP API in Python. <br>
The full API documentation can be found on ReadtheDocs.io.
Part IV
0 Setup token <br>
1 Select data <br>
2 Make a pandas dataframe <br>
3 Make calculations <br>
4 Save results as csv and excel files <br>
Part IV
0. Setup token
<br>
<div class="alert alert-block alert-danger">
Do not push your token to GitHub!
</div>
Step2: 1. Select data
Step3: <div class="alert alert-block alert-info">
**200** successfully selected data! <br>
</div>
2. Make a pandas dataframe
Step4: 3. Make calculations
Get an overview of your DataFrame
Step5: Sum the installed Capacity by fuels and add the Unit MW to a new column. | Python Code:
__copyright__ = "Zentrum für nachhaltige Energiesysteme Flensburg"
__license__ = "GNU Affero General Public License Version 3 (AGPL-3.0)"
__url__ = "https://github.com/openego/data_processing/blob/master/LICENSE"
__author__ = "wolfbunke"
Explanation: <img src="http://193.175.187.164/static/OEP_logo_2_no_text.svg" alt="OpenEnergy Platform" height="100" width="100" align="left"/>
OpenEnergy Platform
<br>
End of explanation
import requests
import pandas as pd
from IPython.core.display import HTML
# oedb
oep_url= 'http://oep.iks.cs.ovgu.de/'
# token
your_token = ''
Explanation: Tutorial - How to work with the OpenEnergy Platform (OEP)
<br>
<div class="alert alert-block alert-danger">
This is important information!
</div>
<div class="alert alert-block alert-info">
This is general information!
</div>
<div class="alert alert-block alert-success">
This is your task!
</div>
This tutorial gives you an overview of the OpenEnergy Platform and how you can work with the REST-full-HTTP API in Python. <br>
The full API documentation can be found on ReadtheDocs.io.
Part IV
0 Setup token <br>
1 Select data <br>
2 Make a pandas dataframe <br>
3 Make calculations <br>
4 Save results as csv and excel files <br>
Part IV
0. Setup token
<br>
<div class="alert alert-block alert-danger">
Do not push your token to GitHub!
</div>
End of explanation
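# One common way to keep the token out of the notebook (not from the original tutorial):
# read it from an environment variable, so nothing secret is ever committed to GitHub.
# "OEP_TOKEN" is just an assumed variable name here.
import os
your_token = os.environ.get("OEP_TOKEN", "")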
# select powerplant data
schema = 'supply'
table = 'ego_dp_conv_powerplant'
where = 'version=v0.2.10'
conv_powerplants = requests.get(oep_url+'/api/v0/schema/'+schema+'/tables/'+table+'/rows/?where='+where, )
conv_powerplants.status_code
Explanation: 1. Select data
End of explanation
df_pp = pd.DataFrame(conv_powerplants.json())
Explanation: <div class="alert alert-block alert-info">
**200** successfully selected data! <br>
</div>
2. Make a pandas dataframe
End of explanation
df_pp.info()
Explanation: 3. Make calculations
Get an overview of your DataFrame:
End of explanation
results = df_pp[['capacity','fuel']].groupby('fuel').sum()
results['units'] = 'MW'
results
# Write DataFrame as csv
results.to_csv('Conventional_powerplants_germany.csv',
               sep=',',
               float_format='%.3f',
               decimal='.',
               date_format='%Y-%m-%d',
               encoding='utf-8')  # note: to_csv has no if_exists argument; it simply overwrites an existing file
# Write the results as xlsx file
writer = pd.ExcelWriter('Conventional_powerplants_germany.xlsx', engine='xlsxwriter')
# write results of installed Capacity by fuels
results.to_excel(writer, index=False, sheet_name='Installed Capacities by fuel')
# write original data in second sheet
df_pp.to_excel(writer, index=False, sheet_name='Conventional Powerplants')
# Close the Pandas Excel writer and output the Excel file.
writer.save()
Explanation: Sum the installed Capacity by fuels and add the Unit MW to a new column.
End of explanation |
11,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Dataset
Step2: Make Iris Dataset Imbalanced
Step3: Upsampling Minority Class To Match Majority | Python Code:
# Load libraries
import numpy as np
from sklearn.datasets import load_iris
Explanation: Title: Handling Imbalanced Classes With Upsampling
Slug: handling_imbalanced_classes_with_upsampling
Summary: How to handle imbalanced classes with upsampling during machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
<a alt="Upsampling" href="https://machinelearningflashcards.com">
<img src="handling_imbalanced_classes_with_upsampling/Upsampling_print.png" class="flashcard center-block">
</a>
In upsampling, for every observation in the majority class, we randomly select an observation from the minority class with replacement. The end result is the same number of observations from the minority and majority classes.
Preliminaries
End of explanation
# Load iris data
iris = load_iris()
# Create feature matrix
X = iris.data
# Create target vector
y = iris.target
Explanation: Load Iris Dataset
End of explanation
# Remove first 40 observations
X = X[40:,:]
y = y[40:]
# Create binary target vector indicating if class 0
y = np.where((y == 0), 0, 1)
# Look at the imbalanced target vector
y
Explanation: Make Iris Dataset Imbalanced
End of explanation
# Indices of each class' observations
i_class0 = np.where(y == 0)[0]
i_class1 = np.where(y == 1)[0]
# Number of observations in each class
n_class0 = len(i_class0)
n_class1 = len(i_class1)
# For every observation in class 1, randomly sample from class 0 with replacement
i_class0_upsampled = np.random.choice(i_class0, size=n_class1, replace=True)
# Join together class 0's upsampled target vector with class 1's target vector
np.concatenate((y[i_class0_upsampled], y[i_class1]))
Explanation: Upsampling Minority Class To Match Majority
End of explanation |
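# Alternative sketch (not part of the original recipe): scikit-learn's resample utility does the
# same bootstrap upsampling, and can resample the features and target together.
from sklearn.utils import resample
X_upsampled, y_upsampled = resample(X[i_class0], y[i_class0],
                                    replace=True,
                                    n_samples=n_class1,
                                    random_state=0)
print(np.concatenate((y_upsampled, y[i_class1])))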
11,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Forest
In random forests, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. In addition, when splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features.
As a result of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.
Data Preparation
Step1: Implementing Random Forest
Step2: Key input parameters (in addition to decision trees)
bootstrap
Step3: Exercise 1
Calculate the Feature Importance plot for max_depth = 6
Out-of-Bag Error
The out-of-bag (OOB) error is the average error for each training observation calculated using predictions from the trees that do not contain it in their respective bootstrap sample. This allows the RandomForest to be fit and validated whilst being trained. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
df = pd.read_csv("data/historical_loan.csv")
# refine the data
df.years = df.years.fillna(np.mean(df.years))
#Load the preprocessing module
from sklearn import preprocessing
categorical_variables = df.dtypes[df.dtypes=="object"].index.tolist()
for i in categorical_variables:
lbl = preprocessing.LabelEncoder()
lbl.fit(list(df[i]))
df[i] = lbl.transform(df[i])
df.head()
X = df.iloc[:,1:8]
y = df.iloc[:,0]
Explanation: Random Forest
In random forests, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. In addition, when splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features.
As a result of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.
Data Preparation
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X, y)
clf
Explanation: Implementing Random Forest
End of explanation
importances = clf.feature_importances_
# Importance of the features in the forest
importances
#Calculate the standard deviation of variable importance
std = np.std([tree.feature_importances_ for tree in clf.estimators_], axis=0)
std
indices = np.argsort(importances)[::-1]
indices
length = X.shape[1]
labels = []
for i in range(length):
labels.append(X.columns[indices[i]])
# Plot the feature importances of the forest
plt.figure(figsize=(16, 6))
plt.title("Feature importances")
plt.bar(range(length), importances[indices], yerr=std[indices], align="center")
plt.xticks(range(length), labels)
plt.xlim([-1, length])
plt.show()
Explanation: Key input parameters (in addition to decision trees)
bootstrap: Whether bootstrap samples are used when building trees
max_features: The number of features to consider when looking for the best split (auto = sqrt)
n_estimators: The number of trees in the forest
oob_score: Whether to use out-of-bag samples to estimate the generalization accuracy
Key output parameters
Feature Importance: The higher, the more important the feature
Out-of-Bag Score: Validation score of the training dataset obtained using an out-of-bag estimate.
Feature Importance
There are several ways to get feature "importances" with no strict consensus on what it means.
Mean Decrease Impurity
The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features.
In scikit-learn, it is implemented by using "gini importance" or "mean decrease impurity" and is defined as the total decrease in node impurity (weighted by the probability of reaching that node (which is approximated by the proportion of samples reaching that node)) averaged over all trees of the ensemble.
Mean Decrease Accuracy
In the literature or in some other packages, you can also find feature importances implemented as the "mean decrease accuracy". Basically, the idea is to measure the decrease in accuracy on OOB data when you randomly permute the values for that feature. If the decrease is low, then the feature is not important, and vice-versa.
By averaging those expected activity rates over random trees one can reduce the variance of such an estimate and use it for feature selection.
End of explanation
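# A hedged sketch of the "mean decrease accuracy" idea described above, using scikit-learn's
# permutation_importance (available in sklearn >= 0.22). For brevity it is computed on the
# training data here; a held-out set would give a less optimistic picture.
from sklearn.inspection import permutation_importance
perm = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, perm.importances_mean), key=lambda t: -t[1]):
    print("%-20s %.4f" % (name, score))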
import warnings
warnings.filterwarnings('ignore')
clf2 = RandomForestClassifier(warm_start=True, class_weight="balanced",
oob_score=True, max_features=None)
clf2.fit(X, y)
clf2.oob_score_
min_estimators = 10
max_estimators = 50
error_rate = []
for i in range(min_estimators, max_estimators + 1):
clf2.set_params(n_estimators=i)
clf2.fit(X, y)
oob_error = 1 - clf2.oob_score_
error_rate.append(oob_error)
error_rate_indice = [x for x in range(min_estimators, max_estimators + 1)]
plt.figure()
plt.figure(figsize=(16, 6))
plt.plot(error_rate_indice, error_rate)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.show()
Explanation: Exercise 1
Calculate the Feature Importance plot for max_depth = 6
Out-of-Bag Error
The out-of-bag (OOB) error is the average error for each training observation calculated using predictions from the trees that do not contain it in their respective bootstrap sample. This allows the RandomForest to be fit and validated whilst being trained.
End of explanation |
11,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Py-EMDE
Python Email Data Entry
The following code can gather data from weather stations reporting to the CHORDS portal, package it up into the proper format for GLOBE Email Data Entry , and send it using the SparkPost API.
In order to send email, you'll need to setup SparkPost by creating an account and confirming you own the domain you'll be sending emails from. You'll also need to create a SparkPost API key and set the environment variable SPARKPOST_API_KEY equal to the value of your API key. This script can be further modified to use a different method for sending email if needed.
This code will contact the CHORDS Portal and collect all the measurement data from the specified instrument, in the specified date range.
Step1: Now the collected data can be viewed simply by issuing the following command
Step2: This code is useful for looking at a specific measurement dataset
Step3: A modified version of the above code will format the data properly for GLOBE Email Data Entry
Step4: To see the data formatted in GLOBE Email Data Entry format, comment out the return data_list command above, uncomment the print command right above it, then issue the following command
Step5: To email the data set to GLOBE's email data entry server, run the following code.
Step6: Finally, this command sends the email | Python Code:
import requests
import json
r = requests.get('http://3d-kenya.chordsrt.com/instruments/2.geojson?start=2017-03-01T00:00&end=2017-05-01T00:00')
if r.status_code == 200:
d = r.json()['Data']
else:
print("Please verify that the URL for the weather station is correct. You may just have to try again with a different/smaller date range or different dates.")
Explanation: Py-EMDE
Python Email Data Entry
The following code can gather data from weather stations reporting to the CHORDS portal, package it up into the proper format for GLOBE Email Data Entry , and send it using the SparkPost API.
In order to send email, you'll need to setup SparkPost by creating an account and confirming you own the domain you'll be sending emails from. You'll also need to create a SparkPost API key and set the environment variable SPARKPOST_API_KEY equal to the value of your API key. This script can be further modified to use a different method for sending email if needed.
This code will contact the CHORDS Portal and collect all the measurement data from the specified instrument, in the specified date range.
End of explanation
d
Explanation: Now the collected data can be viewed simply by issuing the following command
End of explanation
for o in d:
if o['variable_shortname'] == 'msl1':
print(o['time'], o['value'], o['units'])
Explanation: This code is useful for looking at a specific measurement dataset
End of explanation
davad_tuple = (
'f1',
'f2',
'f3',
'f4',
'f5',
'f6',
'f7',
'f8',
'f9',
'f10',
'f11',
'f12',
'f13',
'f14',
)
def make_data_set(d):
data_list = []
for o in d:
if o['variable_shortname'] == 'msl1':
t = o['time'].split("T")
tdate = t[0].replace('-', '')
ttime = ''.join(t[1].split(':')[:-1])
pressure = o['value']
if ttime.endswith('00') or ttime.endswith('15') or ttime.endswith('30') or ttime.endswith('45'):
davad_tuple = ['DAVAD', 'GLID4TT4', 'SITE_ID:45013']+['X']*11
davad_tuple[3] = tdate + ttime
davad_tuple[13] = str(pressure)
data_list.append('{}'.format(' '.join(davad_tuple)))
#print('//AA\n{}\n//ZZ'.format('\n'.join(data_list)))
return data_list
Explanation: A modified version of the above code will format the data properly for GLOBE Email Data Entry
End of explanation
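# Side note (not from the original script): the date/time string slicing above can also be done
# with datetime parsing, which fails loudly if the timestamp format ever changes. The exact
# CHORDS timestamp format (trailing 'Z') is assumed here for illustration.
from datetime import datetime
stamp = datetime.strptime('2017-03-01T00:15:00Z', '%Y-%m-%dT%H:%M:%SZ')
print(stamp.strftime('%Y%m%d'), stamp.strftime('%H%M'))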
make_data_set(d)
Explanation: To see the data formatted in GLOBE Email Data Entry format, comment out the return data_list command above, uncomment the print command right above it, then issue the following command
End of explanation
def email_data(data_list):
import os
from sparkpost import SparkPost
FROM_EMAIL = os.getenv('FROM_EMAIL')
BCC_EMAIL = os.getenv('BCC_EMAIL')
# Send email using the SparkPost api
sp = SparkPost() # uses environment variable named SPARKPOST_API_KEY
response = sp.transmission.send(
recipients=['[email protected]'],
bcc=[BCC_EMAIL],
text='//AA\n{}\n//ZZ'.format('\n'.join(data_list)),
from_email=FROM_EMAIL,
subject='DATA'
)
print(response)
Explanation: To email the data set to GLOBE's email data entry server, run the following code.
End of explanation
email_data(make_data_set(d))
Explanation: Finally, this command sends the email
End of explanation |
11,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Experimenting with CV Scores
CVScores displays cross validation scores as a bar chart with the
average of the scores as a horizontal line.
Step2: Classification
Step3: Regression | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
import os
from yellowbrick.download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"bikeshare": os.path.join(FIXTURES, "bikeshare", "bikeshare.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"energy": os.path.join(FIXTURES, "energy", "energy.csv"),
"game": os.path.join(FIXTURES, "game", "game.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"spam": os.path.join(FIXTURES, "spam", "spam.csv"),
}
def load_data(name, download=True):
    """
    Loads and wrangles the passed in dataset by name.
    If download is specified, this method will download any missing files.
    """
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
Explanation: Experimenting with CV Scores
CVScores displays cross validation scores as a bar chart with the
average of the scores as a horizontal line.
End of explanation
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
room = load_data("occupancy")
features = ["temperature", "relative humidity", "light", "C02", "humidity"]
# Extract the numpy arrays from the data frame
X = room[features].values
y = room.occupancy.values
# Create a new figure and axes
_, ax = plt.subplots()
# Create a cross-validation strategy
cv = StratifiedKFold(12)
# Create the cv score visualizer
oz = CVScores(
MultinomialNB(), ax=ax, cv=cv, scoring='f1_weighted'
)
oz.fit(X, y)
oz.show()
Explanation: Classification
End of explanation
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
energy = load_data("energy")
targets = ["heating load", "cooling load"]
features = [col for col in energy.columns if col not in targets]
X = energy[features]
y = energy[targets[1]]
# Create a new figure and axes
_, ax = plt.subplots()
cv = KFold(12)
oz = CVScores(
Ridge(), ax=ax, cv=cv, scoring='r2'
)
oz.fit(X, y)
oz.show()
Explanation: Regression
End of explanation |
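# For comparison (not part of the original example): the same per-fold numbers that CVScores
# draws can be computed directly with scikit-learn, without the visualizer.
from sklearn.model_selection import cross_val_score
scores = cross_val_score(Ridge(), X, y, cv=KFold(12), scoring='r2')
print(scores)
print("mean r2:", scores.mean())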
11,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And we'll attach some dummy datasets. See Datasets for more details.
Step3: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
Step4: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
Step5: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
Step6: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
Step7: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Advanced: Alternate Backends
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('orb', times=np.linspace(0,10,1000), dataset='orb01', component=['primary', 'secondary'])
b.add_dataset('lc', times=np.linspace(0,10,1000), dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
b.add_compute('legacy', compute='legacybackend')
print(b['legacybackend'])
Explanation: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
End of explanation
b.add_compute('phoebe', compute='phoebebackend')
print(b['phoebebackend'])
Explanation: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.run_compute('legacybackend', model='legacyresults')
Explanation: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
End of explanation
b.set_value_all('enabled@lc01@phoebebackend', False)
#b.set_value_all('enabled@orb01@legacybackend', False) # don't need this since legacy NEVER computes orbits
print(b['enabled'])
b.run_compute(['phoebebackend', 'legacybackend'], model='mixedresults')
Explanation: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
End of explanation
print(b['mixedresults'].computes)
b['mixedresults@phoebebackend'].datasets
b['mixedresults@legacybackend'].datasets
Explanation: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them.
End of explanation |
11,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 18
Step1: Next we create a network to implement the policy. We begin with two convolutional layers to process
the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game
logic. We also add a small Gated Recurrent Unit. That gives the network a little bit of memory, so
it can keep track of which way the ball is moving.
We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the
network's outputs. One computes the action probabilities, and the other computes an estimate of the
state value function.
We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm.
Step2: We will optimize the policy using the Asynchronous Advantage Actor Critic (A3C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
Step3: Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
Step4: Let's watch it play and see how it does! | Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
!pip install 'gym[atari]'
import deepchem as dc
import numpy as np
class PongEnv(dc.rl.GymEnvironment):
def __init__(self):
super(PongEnv, self).__init__('Pong-v0')
self._state_shape = (80, 80)
@property
def state(self):
# Crop everything outside the play area, reduce the image size,
# and convert it to black and white.
cropped = np.array(self._state)[34:194, :, :]
reduced = cropped[0:-1:2, 0:-1:2]
grayscale = np.sum(reduced, axis=2)
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1
return bw
def __deepcopy__(self, memo):
return PongEnv()
env = PongEnv()
Explanation: Tutorial Part 18: Using Reinforcement Learning to Play Pong
This notebook demonstrates using reinforcement learning to train an agent to play Pong.
The first step is to create an Environment that implements this task. Fortunately,
OpenAI Gym already provides an implementation of Pong (and many other tasks appropriate
for reinforcement learning). DeepChem's GymEnvironment class provides an easy way to
use environments from OpenAI Gym. We could just use it directly, but in this case we
subclass it and preprocess the screen image a little bit to make learning easier.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. To install gym you should also use pip install 'gym[atari]' (We need the extra modifier since we'll be using an atari game). We'll add this command onto our usual Colab installation commands for you
End of explanation
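# Quick illustration (not from the original tutorial) of the preprocessing used in PongEnv.state:
# slicing with a step of 2 halves the resolution, and thresholding against the background value
# used above (233) gives a black-and-white image. A random dummy frame stands in for a real screen.
import numpy as np
dummy_frame = np.random.randint(0, 255, size=(210, 160, 3))
cropped = dummy_frame[34:194, :, :]      # keep only the play area (160 rows)
reduced = cropped[0:-1:2, 0:-1:2]        # every second row and column
grayscale = np.sum(reduced, axis=2)      # collapse the color channels
bw = (grayscale != 233).astype(float)    # 233 is the summed background value used in the code above
print(cropped.shape, reduced.shape, bw.shape)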
import tensorflow as tf
from tensorflow.keras.layers import Input, Concatenate, Conv2D, Dense, Flatten, GRU, Reshape
class PongPolicy(dc.rl.Policy):
def __init__(self):
super(PongPolicy, self).__init__(['action_prob', 'value', 'rnn_state'], [np.zeros(16)])
def create_model(self, **kwargs):
state = Input(shape=(80, 80))
rnn_state = Input(shape=(16,))
conv1 = Conv2D(16, kernel_size=8, strides=4, activation=tf.nn.relu)(Reshape((80, 80, 1))(state))
conv2 = Conv2D(32, kernel_size=4, strides=2, activation=tf.nn.relu)(conv1)
dense = Dense(256, activation=tf.nn.relu)(Flatten()(conv2))
gru, rnn_final_state = GRU(16, return_state=True, return_sequences=True)(
Reshape((-1, 256))(dense), initial_state=rnn_state)
concat = Concatenate()([dense, Reshape((16,))(gru)])
action_prob = Dense(env.n_actions, activation=tf.nn.softmax)(concat)
value = Dense(1)(concat)
return tf.keras.Model(inputs=[state, rnn_state], outputs=[action_prob, value, rnn_final_state])
policy = PongPolicy()
Explanation: Next we create a network to implement the policy. We begin with two convolutional layers to process
the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game
logic. We also add a small Gated Recurrent Unit. That gives the network a little bit of memory, so
it can keep track of which way the ball is moving.
We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the
network's outputs. One computes the action probabilities, and the other computes an estimate of the
state value function.
We also provide an input for the initial state of the GRU, and return its final state at the end. This is required by the learning algorithm.
End of explanation
from deepchem.models.optimizers import Adam
a3c = dc.rl.A3C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
Explanation: We will optimize the policy using the Asynchronous Advantage Actor Critic (A3C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
End of explanation
# Change this to train as many steps as you have patience for.
a3c.fit(1000)
Explanation: Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
End of explanation
# This code doesn't work well on Colab
env.reset()
while not env.terminated:
env.env.render()
env.step(a3c.select_action(env.state))
Explanation: Let's watch it play and see how it does!
End of explanation |
11,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TUTORIAL 04 - Graetz problem 1
Keywords
Step1: 3. Affine decomposition
In order to obtain an affine decomposition, we proceed as in the previous tutorial and recast the problem on a fixed, parameter independent, reference domain $\Omega$. As reference domain we choose the one characterized by $\mu_0 = 1$, which we generate through the generate_mesh notebook provided in the data folder.
As in the previous tutorial, we pull back the problem to the reference domain $\Omega$.
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_1.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the Graetz class
Step5: 4.4. Prepare reduction with a reduced basis method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: TUTORIAL 04 - Graetz problem 1
Keywords: successive constraints method
1. Introduction
This Tutorial addresses geometrical parametrization and the successive constraints method (SCM). In particular, we will solve the Graetz problem, which deals with forced heat convection in a channel $\Omega_o(\mu_0)$ divided into two parts $\Omega_o^1$ and $\Omega_o^2(\mu_0)$, as in the following picture:
<img src="data/graetz_1.png" width="70%"/>
Boundaries $\Gamma_{o, 1} \cup \Gamma_{o, 5} \cup \Gamma_{o, 6}$ are kept at low temperature (say, zero), while boundaries $\Gamma_{o, 2}(\mu_0) \cup \Gamma_{o, 4}(\mu_0)$ are kept at high temperature (say, one). The convection is characterized by the velocity $\boldsymbol{\beta} = (x_1(1-x_1), 0)$, being $\boldsymbol{x}_o = (x_{o, 0}, x_1)$ the coordinate vector on the parametrized domain $\Omega_o(\mu_0)$.
The problem is characterized by two parameters. The first parameter $\mu_0$ controls the shape of deformable subdomain $\Omega_2(\mu_0)$. The heat transfer between the domains can be taken into account by means of the Péclet number, which will be labeled as the parameter $\mu_1$. The ranges of the two parameters are the following:
$$\mu_0 \in [0.1,10.0] \quad \text{and} \quad \mu_1 \in [0.01,10.0].$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0, \mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.1,10.0]\times[0.01,10.0].
$$
In order to obtain a faster (yet, provably accurate) approximation of the problem, and avoiding any remeshing, we pursue a model reduction by means of a certified reduced basis reduced order method from a fixed reference domain.
The successive constraints method will be used to evaluate the stability factors.
2. Parametrized formulation
Let $u_o(\boldsymbol{\mu})$ be the temperature in the domain $\Omega_o(\mu_0)$.
We will directly provide a weak formulation for this problem
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u_o(\boldsymbol{\mu})\in\mathbb{V}_o(\boldsymbol{\mu})$ such that</center>
$$a_o\left(u_o(\boldsymbol{\mu}),v_o;\boldsymbol{\mu}\right)=f_o(v_o;\boldsymbol{\mu})\quad \forall v_o\in\mathbb{V}_o(\boldsymbol{\mu})$$
where
the function space $\mathbb{V}_o(\boldsymbol{\mu})$ is defined as
$$
\mathbb{V}_o(\mu_0) = \left\{ v \in H^1(\Omega_o(\mu_0)): v|_{\Gamma_{o,1} \cup \Gamma_{o,5} \cup \Gamma_{o,6}} = 0, v|_{\Gamma_{o,2}(\mu_0) \cup \Gamma_{o,4}(\mu_0)} = 1 \right\}
$$
Note that, as in the previous tutorial, the function space is parameter dependent due to the shape variation.
the parametrized bilinear form $a_o(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V}_o(\boldsymbol{\mu}) \times \mathbb{V}_o(\boldsymbol{\mu}) \to \mathbb{R}$ is defined by
$$a_o(u_o,v_o;\boldsymbol{\mu}) = \mu_1\int_{\Omega_o(\mu_0)} \nabla u_o \cdot \nabla v_o \ d\boldsymbol{x} + \int_{\Omega_o(\mu_0)} x_1(1-x_1) \partial_{x} u_o\ v_o \ d\boldsymbol{x},$$
the parametrized linear form $f_o(\cdot; \boldsymbol{\mu}): \mathbb{V}_o(\boldsymbol{\mu}) \to \mathbb{R}$ is defined by
$$f_o(v_o;\boldsymbol{\mu}) = 0.$$
The successive constraints method will be used to compute the stability factor of the bilinear form $a_o(\cdot, \cdot; \boldsymbol{\mu})$.
End of explanation
@SCM()
@PullBackFormsToReferenceDomain()
@ShapeParametrization(
("x[0]", "x[1]"), # subdomain 1
("mu[0]*(x[0] - 1) + 1", "x[1]"), # subdomain 2
)
class Graetz(EllipticCoerciveProblem):
# Default initialization of members
@generate_function_space_for_stability_factor
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.ds = Measure("ds")(subdomain_data=boundaries)
# Store the velocity expression
self.vel = Expression("x[1]*(1-x[1])", element=self.V.ufl_element())
# Customize eigen solver parameters
self._eigen_solver_parameters.update({
"bounding_box_minimum": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e-5, "linear_solver": "mumps"
},
"bounding_box_maximum": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e5, "linear_solver": "mumps"
},
"stability_factor": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e-5, "linear_solver": "mumps"
}
})
# Return custom problem name
def name(self):
return "Graetz1"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_stability_factor
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = mu[1]
theta_a1 = 1.0
return (theta_a0, theta_a1)
elif term == "f":
theta_f0 = 1.0
return (theta_f0,)
elif term == "dirichlet_bc":
theta_bc0 = 1.0
return (theta_bc0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_stability_factor
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
vel = self.vel
a0 = inner(grad(u), grad(v)) * dx
a1 = vel * u.dx(0) * v * dx
return (a0, a1)
elif term == "f":
f0 = Constant(0.0) * v * dx
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(1.0), self.boundaries, 2),
DirichletBC(self.V, Constant(1.0), self.boundaries, 4),
DirichletBC(self.V, Constant(0.0), self.boundaries, 5),
DirichletBC(self.V, Constant(0.0), self.boundaries, 6)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
Explanation: 3. Affine decomposition
In order to obtain an affine decomposition, we proceed as in the previous tutorial and recast the problem on a fixed, parameter independent, reference domain $\Omega$. As reference domain we choose the one characterized by $\mu_0 = 1$, which we generate through the generate_mesh notebook provided in the data folder.
As in the previous tutorial, we pull back the problem to the reference domain $\Omega$.
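Concretely, reading compute_theta and assemble_operator in the code above, the bilinear form is encoded in the standard affine structure
$$a(u, v; \boldsymbol{\mu}) = \sum_{q=1}^{2} \Theta^{q}_{a}(\boldsymbol{\mu})\, a_q(u, v), \qquad \Theta^{1}_{a}(\boldsymbol{\mu}) = \mu_1, \quad a_1(u, v) = \int_{\Omega} \nabla u \cdot \nabla v \, d\boldsymbol{x}, \qquad \Theta^{2}_{a}(\boldsymbol{\mu}) = 1, \quad a_2(u, v) = \int_{\Omega} x_1(1 - x_1)\, \partial_x u\, v \, d\boldsymbol{x},$$
with the pull-back to the reference domain handled automatically by the PullBackFormsToReferenceDomain decorator.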
End of explanation
mesh = Mesh("data/graetz_1.xml")
subdomains = MeshFunction("size_t", mesh, "data/graetz_physical_region_1.xml")
boundaries = MeshFunction("size_t", mesh, "data/graetz_facet_region_1.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_1.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = Graetz(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.1, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the Graetz class
End of explanation
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(30, SCM=20)
reduction_method.set_tolerance(1e-5, SCM=1e-3)
Explanation: 4.4. Prepare reduction with a reduced basis method
End of explanation
lifting_mu = (1.0, 1.0)
problem.set_mu(lifting_mu)
reduction_method.initialize_training_set(200, SCM=250)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (10.0, 0.01)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(100, SCM=100)
reduction_method.error_analysis(filename="error_analysis")
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis(filename="speedup_analysis")
Explanation: 4.8. Perform a speedup analysis
End of explanation |
11,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: For the sake of visualizing values at nodes on our grid, we'll define a handy little function
Step2: Let's review the numbering of nodes and links. The lines below will print a list that shows, for each link ID, the IDs of the nodes at the link's tail and head
Step3: Finding the mean value between two nodes on a link
Suppose we want to have a link-based array, called h_edge, that contains water depth at locations between adjacent pairs of nodes. For each link, we'll simply take the average of the depth at the link's two nodes. To accomplish this, we can use the map_mean_of_link_nodes_to_link grid method. At link 8, for example, we'll average the h values at nodes 5 and 6, which should give us a depth of (6 + 7) / 2 = 6.5
Step4: What's in a name?
The mapping functions have long names, which are designed to make it as clear as possible to understand what each function does. All the mappers start with the verb map. Then the relationship is given; in this case, we are looking at the mean. Then the elements from which a quantity is being mapped
Step5: ... or the maximum
Step6: Upwind and downwind
Numerical schemes often use upwind differencing or downwind differencing. For example, finite difference schemes for equations that include advection may use "upwind" rather than centered differences, in which a scalar quantity (our h for example) is taken from whichever side is upstream in the flow field.
How do we know the flow direction? If the flow is driven by the gradient in some scalar field, such as pressure or elevation, one approach is to look at the values of this scalar on either end of the link
Step7: The water-surface elevation is then the sum of h and z
Step8: For every link, we can assign the value of h from whichever end of the link has the greater w
Step9: Consider the middle two nodes (5 and 6). Node 6 is higher (22 versus 20). Therefore, the link between them (link 8) should be assigned the value of h at node 6. This value happens to be 7.0.
Of course, we could also take the value from the lower of the two nodes, which gives link 8 a value of 6.0
Step10: Heads or tails?
It is also possible to map the scalar quantity at either the head node or the tail node to the link
Step11: Simple example using centered water depth
The following implements one time-step of a linear-viscous flow model, in which flow velocity is calculated at the links, and the depth at each link is taken as the mean of depth at the two bounding nodes. To make the flow a little tamer, we'll have our fluid be hot, low viscosity basaltic lava instead of water, with a dynamic viscosity of 100 Pa s.
Step12: I'm not sure I love the idea of a 5-m thick lava flow moving at over 100 m/s! (I guess we can take some comfort from the thought that turbulence would probably slow it down)
How different would the numerical solution be using an upwind scheme for flow depth? Let's find out | Python Code:
from landlab import RasterModelGrid
import numpy as np
mg = RasterModelGrid((3, 4), xy_spacing=100.0)
h = mg.add_zeros('surface_water__depth', at='node')
h[:] = 7 - np.abs(6 - np.arange(12))
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Mapping values between grid elements
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
Imagine that you're using Landlab to write a model of shallow water flow over terrain. A natural approach is to place your scalar fields, such as water depth, at the nodes. You then place your vector fields, such as water surface gradient, flow velocity, and discharge, at the links. But your velocity depends on both slope and depth, which means you need to know the depth at the links too. How do you do this?
This tutorial introduces mappers: grid functions that map quantities defined on one set of elements (such as nodes) onto another set of elements (such as links). As you'll see, there are a variety of mappers available.
Mapping from nodes to links
For the sake of example, we'll start with a simple 3-row by 4-column raster grid. The grid will contain a scalar field called water__depth, abbreviated h. We'll populate it with some example values, as follows:
End of explanation
def show_node_values(mg, u):
for r in range(mg.number_of_node_rows - 1, -1, -1):
for c in range(mg.number_of_node_columns):
print(int(u[c + (mg.number_of_node_columns * r)]), end=' ')
print()
show_node_values(mg, h)
Explanation: For the sake of visualizing values at nodes on our grid, we'll define a handy little function:
End of explanation
for i in range(mg.number_of_links):
print(i, mg.node_at_link_tail[i], mg.node_at_link_head[i])
Explanation: Let's review the numbering of nodes and links. The lines below will print a list that shows, for each link ID, the IDs of the nodes at the link's tail and head:
End of explanation
h_edge = mg.map_mean_of_link_nodes_to_link('surface_water__depth')
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: Finding the mean value between two nodes on a link
Suppose we want to have a link-based array, called h_edge, that contains water depth at locations between adjacent pairs of nodes. For each link, we'll simply take the average of the depth at the link's two nodes. To accomplish this, we can use the map_mean_of_link_nodes_to_link grid method. At link 8, for example, we'll average the h values at nodes 5 and 6, which should give us a depth of (6 + 7) / 2 = 6.5:
End of explanation
h_edge = mg.map_min_of_link_nodes_to_link('surface_water__depth')
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: What's in a name?
The mapping functions have long names, which are designed to make it as clear as possible to understand what each function does. All the mappers start with the verb map. Then the relationship is given; in this case, we are looking at the mean. Then the elements from which a quantity is being mapped: we are taking values from link nodes. Finally, the element to which the new values apply: link.
Mapping minimum or maximum values
We can also map the minimum value of h:
End of explanation
h_edge = mg.map_max_of_link_nodes_to_link('surface_water__depth')
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: ... or the maximum:
End of explanation
z = mg.add_zeros('topographic__elevation', at='node')
z[:] = 16 - np.abs(7 - np.arange(12))
show_node_values(mg, z)
Explanation: Upwind and downwind
Numerical schemes often use upwind differencing or downwind differencing. For example, finite difference schemes for equations that include advection may use "upwind" rather than centered differences, in which a scalar quantity (our h for example) is taken from whichever side is upstream in the flow field.
How do we know the flow direction? If the flow is driven by the gradient in some scalar field, such as pressure or elevation, one approach is to look at the values of this scalar on either end of the link: the end with the higher value is upwind, and the end with the lower value is downwind.
Suppose for example that our water flow is driven by the water-surface slope (which is often a good approximation for the energy slope, though it omits the kinetic energy). Let's define a bed-surface elevation field z:
End of explanation
w = z + h
show_node_values(mg, w)
Explanation: The water-surface elevation is then the sum of h and z:
End of explanation
h_edge = mg.map_value_at_max_node_to_link(w, h)
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: For every link, we can assign the value of h from whichever end of the link has the greater w:
End of explanation
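# What the mapper does under the hood (extra sketch, not from the original tutorial): pick, for
# each link, the h value from whichever end node has the larger w.
manual_upwind_h = np.where(w[mg.node_at_link_tail] >= w[mg.node_at_link_head],
                           h[mg.node_at_link_tail],
                           h[mg.node_at_link_head])
print(manual_upwind_h)  # ties (equal w at both ends) may be broken differently than the mapper does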
h_edge = mg.map_value_at_min_node_to_link(w, h)
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: Consider the middle two nodes (5 and 6). Node 6 is higher (22 versus 20). Therefore, the link between them (link 8) should be assigned the value of h at node 6. This value happens to be 7.0.
Of course, we could also take the value from the lower of the two nodes, which gives link 8 a value of 6.0:
End of explanation
h_edge = mg.map_link_head_node_to_link('surface_water__depth')
for i in range(mg.number_of_links):
print(i, h_edge[i])
h_edge = mg.map_link_tail_node_to_link('surface_water__depth')
for i in range(mg.number_of_links):
print(i, h_edge[i])
Explanation: Heads or tails?
It is also possible to map the scalar quantity at either the head node or the tail node to the link:
End of explanation
gamma = 25000.0  # unit weight of fluid, N/m3
viscosity = 100.0 # dynamic viscosity in Pa s
grad = mg.calc_grad_at_link(w)
h_edge = mg.map_mean_of_link_nodes_to_link(h)
vel = -(gamma / (3.0 * viscosity)) * h_edge * h_edge * grad
for ln in range(mg.number_of_links):
print(ln, h_edge[ln], grad[ln], vel[ln])
Explanation: Simple example using centered water depth
The following implements one time-step of a linear-viscous flow model, in which flow velocity is calculated at the links, and the depth at each link is taken as the mean of depth at the two bounding nodes. To make the flow a little tamer, we'll have our fluid be hot, low viscosity basaltic lava instead of water, with a dynamic viscosity of 100 Pa s.
End of explanation
h_edge = mg.map_value_at_max_node_to_link(w, h)
vel = -(gamma / (3.0 * viscosity)) * h_edge * h_edge * grad
for ln in range(mg.number_of_links):
print(ln, h_edge[ln], grad[ln], vel[ln])
Explanation: I'm not sure I love the idea of a 5-m thick lava flow moving at over 100 m/s! (I guess we can take some comfort from the thought that turbulence would probably slow it down)
How different would the numerical solution be using an upwind scheme for flow depth? Let's find out:
End of explanation |
11,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Strings-and-Text" data-toc-modified-id="Strings-and-Text-1"><span class="toc-item-num">1 </span>Strings and Text</a></span><ul class="toc-item"><li><span><a href="#Splitting-Strings-on-Any-of-Multiple-Delimiters-Using-re.split" data-toc-modified-id="Splitting-Strings-on-Any-of-Multiple-Delimiters-Using-re.split-1.1"><span class="toc-item-num">1.1 </span>Splitting Strings on Any of Multiple Delimiters Using re.split</a></span></li><li><span><a href="#Matching-Text-at-the-Start-or-End-of-a-String" data-toc-modified-id="Matching-Text-at-the-Start-or-End-of-a-String-1.2"><span class="toc-item-num">1.2 </span>Matching Text at the Start or End of a String</a></span></li><li><span><a href="#Wildcard-Patterns-Way-of-Matching-Strings-Using-fnmatchcase" data-toc-modified-id="Wildcard-Patterns-Way-of-Matching-Strings-Using-fnmatchcase-1.3"><span class="toc-item-num">1.3 </span>Wildcard Patterns Way of Matching Strings Using fnmatchcase</a></span></li><li><span><a href="#Matching-and-Searching-for-Text-Patterns" data-toc-modified-id="Matching-and-Searching-for-Text-Patterns-1.4"><span class="toc-item-num">1.4 </span>Matching and Searching for Text Patterns</a></span></li><li><span><a href="#Searching-and-Replacing-Text" data-toc-modified-id="Searching-and-Replacing-Text-1.5"><span class="toc-item-num">1.5 </span>Searching and Replacing Text</a></span></li><li><span><a href="#Stripping-Unwanted-Characters-from-Strings-Using-strip" data-toc-modified-id="Stripping-Unwanted-Characters-from-Strings-Using-strip-1.6"><span class="toc-item-num">1.6 </span>Stripping Unwanted Characters from Strings Using strip</a></span></li><li><span><a href="#Character-to-Character-Mapping-Using-translate." data-toc-modified-id="Character-to-Character-Mapping-Using-translate.-1.7"><span class="toc-item-num">1.7 </span>Character to Character Mapping Using translate.</a></span></li><li><span><a href="#Combining-and-Concatenating-Strings" data-toc-modified-id="Combining-and-Concatenating-Strings-1.8"><span class="toc-item-num">1.8 </span>Combining and Concatenating Strings</a></span></li><li><span><a href="#String-Formatting" data-toc-modified-id="String-Formatting-1.9"><span class="toc-item-num">1.9 </span>String Formatting</a></span></li><li><span><a href="#Reformatting-Text-to-a-Fixed-Number-of-Columns-Using-textwrap" data-toc-modified-id="Reformatting-Text-to-a-Fixed-Number-of-Columns-Using-textwrap-1.10"><span class="toc-item-num">1.10 </span>Reformatting Text to a Fixed Number of Columns Using textwrap</a></span></li></ul></li></ul></div>
Step1: Strings and Text
Some of the materials are a condensed reimplementation from the resource: Python3 Cookbook, Chapter 2, Strings and Text.
Step2: Matching Text at the Start or End of a String
Use the str.startswith() or str.endswith().
Step3: Wildcard Patterns Way of Matching Strings Using fnmatchcase
Step4: Matching and Searching for Text Patterns
Example1
Step5: Example2
Step6: Example3
Step7: Example4
Step8: Searching and Replacing Text
Example1
Step9: Example2
Step10: Example3
Step11: Example4
Step12: Example5
Step14: Stripping Unwanted Characters from Strings Using strip
For unwanted characters in the beginning and end of the string, use str.strip(). And there's str.lstrip() and str.rstrip() for left and right stripping.
Step15: Character to Character Mapping Using translate.
Boiler plate
Step16: Combining and Concatenating Strings
Example1
Step17: Example2
Step18: String Formatting
Step19: Reformatting Text to a Fixed Number of Columns Using textwrap | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# magic to print version
%load_ext watermark
%watermark -a 'Ethen' -d -t -v
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Strings-and-Text" data-toc-modified-id="Strings-and-Text-1"><span class="toc-item-num">1 </span>Strings and Text</a></span><ul class="toc-item"><li><span><a href="#Splitting-Strings-on-Any-of-Multiple-Delimiters-Using-re.split" data-toc-modified-id="Splitting-Strings-on-Any-of-Multiple-Delimiters-Using-re.split-1.1"><span class="toc-item-num">1.1 </span>Splitting Strings on Any of Multiple Delimiters Using re.split</a></span></li><li><span><a href="#Matching-Text-at-the-Start-or-End-of-a-String" data-toc-modified-id="Matching-Text-at-the-Start-or-End-of-a-String-1.2"><span class="toc-item-num">1.2 </span>Matching Text at the Start or End of a String</a></span></li><li><span><a href="#Wildcard-Patterns-Way-of-Matching-Strings-Using-fnmatchcase" data-toc-modified-id="Wildcard-Patterns-Way-of-Matching-Strings-Using-fnmatchcase-1.3"><span class="toc-item-num">1.3 </span>Wildcard Patterns Way of Matching Strings Using fnmatchcase</a></span></li><li><span><a href="#Matching-and-Searching-for-Text-Patterns" data-toc-modified-id="Matching-and-Searching-for-Text-Patterns-1.4"><span class="toc-item-num">1.4 </span>Matching and Searching for Text Patterns</a></span></li><li><span><a href="#Searching-and-Replacing-Text" data-toc-modified-id="Searching-and-Replacing-Text-1.5"><span class="toc-item-num">1.5 </span>Searching and Replacing Text</a></span></li><li><span><a href="#Stripping-Unwanted-Characters-from-Strings-Using-strip" data-toc-modified-id="Stripping-Unwanted-Characters-from-Strings-Using-strip-1.6"><span class="toc-item-num">1.6 </span>Stripping Unwanted Characters from Strings Using strip</a></span></li><li><span><a href="#Character-to-Character-Mapping-Using-translate." data-toc-modified-id="Character-to-Character-Mapping-Using-translate.-1.7"><span class="toc-item-num">1.7 </span>Character to Character Mapping Using translate.</a></span></li><li><span><a href="#Combining-and-Concatenating-Strings" data-toc-modified-id="Combining-and-Concatenating-Strings-1.8"><span class="toc-item-num">1.8 </span>Combining and Concatenating Strings</a></span></li><li><span><a href="#String-Formatting" data-toc-modified-id="String-Formatting-1.9"><span class="toc-item-num">1.9 </span>String Formatting</a></span></li><li><span><a href="#Reformatting-Text-to-a-Fixed-Number-of-Columns-Using-textwrap" data-toc-modified-id="Reformatting-Text-to-a-Fixed-Number-of-Columns-Using-textwrap-1.10"><span class="toc-item-num">1.10 </span>Reformatting Text to a Fixed Number of Columns Using textwrap</a></span></li></ul></li></ul></div>
End of explanation
import re
line = 'asdf fjdk; afed, fjek,asdf, foo'
re.split(r'[;,\s]\s*', line)
Explanation: Strings and Text
Some of the materials are a condensed reimplementation from the resource: Python3 Cookbook Chapter 2. Strings and Text, which originally was freely available online.
Splitting Strings on Any of Multiple Delimiters Using re.split
The separator is either a semicolon (;), a comma (,), a whitespace ( ) or multiple whitespace.
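If you also need to keep the separators (for example, to reassemble the string later), a capture group in the pattern returns them as well -- a quick sketch reusing the same line as above:
fields = re.split(r'(;|,|\s)\s*', line)  # parentheses capture the matched separator too
print(fields[::2])   # the tokens
print(fields[1::2])  # the separators that were found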
End of explanation
filenames = ['Makefile', 'foo.c', 'bar.py', 'spam.c', 'spam.h']
# pass in a tuple for multiple match, must be tuple, list won't work
print([name for name in filenames if name.endswith(('.c', '.h'))])
print(any(name.endswith('.py') for name in filenames))
Explanation: Matching Text at the Start or End of a String
Use the str.startswith() or str.endswith().
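One gotcha worth a tiny sketch: endswith() and startswith() only accept a tuple of alternatives, so a list has to be converted first:
choices = ['.c', '.h']
print([name for name in filenames if name.endswith(tuple(choices))])  # tuple(), not the raw list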
End of explanation
from fnmatch import fnmatchcase
addresses = [
'5412 N CLARK ST',
'1060 W ADDISON ST',
'1039 W GRANVILLE AVE',
'2122 N CLARK st',
'4802 N BROADWAY']
[addr for addr in addresses if fnmatchcase(addr, '* ST')]
Explanation: Wildcard Patterns Way of Matching Strings Using fnmatchcase
End of explanation
text = 'yeah, but no, but yeah, but no, but yeah'
text.find('no')
Explanation: Matching and Searching for Text Patterns
Example1: Finding the position of a simple first match using str.find().
End of explanation
import re
text1 = '11/27/2012'
text2 = 'Nov 27, 2012'
# Simple matching: \d+ means match one or more digits
# the 'r' simply means raw strings, this leaves the backslash (\)
# uninterpreted, or else you'll have to use \\ to match special characters
if re.match(r'\d+/\d+/\d+', text1):
print('yes')
else:
print('no')
if re.match(r'\d+/\d+/\d+', text2):
print('yes')
else:
print('no')
# the re.compile version
datepat = re.compile(r'\d+/\d+/\d+')
if datepat.match(text1):
print('yes')
else:
print('no')
if datepat.match(text2):
print('yes')
else:
print('no')
Explanation: Example2: Match a lot of the same complex pattern, it's better to precompile the regular expression pattern first using re.compile().
End of explanation
text = 'Today is 11/27/2012. PyCon starts 3/13/2013.'
datepat.findall(text)
Explanation: Example3: Find all occurences in the text instead of just the first one with findall().
End of explanation
# single match
datepat = re.compile(r'(\d+)/(\d+)/(\d+)')
m = datepat.match('11/27/2012')
print(m.groups())
print(m.group(1))
# mutiple match
text = 'Today is 11/27/2012. PyCon starts 3/13/2013.'
print(datepat.findall(text))
print(re.findall(r'(\d+)/(\d+)/(\d+)', text)) # for matching just once
for month, day, year in datepat.findall(text):
print('{}-{}-{}'.format(year, month, day))
# return a iterator instead of a list
for m in datepat.finditer(text):
print(m.groups())
Explanation: Example4: Capture groups by enclosing the pattern in parathensis.
End of explanation
text = 'yeah, but no, but yeah, but no, but yeah'
text.replace('yeah', 'yep')
Explanation: Searching and Replacing Text
Example1: Finding the position of a simple first match using str.replace().
End of explanation
import re
# replace date from d/m/Y to Y-m-d
text = 'Today is 11/27/2012. PyCon starts 3/13/2013.'
re.sub(r'(\d+)/(\d+)/(\d+)', r'\3-\1-\2', text)
Explanation: Example2: More complex replace using re.sub(). The backslashed digits refer to the matched groups.
End of explanation
import re
from calendar import month_abbr
def change_date(m):
# place in the matched pattern and return the replaced text
mon_name = month_abbr[ int(m.group(1)) ]
return '{} {} {}'.format(m.group(2), mon_name, m.group(3))
datepat = re.compile(r'(\d+)/(\d+)/(\d+)')
datepat.sub(change_date, text)
Explanation: Example3: Define a function for the substitution.
End of explanation
newtext, n = datepat.subn(r'\3-\1-\2', text)
print(newtext)
print(n)
Explanation: Example4: Use .subn() to replace and return the number of substitution made.
End of explanation
text = 'UPPER PYTHON, lower python, Mixed Python'
re.findall('python', text, flags = re.IGNORECASE)
Explanation: Example5: supply the re.IGNORECASE flag if you want to ignore cases.
End of explanation
# white space stripping
s = ' hello world \n'
print(s.strip())
# character stripping
t = '-----hello world====='
print(t.strip('-='))
# note: 'filename' is assumed to point to an existing text file
with open(filename) as f:
lines = (line.strip() for line in f)
for line in lines:
print('Generator Expression can be useful when you want to perform other operations after stripping')
Explanation: Stripping Unwanted Characters from Strings Using strip
For unwanted characters in the beginning and end of the string, use str.strip(). And there's str.lstrip() and str.rstrip() for left and right stripping.
End of explanation
intab = 'aeiou'
outtab = '12345'
# maps the character a > 1, e > 2
trantab = str.maketrans(intab, outtab)
s = 'this is string example....wow!!!'  # avoid shadowing the built-in str
print(s.translate(trantab))
Explanation: Character to Character Mapping Using translate.
Boilerplate: The method str.translate() returns a copy of the string in which all characters have been translated using a table built beforehand with the str.maketrans() function.
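As a small extra sketch (not part of the original recipe), the same machinery can also remap or delete characters by passing a dict to str.maketrans, with None meaning delete:
remap = str.maketrans({'\t': ' ', '\r': None})  # tab -> space, carriage return -> removed
print('hello\tworld\r'.translate(remap))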
End of explanation
parts = ['Is', 'Chicago', 'Not', 'Chicago?']
print(' '.join(parts))
Explanation: Combining and Concatenating Strings
Example1: Use .join() when the strings you wish to combine are in a sequence.
End of explanation
a = 'Is Chicago'
b = 'Not Chicago?'
print(a + ' ' + b)
print(a, b, sep = ' ')
Explanation: Example2: Don't use the + operator when unneccessary.
End of explanation
s = '{name} has {n} messages.'
s.format(name = 'Guido', n = 37)
Explanation: String Formatting
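The same template also works with a plain dictionary via format_map -- a quick sketch reusing the s defined above:
values = {'name': 'Guido', 'n': 37}
print(s.format_map(values))  # same output, driven by a dict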
End of explanation
import os
import textwrap
s = "Look into my eyes, look into my eyes, the eyes, the eyes, \
the eyes, not around the eyes, don't look around the eyes, \
look into my eyes, you're under."
print(textwrap.fill(s, 40))
# if you want to get the text to match the terminal size
print(os.get_terminal_size().columns)
Explanation: Reformatting Text to a Fixed Number of Columns Using textwrap
End of explanation |
11,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example demonstrates one possible way to cluster data sets that are too large to fit into memory using MDTraj and scipy.cluster. The idea for the algorithm is that we'll cluster every N-th frame directly, and then, considering the clusters fixed, "assign" the remaining frames to the nearest cluster. It's not the most sophisticated algorithm, but it's a good demonstration of how MDTraj can be integrated with other data science tools.
Step1: Compute the pairwise RMSD between all of the frames. This requires
N^2 memory, which is why we might need to stride.
Step2: Now that we have the distances, we can use out favorite clustering
algorithm. I like ward.
Step3: Now, we need to extract n_leaders random samples from each of the clusters.
One way to do this is by building a map from each of the cluster labels
to the list of the indices of the subsampled confs which belong to it.
Step4: Now our leaders trajectory contains a set of representitive conformations
for each cluster. Here comes the second pass of the two-pass clustering.
Let's now consider these clusters as fixed objects and iterate through
every frame in our data set, assigning each frame to the cluster
it's closest to. We take the simple approach here of computing the distance
from each frame to each leader and assigning the frame to the cluster whose
leader is closest.
Note that this whole algorithm never requires having the entire
dataset in memory at once | Python Code:
from __future__ import print_function
import random
from collections import defaultdict
import mdtraj as md
import numpy as np
import scipy.cluster.hierarchy
stride = 5
subsampled = md.load('ala2.h5', stride=stride)
print(subsampled)
Explanation: This example demonstrates one possible way to cluster data sets that are too large to fit into memory using MDTraj and scipy.cluster. The idea for the algorithm is that we'll cluster every N-th frame directly, and then, considering the clusters fixed, "assign" the remaining frames to the nearest cluster. It's not the most sophisticated algorithm, but it's a good demonstration of how MDTraj can be integrated with other data science tools.
End of explanation
distances = np.empty((subsampled.n_frames, subsampled.n_frames))
for i in range(subsampled.n_frames):
distances[i] = md.rmsd(subsampled, subsampled, i)
Explanation: Compute the pairwise RMSD between all of the frames. This requires
N^2 memory, which is why we might need to stride.
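A rough back-of-the-envelope check of that N^2 cost (a sketch, assuming float64 entries at 8 bytes each):
nbytes = subsampled.n_frames ** 2 * 8
print('distance matrix needs about %.1f MB' % (nbytes / 1e6))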
End of explanation
n_clusters = 3
linkage = scipy.cluster.hierarchy.ward(distances)
labels = scipy.cluster.hierarchy.fcluster(linkage, t=n_clusters, criterion='maxclust')
labels
Explanation: Now that we have the distances, we can use our favorite clustering
algorithm. I like ward.
End of explanation
mapping = defaultdict(list)
for i, label in enumerate(labels):
    mapping[label].append(i)
mapping
# Now we can iterate through the mapping and select n_leaders random
# samples from each list. As we select them, we'll extract the
# conformation and append it to a new trajectory called `leaders`, which
# will start empty.
n_leaders_per_cluster = 2
leaders = md.Trajectory(xyz=np.empty((0, subsampled.n_atoms, 3)),
topology=subsampled.topology)
leader_labels = []
for label, indices in mapping.items():
leaders = leaders.join(subsampled[np.random.choice(indices, n_leaders_per_cluster)])
leader_labels.extend([label] * n_leaders_per_cluster)
print(leaders)
print(leader_labels)
Explanation: Now, we need to extract n_leaders random samples from each of the clusters.
One way to do this is by building a map from each of the cluster labels
to the list of the indices of the subsampled confs which belong to it.
End of explanation
labels = []
for frame in md.iterload('ala2.h5', chunk=1):
labels.append(leader_labels[np.argmin(md.rmsd(leaders, frame, 0))])
labels = np.array(labels)
print(labels)
print(labels.shape)
Explanation: Now our leaders trajectory contains a set of representitive conformations
for each cluster. Here comes the second pass of the two-pass clustering.
Let's now consider these clusters as fixed objects and iterate through
every frame in our data set, assigning each frame to the cluster
it's closest to. We take the simple approach here of computing the distance
from each frame to each leader and assigning the frame to the cluster whose
leader is closest.
Note that this whole algorithm never requires having the entire
dataset in memory at once
End of explanation |
11,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diagonalizing a Matrix
$
\mathbf{A} x_1 = \lambda_1 x_1 \
\mathbf{A} x_2 = \lambda_2 x_2 \
\mathbf{A} \times \begin{vmatrix} x_1 & x_2 \end{vmatrix} = \begin{vmatrix} \lambda_1 x_1 & \lambda_2 x_2 \end{vmatrix} = \begin{vmatrix} x_1 & x_2 \end{vmatrix} \times \begin{vmatrix} \lambda_1 & 0 \ 0 & \lambda_2 \end{vmatrix} \
THEN \
\mathbf{A} \mathbf{V} = \mathbf{V} \mathbf{\Lambda} \
SO \
\mathbf{V}^{-1} \mathbf{A} \mathbf{V} = \mathbf{\Lambda} \
AND \
\mathbf{A} = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1}
$
Powering
$
\mathbf{A}^2 = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \
= \mathbf{V} \mathbf{\Lambda} \mathbf{\Lambda} \mathbf{V}^{-1} \
= \mathbf{V} \mathbf{\Lambda}^2 \mathbf{V}^{-1} \
$
Powering to n
$
\mathbf{A}^n = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} ... \
= \mathbf{V} \mathbf{\Lambda}^n \mathbf{V}^{-1} \
$
Source
https
Step1:
Step2: Building a diagonal eigenvalue matrix
Step3: $p_1 = p_0A$
Step4: $p1 = \mathbf{V} \Lambda \mathbf{V}^{-1} p_0$
Step5: $p2 = p1A$
Step6: $p2 = \mathbf{V} \Lambda^2 \mathbf{V}^{-1} p_0$ | Python Code:
import numpy as np
from scipy.linalg import eig, inv
from diffmaps_util import k, diag, sort_eigens
m = np.array([.8, .2, .5, .5]).reshape(2,2)
m
u0 = np.array([0,1])
for i in range(0,50):
u0 = u0.dot(m)
print(u0)
w, v = eig(m)
print(w.real)
print(v)
v.dot(inv(v).dot(u0))
Explanation: Diagonalizing a Matrix
$
\mathbf{A} x_1 = \lambda_1 x_1 \
\mathbf{A} x_2 = \lambda_2 x_2 \
\mathbf{A} \times \begin{vmatrix} x_1 & x_2 \end{vmatrix} = \begin{vmatrix} \lambda_1 x_1 & \lambda_2 x_2 \end{vmatrix} = \begin{vmatrix} x_1 & x_2 \end{vmatrix} \times \begin{vmatrix} \lambda_1 & 0 \ 0 & \lambda_2 \end{vmatrix} \
THEN \
\mathbf{A} \mathbf{V} = \mathbf{V} \mathbf{\Lambda} \
SO \
\mathbf{V}^{-1} \mathbf{A} \mathbf{V} = \mathbf{\Lambda} \
AND \
\mathbf{A} = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1}
$
Powering
$
\mathbf{A}^2 = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \
= \mathbf{V} \mathbf{\Lambda} \mathbf{\Lambda} \mathbf{V}^{-1} \
= \mathbf{V} \mathbf{\Lambda}^2 \mathbf{V}^{-1} \
$
Powering to n
$
\mathbf{A}^n = \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} \mathbf{V} \mathbf{\Lambda} \mathbf{V}^{-1} ... \
= \mathbf{V} \mathbf{\Lambda}^n \mathbf{V}^{-1} \
$
Source
https://www.youtube.com/watch?v=U8R54zOTVLw
Markov Matrix
$
p_1 = \mathbf{A} p_0, p_2 = \mathbf{A} p_1 \
p_2 = \mathbf{A} \mathbf{A} p_0 \
p_2 = \mathbf{A}^2 p_0 \
p_2 = \mathbf{V} \mathbf{\Lambda}^2 \mathbf{V}^{-1} p_0
$
<p>**If $p_{n+1} = \mathbf{A} p_n$ then $p_{n} = \mathbf{A}^n p_0 = \mathbf{V} \mathbf{\Lambda}^n \mathbf{V}^{-1} p_0$**</p>
Writing p_0 as combination of eigenvectors
$
p_0 = c_1 x_1 + c_2 x_2 ... c_n x_n => \mathbf{V}\mathbf{c} = p_0 => \mathbf{c} = \mathbf{V}^{-1} p_0\
\mathbf{A} p_0 = p_1 = c_1 \lambda_1 x_1 + c_2 \lambda_2 x_2 ... c_k \lambda_k x_k \
\mathbf{A}^n p_0 = p_n = c_1 \lambda_1^n x_1 + c_2 \lambda_2^n x_2 ... c_k \lambda_k^n x_k \
= p_n = \mathbf{V} \mathbf{\Lambda}^n \mathbf{c} \
= \mathbf{V} \mathbf{\Lambda}^n \mathbf{V}^{-1} p_0
$
Source
https://www.youtube.com/watch?v=xtMzTXHO_zA
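A quick numeric check of the powering identity $\mathbf{A}^n = \mathbf{V}\mathbf{\Lambda}^n\mathbf{V}^{-1}$ using the 2x2 matrix above (a sketch, with n=5):
n = 5
lhs = np.linalg.matrix_power(m, n)
rhs = v.dot(np.diag(w.real ** n)).dot(inv(v))
print(np.allclose(lhs, rhs))  # True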
End of explanation
m = np.random.randn(9).reshape(3,3)
L = k(m, .7)
D = diag(L)
m = inv(D).dot(L)
print(m)
w, v = eig(m)
w = w.real
print(w)
print(v)
p0 = np.eye(len(m))
Explanation:
End of explanation
lmbda = np.zeros((3,3))
np.fill_diagonal(lmbda, w)
Explanation: Building a diagonal eigenvalue matrix
End of explanation
p1 = p0.dot(m)
p1
Explanation: $p_1 = p_0A$
End of explanation
v.dot(lmbda).dot(inv(v)).dot(p0)
Explanation: $p1 = \mathbf{V} \Lambda \mathbf{V}^{-1} p_0$
End of explanation
p2 = p1.dot(m)
p2
Explanation: $p2 = p1A$
End of explanation
v.dot(lmbda ** 2).dot(inv(v)).dot(p0)
Explanation: $p2 = \mathbf{V} \Lambda^2 \mathbf{V}^{-1} p_0$
End of explanation |
11,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regularization
Step1: We have referred to regularization in earlier sections, but we
want to develop this important idea more fully. Regularization is
the mechanism by which we navigate the bias/variance trade-off.
To get started, let's consider a classic constrained least
squares problem,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_2^2 \
& \text{subject to
Step2: Programming Tip.
Using the Matrix object is overkill for this problem but it
does demonstrate how Sympy's matrix machinery works. In this case,
we are using the norm method to compute the $L_2$ norm of the
given elements. Using S.var defines Sympy variables and injects
them into the global namespace. It is more Pythonic to do
something like x0 = S.symbols('x0',real=True) instead but the
other way is quicker, especially for variables with many
dimensions.
The solution defines the exact point where the line is
tangent to the circle in Figure. The
Lagrange multiplier has incorporated the constraint into the objective
function.
Step3: There is something subtle and very important about the nature of the solution,
however. Notice that there are other points very close to the solution on the
circle, indicated by the squares in Figure. This
closeness could be a good thing, in case it helps us actually find a solution
in the first place, but it may be unhelpful in so far as it creates ambiguity.
Let's hold that thought and try the same problem using the $L_1$ norm instead
of the $L_2$ norm. Recall that
$$
\Vert \mathbf{x}\Vert_1 = \sum_{i=1}^d \vert x_i \vert
$$
where $d$ is the dimension of the vector $\mathbf{x}$. Thus, we can
reformulate the same problem in the $L_1$ norm as in the following,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_1 \
& \text{subject to
Step4: Programming Tip.
The cvxy module provides a unified and accessible interface to the powerful
cvxopt convex optimization package, as well as other open-source solver
packages.
As shown in Figure, the constant-norm
contour in the $L_1$ norm is shaped like a diamond instead of a circle.
Furthermore, the solutions found in each case are different. Geometrically,
this is because inflating the circular $L_2$ reaches out in all directions
whereas the $L_1$ ball creeps out along the principal axes. This effect is
much more pronounced in higher dimensional spaces where $L_1$-balls get more
spikey [^spikey]. Like the $L_2$ case, there are also neighboring points on
the constraint line, but notice that these are not close to the boundary of the
corresponding $L_1$ ball, as they were in the $L_2$ case. This means that
these would be harder to confuse with the optimal solution because they
correspond to a substantially different $L_1$ ball.
[^spikey]
Step5: The only change to the code is the $L_2$ norm and we get
the same solution as before.
Let's see what happens in higher dimensions for both $L_2$ and
$L_1$ as we move from two dimensions to four dimensions.
Step6: And also in the $L_2$ case with the following code,
Step7: Note that the $L_1$ solution has selected out only one
dimension for the solution, as the other components are
effectively zero. This is not so with the $L_2$ solution, which
has meaningful elements in multiple coordinates. This is because
the $L_1$ problem has many pointy corners in the four dimensional
space that poke at the hyperplane that is defined by the
constraint. This essentially means the subsets (namely, the points
at the corners) are found as solutions because these touch the
hyperplane. This effect becomes more pronounced in higher
dimensions, which is the main benefit of using the $L_1$ norm
as we will see in the next section.
Step8: <!-- dom
Step9: Now, we can define our coefficient vector $\boldsymbol{\beta}$
using the following code,
Step10: Next, we define the objective function we are trying to minimize
Step11: Programming Tip.
The Sympy Matrix class has useful methods like the norm function
used above to define the objective function. The ord=2 means we want
to use the $L_2$ norm. The expression in parenthesis evaluates to a
Matrix object.
Note that it is helpful to define real variables using
the keyword argument whenever applicable because it relieves
Sympy's internal machinery of dealing with complex numbers.
Finally, we can use calculus to solve this by setting the
derivatives of the objective function to zero.
Step12: Notice that the solution does not uniquely specify all the components
of the beta variable. This is a consequence of the $p<n$ nature of this
problem where $p=2$ and $n=3$. While the the existence of this ambiguity does
not alter the solution,
Step13: But it does change the length of the solution vector
beta,
Step14: If we want to minimize this length we can easily
use the same calculus as before,
Step15: This provides the solution of minimum length
in the $L_2$ sense,
Step16: But what is so special about solutions of minimum length? For machine
learning, driving the objective function to zero is symptomatic of overfitting
the data. Usually, at the zero bound, the machine learning method has
essentially memorized the training data, which is bad for generalization. Thus,
we can effectively stall this problem by defining a region for the solution
that is away from the zero-bound.
$$
\begin{aligned}
& \underset{\boldsymbol{\beta}}{\text{minimize}}
& & \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert_2^2 \
& \text{subject to
Step17: Note that the alpha scales of the penalty for the
$\Vert\boldsymbol{\beta}\Vert_2$. We set the fit_intercept=False argument to
omit the extra offset term from our example. The corresponding solution is the
following,
Step18: To double-check the solution, we can use some optimization tools from
Scipy and our previous Sympy analysis, as in the following,
Step19: Programming Tip.
We had to define the additional g function from the lambda function we
created from the Sympy expression in f because the minimize function
expects a single object vector as input instead of a three separate arguments.
which produces the same answer as the Ridge object. To
better understand the meaning of this result, we can re-compute the
mean squared error solution to this problem in one step using matrix
algebra instead of calculus,
Step20: Notice that this solves the posited problem exactly,
Step21: This means that the first term in the objective function
goes to zero,
$$
\Vert y-\mathbf{X}\boldsymbol{\beta}_{LS}\Vert=0
$$
But, let's examine the $L_2$ length of this solution versus
the ridge regression solution,
Step22: Thus, the ridge regression solution is shorter in the $L_2$
sense, but the first term in the objective function is not zero for
ridge regression,
Step23: Ridge regression solution trades fitting error
($\Vert y-\mathbf{X} \boldsymbol{\beta}\Vert_2$) for solution
length ($\Vert\boldsymbol{\beta}\Vert_2$).
Let's see this in action with a familiar example from
ch
Step24: <!-- dom
Step25: As before, we can use the optimization tools in Scipy to solve this
also,
Step26: Programming Tip.
The fmin function from Scipy's optimization module uses an
algorithm that does not depend upon derivatives. This is useful
because, unlike the $L_2$ norm, the $L_1$ norm has sharp corners
that make it harder to estimate derivatives.
This result matches the previous one from the
Scikit-learn Lasso object. Solving it using Scipy is motivating
and provides a good sanity check, but specialized algorithms are
required in practice. The following code block re-runs the lasso
with varying $\alpha$ and plots the coefficients in
Figure. Notice that as $\alpha$ increases, all
but one of the coefficients is driven to zero. Increasing $\alpha$
makes the trade-off between fitting the data in the $L_2$ sense
and wanting to reduce the number of nonzero coefficients
(equivalently, the number of features used) in the model. For a
given problem, it may be more practical to focus on reducing the
number of features in the model (i.e., large $\alpha$) than the
quality of the data fit in the training data. The lasso provides a
clean way to navigate this trade-off.
The following code loops over a set of $\alpha$ values and
collects the corresponding lasso coefficients to be plotted
in Figure | Python Code:
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
Explanation: Regularization
End of explanation
import sympy as S
S.var('x:2 l',real=True)
J=S.Matrix([x0,x1]).norm()**2 + l*(1-x0-2*x1)
sol=S.solve([J.diff(var) for var in (x0, x1, l)])  # a list (rather than map) keeps this Python 3 friendly
print(sol)
Explanation: We have referred to regularization in earlier sections, but we
want to develop this important idea more fully. Regularization is
the mechanism by which we navigate the bias/variance trade-off.
To get started, let's consider a classic constrained least
squares problem,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_2^2 \
& \text{subject to:}
& & x_0 + 2 x_1 = 1
\end{aligned}
$$
where $\Vert\mathbf{x}\Vert_2=\sqrt{x_0^2+x_1^2}$ is the
$L_2$ norm. Without the constraint, it would be easy to minimize
the objective function --- just take $\mathbf{x}=0$. Otherwise,
suppose we somehow know that $\Vert\mathbf{x}\Vert_2<c$, then
the locus of points defined by this inequality is the circle in
Figure. The constraint is the line in
the same figure. Because every value of $c$ defines a circle, the
constraint is satisfied when the circle touches the line. The
circle can touch the line at many different points, but we are
only interested in the smallest such circle because this is a
minimization problem. Intuitively, this means that we inflate a
$L_2$ ball at the origin and stop when it just touches the
constraint. The point of contact is our $L_2$ minimization
solution.
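A quick numeric check of that tangent point (a sketch; the Sympy solution for this problem works out to $(x_0, x_1) = (1/5, 2/5)$):
import numpy as np
xs = np.array([0.2, 0.4])
print(xs[0] + 2 * xs[1])        # satisfies the constraint: 1.0
print(np.linalg.norm(xs) ** 2)  # objective value 0.2, the smallest attainable on the line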
<!-- dom:FIGURE: [fig-machine_learning/regularization_001.png, width=500
frac=0.75] The solution of the constrained $L_2$ minimization problem is at the
point where the constraint (dark line) intersects the $L_2$ ball (gray circle)
centered at the origin. The point of intersection is indicated by the dark
circle. The two neighboring squares indicate points on the line that are close
to the solution. <div id="fig:regularization_001"></div> -->
<!-- begin figure -->
<div id="fig:regularization_001"></div>
<p>The solution of the constrained $L_2$ minimization problem is at the point
where the constraint (dark line) intersects the $L_2$ ball (gray circle)
centered at the origin. The point of intersection is indicated by the dark
circle. The two neighboring squares indicate points on the line that are close
to the solution.</p>
<img src="fig-machine_learning/regularization_001.png" width=500>
<!-- end figure -->
We can obtain the same result using the method of Lagrange
multipliers. We can rewrite the entire $L_2$ minimization problem
as one objective function using the Lagrange multiplier,
$\lambda$,
$$
J(x_0,x_1,\lambda) = x_0^2+x_1^2 + \lambda (1-x_0-2 x_1)
$$
and solve this as an ordinary function using calculus. Let's
do this using Sympy.
End of explanation
%matplotlib inline
from __future__ import division
import numpy as np
from numpy import pi, linspace, sqrt
from matplotlib.patches import Circle
from matplotlib.pylab import subplots
x1 = linspace(-1,1,10)
dx=linspace(.7,1.3,3)
fline = lambda x:(1-x)/2.
fig,ax=subplots()
_=ax.plot(dx*1/5,fline(dx*1/5),'s',ms=10,color='gray')
_=ax.plot(x1,fline(x1),color='gray',lw=3)
_=ax.add_patch(Circle((0,0),1/sqrt(5),alpha=0.3,color='gray'))
_=ax.plot(1/5,2/5,'o',color='k',ms=15)
_=ax.set_xlabel('$x_1$',fontsize=24)
_=ax.set_ylabel('$x_2$',fontsize=24)
_=ax.axis((-0.6,0.6,-0.6,0.6))
ax.set_aspect(1)
fig.tight_layout()
Explanation: Programming Tip.
Using the Matrix object is overkill for this problem but it
does demonstrate how Sympy's matrix machinery works. In this case,
we are using the norm method to compute the $L_2$ norm of the
given elements. Using S.var defines Sympy variables and injects
them into the global namespace. It is more Pythonic to do
something like x0 = S.symbols('x0',real=True) instead but the
other way is quicker, especially for variables with many
dimensions.
The solution defines the exact point where the line is
tangent to the circle in Figure. The
Lagrange multiplier has incorporated the constraint into the objective
function.
End of explanation
from cvxpy import Variable, Problem, Minimize, norm1, norm2
x=Variable(2,1,name='x')
constr=[np.matrix([[1,2]])*x==1]
obj=Minimize(norm1(x))
p= Problem(obj,constr)
p.solve()
print(x.value)
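# Optional cross-check (a sketch, not from the original text): the same L1 problem
# can be posed as a linear program with scipy.optimize.linprog by introducing
# t_i >= |x_i| and minimizing t_0 + t_1 subject to x_0 + 2*x_1 = 1.
from scipy.optimize import linprog
c = [0, 0, 1, 1]                      # minimize t0 + t1
A_ub = [[ 1, 0, -1,  0],
        [-1, 0, -1,  0],
        [ 0, 1,  0, -1],
        [ 0,-1,  0, -1]]              # encodes |x_i| <= t_i
b_ub = [0, 0, 0, 0]
A_eq = [[1, 2, 0, 0]]
b_eq = [1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None), (None, None), (0, None), (0, None)])
print(res.x[:2])                      # ~[0, 0.5], matching the cvxpy result above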
Explanation: There is something subtle and very important about the nature of the solution,
however. Notice that there are other points very close to the solution on the
circle, indicated by the squares in Figure. This
closeness could be a good thing, in case it helps us actually find a solution
in the first place, but it may be unhelpful in so far as it creates ambiguity.
Let's hold that thought and try the same problem using the $L_1$ norm instead
of the $L_2$ norm. Recall that
$$
\Vert \mathbf{x}\Vert_1 = \sum_{i=1}^d \vert x_i \vert
$$
where $d$ is the dimension of the vector $\mathbf{x}$. Thus, we can
reformulate the same problem in the $L_1$ norm as in the following,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_1 \
& \text{subject to:}
& & x_0 + 2 x_1 = 1
\end{aligned}
$$
It turns out that this problem is somewhat harder to
solve using Sympy, but we have convex optimization modules in Python
that can help.
End of explanation
constr=[np.matrix([[1,2]])*x==1]
obj=Minimize(norm2(x)) #L2 norm
p= Problem(obj,constr)
p.solve()
print(x.value)
Explanation: Programming Tip.
The cvxpy module provides a unified and accessible interface to the powerful
cvxopt convex optimization package, as well as other open-source solver
packages.
As shown in Figure, the constant-norm
contour in the $L_1$ norm is shaped like a diamond instead of a circle.
Furthermore, the solutions found in each case are different. Geometrically,
this is because inflating the circular $L_2$ reaches out in all directions
whereas the $L_1$ ball creeps out along the principal axes. This effect is
much more pronounced in higher dimensional spaces where $L_1$-balls get more
spikey [^spikey]. Like the $L_2$ case, there are also neighboring points on
the constraint line, but notice that these are not close to the boundary of the
corresponding $L_1$ ball, as they were in the $L_2$ case. This means that
these would be harder to confuse with the optimal solution because they
correspond to a substantially different $L_1$ ball.
[^spikey]: We discussed the geometry of high dimensional space
when we covered the curse of dimensionality in the
statistics chapter.
To double-check our earlier $L_2$ result, we can also use the
cvxpy module to find the $L_2$ solution as in the following
code,
End of explanation
x=Variable(4,1,name='x')
constr=[np.matrix([[1,2,3,4]])*x==1]
obj=Minimize(norm1(x))
p= Problem(obj,constr)
p.solve()
print(x.value)
Explanation: The only change to the code is the $L_2$ norm and we get
the same solution as before.
Let's see what happens in higher dimensions for both $L_2$ and
$L_1$ as we move from two dimensions to four dimensions.
End of explanation
constr=[np.matrix([[1,2,3,4]])*x==1]
obj=Minimize(norm2(x))
p= Problem(obj,constr)
p.solve()
print(x.value)
Explanation: And also in the $L_2$ case with the following code,
End of explanation
from matplotlib.patches import Rectangle, RegularPolygon
r=RegularPolygon((0,0),4,1/2,pi/2,alpha=0.5,color='gray')
fig,ax=subplots()
dx = np.array([-0.1,0.1])
_=ax.plot(dx,fline(dx),'s',ms=10,color='gray')
_=ax.plot(x1,fline(x1),color='gray',lw=3)
_=ax.plot(0,1/2,'o',color='k',ms=15)
_=ax.add_patch(r)
_=ax.set_xlabel('$x_1$',fontsize=24)
_=ax.set_ylabel('$x_2$',fontsize=24)
_=ax.axis((-0.6,0.6,-0.6,0.6))
_=ax.set_aspect(1)
fig.tight_layout()
Explanation: Note that the $L_1$ solution has selected out only one
dimension for the solution, as the other components are
effectively zero. This is not so with the $L_2$ solution, which
has meaningful elements in multiple coordinates. This is because
the $L_1$ problem has many pointy corners in the four dimensional
space that poke at the hyperplane that is defined by the
constraint. This essentially means the subsets (namely, the points
at the corners) are found as solutions because these touch the
hyperplane. This effect becomes more pronounced in higher
dimensions, which is the main benefit of using the $L_1$ norm
as we will see in the next section.
End of explanation
import sympy as S
from sympy import Matrix
X = Matrix([[1,2,3],
[3,4,5]])
y = Matrix([[1,2]]).T
Explanation: <!-- dom:FIGURE: [fig-machine_learning/regularization_002.png, width=500
frac=0.75] The diamond is the $L_1$ ball in two dimensions and the line is the
constraint. The point of intersection is the solution to the optimization
problem. Note that for $L_1$ optimization, the two nearby points on the
constraint (squares) do not touch the $L_1$ ball. Compare this with
[Figure](#fig:regularization_001). <div id="fig:regularization_002"></div> -->
<!-- begin figure -->
<div id="fig:regularization_002"></div>
<p>The diamond is the $L_1$ ball in two dimensions and the line is the
constraint. The point of intersection is the solution to the optimization
problem. Note that for $L_1$ optimization, the two nearby points on the
constraint (squares) do not touch the $L_1$ ball. Compare this with
[Figure](#fig:regularization_001).</p>
<img src="fig-machine_learning/regularization_002.png" width=500>
<!-- end figure -->
Ridge Regression
Now that we have a sense of the geometry of the situation, let's revisit
our classic linear regression probem. To recap, we want to solve the following
problem,
$$
\min_{\boldsymbol{\beta}\in \mathbb{R}^n} \Vert y -
\mathbf{X}\boldsymbol{\beta}\Vert
$$
where $\mathbf{X}=\left[
\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_p \right]^T$ stacks the $p$ measurement vectors $\mathbf{x}_i\in
\mathbb{R}^n$ as its rows. Furthermore, we assume that these $p$ rows are linearly
independent (i.e., $\texttt{rank}(\mathbf{X})=p$). Linear regression produces
the $\boldsymbol{\beta}$ that minimizes the mean squared error above. In the
case where $p=n$, there is a unique solution to this problem. However, when
$p<n$, there are infinitely many solutions because the system is underdetermined.
To make this concrete, let's work this out using Sympy. First,
let's define an example $\mathbf{X}$ and $\mathbf{y}$ matrix,
End of explanation
b0,b1,b2=S.symbols('b:3',real=True)
beta = Matrix([[b0,b1,b2]]).T # transpose
Explanation: Now, we can define our coefficient vector $\boldsymbol{\beta}$
using the following code,
End of explanation
obj=(X*beta -y).norm(ord=2)**2
Explanation: Next, we define the objective function we are trying to minimize
End of explanation
sol=S.solve([obj.diff(i) for i in beta])
beta.subs(sol)
Explanation: Programming Tip.
The Sympy Matrix class has useful methods like the norm function
used above to define the objective function. The ord=2 means we want
to use the $L_2$ norm. The expression in parenthesis evaluates to a
Matrix object.
Note that it is helpful to define real variables using
the keyword argument whenever applicable because it relieves
Sympy's internal machinery of dealing with complex numbers.
Finally, we can use calculus to solve this by setting the
derivatives of the objective function to zero.
End of explanation
obj.subs(sol)
Explanation: Notice that the solution does not uniquely specify all the components
of the beta variable. This is a consequence of the $p<n$ nature of this
problem where $p=2$ and $n=3$. While the existence of this ambiguity does
not alter the solution,
End of explanation
beta.subs(sol).norm(2)
Explanation: But it does change the length of the solution vector
beta,
End of explanation
S.solve((beta.subs(sol).norm()**2).diff())
Explanation: If we want to minimize this length we can easily
use the same calculus as before,
End of explanation
betaL2=beta.subs(sol).subs(b2,S.Rational(1,6))
betaL2
Explanation: This provides the solution of minimum length
in the $L_2$ sense,
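As a cross-check (a short sketch), NumPy's pseudo-inverse returns exactly this minimum-norm least-squares solution:
import numpy as np
Xn = np.array([[1., 2, 3], [3, 4, 5]])
yn = np.array([1., 2])
print(np.linalg.pinv(Xn).dot(yn))  # ~[0.1667, 0.1667, 0.1667], i.e. [1/6, 1/6, 1/6]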
End of explanation
from sklearn.linear_model import Ridge
clf = Ridge(alpha=100.0,fit_intercept=False)
clf.fit(np.array(X).astype(float),np.array(y).astype(float))
Explanation: But what is so special about solutions of minimum length? For machine
learning, driving the objective function to zero is symptomatic of overfitting
the data. Usually, at the zero bound, the machine learning method has
essentially memorized the training data, which is bad for generalization. Thus,
we can effectively stall this problem by defining a region for the solution
that is away from the zero-bound.
$$
\begin{aligned}
& \underset{\boldsymbol{\beta}}{\text{minimize}}
& & \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert_2^2 \
& \text{subject to:}
& & \Vert\boldsymbol{\beta}\Vert_2 < c
\end{aligned}
$$
where $c$ is the tuning parameter. Using the same process as before,
we can re-write this as the following,
$$
\min_{\boldsymbol{\beta}\in\mathbb{R}^n}\Vert
y-\mathbf{X}\boldsymbol{\beta}\Vert_2^2 +\alpha\Vert\boldsymbol{\beta}\Vert_2^2
$$
where $\alpha$ is the tuning parameter. These are the penalized or
Lagrange forms of these problems derived from the constrained versions. The
objective function is penalized by the $\Vert\boldsymbol{\beta}\Vert_2$ term.
For $L_2$ penalization, this is called ridge regression. This is
implemented in Scikit-learn as Ridge. The following code sets this up for
our example,
End of explanation
print(clf.coef_)
Explanation: Note that alpha scales the penalty on
$\Vert\boldsymbol{\beta}\Vert_2$. We set the fit_intercept=False argument to
omit the extra offset term from our example. The corresponding solution is the
following,
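As a sanity check (a sketch, using the same data and alpha as above), ridge regression also has a closed form $\boldsymbol{\beta}=(\mathbf{X}^T\mathbf{X}+\alpha\mathbf{I})^{-1}\mathbf{X}^T y$ that should reproduce these coefficients:
import numpy as np
Xn = np.array([[1., 2, 3], [3, 4, 5]])
yn = np.array([1., 2])
alpha = 100.0
beta_ridge = np.linalg.solve(Xn.T.dot(Xn) + alpha * np.eye(3), Xn.T.dot(yn))
print(beta_ridge)  # should match clf.coef_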
End of explanation
from scipy.optimize import minimize
f = S.lambdify((b0,b1,b2),obj+beta.norm()**2*100.)
g = lambda x:f(x[0],x[1],x[2])
out = minimize(g,[.1,.2,.3]) # initial guess
out.x
Explanation: To double-check the solution, we can use some optimization tools from
Scipy and our previous Sympy analysis, as in the following,
End of explanation
betaLS=X.T*(X*X.T).inv()*y
betaLS
Explanation: Programming Tip.
We had to define the additional g function from the lambda function we
created from the Sympy expression in f because the minimize function
expects a single vector argument as input instead of three separate arguments.
which produces the same answer as the Ridge object. To
better understand the meaning of this result, we can re-compute the
mean squared error solution to this problem in one step using matrix
algebra instead of calculus,
End of explanation
X*betaLS-y
Explanation: Notice that this solves the posited problem exactly,
End of explanation
print(betaLS.norm().evalf(), np.linalg.norm(clf.coef_))
Explanation: This means that the first term in the objective function
goes to zero,
$$
\Vert y-\mathbf{X}\boldsymbol{\beta}_{LS}\Vert=0
$$
But, let's examine the $L_2$ length of this solution versus
the ridge regression solution,
End of explanation
print((y-X*clf.coef_.T).norm()**2)
Explanation: Thus, the ridge regression solution is shorter in the $L_2$
sense, but the first term in the objective function is not zero for
ridge regression,
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
from numpy import cos, pi
# create chirp signal
xi = np.linspace(0,1,100)[:,None]
# sample chirp randomly
xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None]
# create sampled waveform
y = cos(2*pi*(xin+xin**2))
# create full waveform for reference
yi = cos(2*pi*(xi+xi**2))
# create polynomial features
qfit = PolynomialFeatures(degree=8)  # eighth-degree polynomial features
Xq = qfit.fit_transform(xin)
# reformat input as polynomial
Xiq = qfit.fit_transform(xi)
lr=LinearRegression() # create linear model
lr.fit(Xq,y) # fit linear model
# create ridge regression model and fit
clf = Ridge(alpha=1e-9,fit_intercept=False)
clf.fit(Xq,y)
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
from numpy import cos, pi
np.random.seed(1234567)
xi = np.linspace(0,1,100)[:,None]
xin = np.linspace(0,1,20)[:,None]
xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None]
f0 = 1 # init frequency
BW = 2
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2))
yi = cos(2*pi*(f0*xi+(BW/2.0)*xi**2))
qfit = PolynomialFeatures(degree=8)  # eighth-degree polynomial features
Xq = qfit.fit_transform(xin)
Xiq = qfit.fit_transform(xi)
lr=LinearRegression() # create linear model
_=lr.fit(Xq,y)
fig,axs=subplots(2,1,sharex=True,sharey=True)
fig.set_size_inches((6,6))
ax=axs[0]
_=ax.plot(xi,yi,label='true',ls='--',color='k')
_=ax.plot(xi,lr.predict(Xiq),label=r'$\beta_{LS}$',color='k')
_=ax.legend(loc=0)
_=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal')
_=ax.fill_between(xi.flatten(),yi.flatten(),lr.predict(Xiq).flatten(),color='gray',alpha=.3)
_=ax.set_title('Polynomial Regression of Chirp Signal')
_=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3)
clf = Ridge(alpha=1e-9,fit_intercept=False)
_=clf.fit(Xq,y)
ax=axs[1]
_=ax.plot(xi,yi,label=r'true',ls='--',color='k')
_=ax.plot(xi,clf.predict(Xiq),label=r'$\beta_{RR}$',color='k')
_=ax.legend(loc=(0.25,0.70))
_=ax.fill_between(xi.flatten(),yi.flatten(),clf.predict(Xiq).flatten(),color='gray',alpha=.3)
# add rug plot
_=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3)
_=ax.set_xlabel('$x$',fontsize=22)
_=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal')
_=ax.set_title('Ridge Regression of Chirp Signal')
Explanation: Ridge regression solution trades fitting error
($\Vert y-\mathbf{X} \boldsymbol{\beta}\Vert_2$) for solution
length ($\Vert\boldsymbol{\beta}\Vert_2$).
Let's see this in action with a familiar example from
ch:stats:sec:nnreg. Consider
Figure.
For this example, we created our usual chirp signal and attempted to
fit it with a high-dimensional polynomial, as we did in
the section ch:ml:sec:cv. The lower panel is the same except
with ridge
regression. The shaded gray area is the space between the true signal
and the approximant in both cases. The horizontal hash marks indicate
the subset of $x_i$ values that each regressor was trained on.
Thus, the training set represents a non-uniform sample of the
underlying chirp waveform. The top panel shows the usual polynomial
regression. Note that the regressor fits the given points extremely
well, but fails at the endpoint. The ridge regressor misses many of
the points in the middle, as indicated by the gray area, but does not
overshoot at the ends as much as the plain polynomial regression. This
is the basic trade-off for ridge regression. The Jupyter/IPython
notebook has the code for this graph, but the main steps
are shown in the following,
End of explanation
X = np.matrix([[1,2,3],
[3,4,5]])
y = np.matrix([[1,2]]).T
from sklearn.linear_model import Lasso
lr = Lasso(alpha=1.0,fit_intercept=False)
_=lr.fit(X,y)
print(lr.coef_)
Explanation: <!-- dom:FIGURE: [fig-machine_learning/regularization_003.png, width=500
frac=0.85] The top figure shows polynomial regression and the lower panel shows
polynomial ridge regression. The ridge regression does not match as well
throughout most of the domain, but it does not flare as violently at the ends.
This is because the ridge constraint holds the coefficient vector down at the
expense of poorer performance along the middle of the domain. <div
id="fig:regularization_003"></div> -->
<!-- begin figure -->
<div id="fig:regularization_003"></div>
<p>The top figure shows polynomial regression and the lower panel shows
polynomial ridge regression. The ridge regression does not match as well
throughout most of the domain, but it does not flare as violently at the ends.
This is because the ridge constraint holds the coefficient vector down at the
expense of poorer performance along the middle of the domain.</p>
<img src="fig-machine_learning/regularization_003.png" width=500>
<!-- end figure -->
Lasso
Lasso regression follows the same basic pattern as ridge regression,
except with the $L_1$ norm in the objective function.
$$
\min_{\boldsymbol{\beta}\in\mathbb{R}^n}\Vert
y-\mathbf{X}\boldsymbol{\beta}\Vert^2 +\alpha\Vert\boldsymbol{\beta}\Vert_1
$$
The interface in Scikit-learn is likewise the same.
The following is the same problem as before using lasso
instead of ridge regression,
End of explanation
from scipy.optimize import fmin
obj = 1/4.*(S.Matrix(X)*beta - S.Matrix(y)).norm(2)**2 + beta.norm(1)*l  # wrap the NumPy matrices so Sympy can work symbolically
f = S.lambdify((b0,b1,b2),obj.subs(l,1.0))
g = lambda x:f(x[0],x[1],x[2])
fmin(g,[0.1,0.2,0.3])
Explanation: As before, we can use the optimization tools in Scipy to solve this
also,
End of explanation
o=[]
alphas= np.logspace(-3,0,10)
for a in alphas:
clf = Lasso(alpha=a,fit_intercept=False)
_=clf.fit(X,y)
o.append(clf.coef_)
fig,ax=subplots()
fig.set_size_inches((8,5))
k=np.vstack(o)
ls = ['-','--',':','-.']
for i in range(k.shape[1]):
_=ax.semilogx(alphas,k[:,i],'o-',
label='coef %d'%(i),
color='k',ls=ls[i],
alpha=.8,)
_=ax.axis(ymin=-1e-1)
_=ax.legend(loc=0)
_=ax.set_xlabel(r'$\alpha$',fontsize=20)
_=ax.set_ylabel(r'Lasso coefficients',fontsize=16)
fig.tight_layout()
Explanation: Programming Tip.
The fmin function from Scipy's optimization module uses an
algorithm that does not depend upon derivatives. This is useful
because, unlike the $L_2$ norm, the $L_1$ norm has sharp corners
that make it harder to estimate derivatives.
This result matches the previous one from the
Scikit-learn Lasso object. Solving it using Scipy is motivating
and provides a good sanity check, but specialized algorithms are
required in practice. The following code block re-runs the lasso
with varying $\alpha$ and plots the coefficients in
Figure. Notice that as $\alpha$ increases, all
but one of the coefficients is driven to zero. Increasing $\alpha$
makes the trade-off between fitting the data in the $L_2$ sense
and wanting to reduce the number of nonzero coefficients
(equivalently, the number of features used) in the model. For a
given problem, it may be more practical to focus on reducing the
number of features in the model (i.e., large $\alpha$) than the
quality of the data fit in the training data. The lasso provides a
clean way to navigate this trade-off.
The following code loops over a set of $\alpha$ values and
collects the corresponding lasso coefficients to be plotted
in Figure
End of explanation |
11,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Simple Query
This is a simple single-level query.
Default Columns
Step2: The system also supports Pan-STARRS1 and 2MASS cross-matches using the panstarrs1 and twomass keywords
Step3: Different Defaults
If you want fewer default columns, this is an option through the defaults keyword.
Step4: Likewise, there's an option for much greater detail.
Step10: <br><br><br><br>
<br><br><br><br>
Complex Nested Query
A complex query like this one shows the real utility of this package.
Instead of keeping track of the complex SQL, we only need to pay close attention to the custom calculated columns.
This ADQL queries for data within a rectangular area on a sky rotated by a rotation matrix and specified North Galactic Pole angles. The specifics aren't important -- the real takeaway is that the sky rotation and calculation are written in a clear format, with all the parts of the query close together. Running the query is trivial after that. | Python Code:
circle = """
--Selections: Cluster RA
1=CONTAINS(POINT('ICRS',gaia.ra,gaia.dec),
CIRCLE('ICRS',{ra:.4f},{dec:.4f},{rad:.2f}))
""".format(ra=230, dec=0, rad=4)
df = make_simple_query(
WHERE=circle, # The WHERE part of the SQL
random_index=1e4, # a shortcut to use the random_index in 'WHERE'
ORDERBY='gaia.parallax', # setting the data ordering
pprint=True, # print the query
do_query=True, # perform the query using gaia_tools.query
local=False, # whether to perform the query locally
units=True # to fill in missing units from 'defaults' file
)
df
Explanation: Simple Query
This is a simple single-level query.
Default Columns
End of explanation
df = make_simple_query(
WHERE=circle, # The WHERE part of the SQL
random_index=1e4, # a shortcut to use the random_index in 'WHERE'
ORDERBY='gaia.parallax', # setting the data ordering
panstarrs1=True, twomass=True,
do_query=True, # perform the query using gaia_tools.query
local=False, # whether to perform the query locally
units=True # to fill in missing units from 'defaults' file
)
df
Explanation: The system also supports Pan-STARRS1 and 2MASS cross-matches using the panstarrs1 and twomass keywords
End of explanation
df = make_simple_query(
WHERE=circle, random_index=1e4, ORDERBY='gaia.parallax',
do_query=True, local=False, units=True,
defaults='empty',
)
df
Explanation: Different Defaults
If you want fewer default columns, this is an option through the defaults keyword.
End of explanation
df = make_simple_query(
WHERE=circle, random_index=1e4, ORDERBY='gaia.parallax',
do_query=True, local=False, units=True,
defaults='full'
)
df
Explanation: Likewise, there's an option for much greater detail.
End of explanation
###########
# Custom Calculations
# Innermost Level
l0cols = """
--Rotation Matrix
{K00}*cos(radians(dec))*cos(radians(ra))+
{K01}*cos(radians(dec))*sin(radians(ra))+
{K02}*sin(radians(dec)) AS cosphi1cosphi2,
{K10}*cos(radians(dec))*cos(radians(ra))+
{K11}*cos(radians(dec))*sin(radians(ra))+
{K12}*sin(radians(dec)) AS sinphi1cosphi2,
{K20}*cos(radians(dec))*cos(radians(ra))+
{K21}*cos(radians(dec))*sin(radians(ra))+
{K22}*sin(radians(dec)) AS sinphi2,
--c1, c2
{sindecngp}*cos(radians(dec)){mcosdecngp:+}*sin(radians(dec))*cos(radians(ra{mrangp:+})) as c1,
{cosdecngp}*sin(radians(ra{mrangp:+})) as c2
"""
# Inner Level
l1cols = """
gaia.cosphi1cosphi2, gaia.sinphi1cosphi2, gaia.sinphi2,
gaia.c1, gaia.c2,
atan2(sinphi1cosphi2, cosphi1cosphi2) AS phi1,
atan2(sinphi2, sinphi1cosphi2 / sin(atan2(sinphi1cosphi2, cosphi1cosphi2))) AS phi2
"""
# Inner Level
l2cols = """
gaia.sinphi1cosphi2, gaia.cosphi1cosphi2, gaia.sinphi2,
gaia.phi1, gaia.phi2,
gaia.c1, gaia.c2,
( c1*pmra+c2*pmdec)/cos(phi2) AS pmphi1,
(-c2*pmra+c1*pmdec)/cos(phi2) AS pmphi2
"""
# Outer Level
l3cols = """
gaia.phi1, gaia.phi2,
gaia.pmphi1, gaia.pmphi2
"""
###########
# Custom Selection
l3sel = """
phi1 > {phi1min:+}
AND phi1 < {phi1max:+}
AND phi2 > {phi2min:+}
AND phi2 < {phi2max:+}
"""
###########
# Custom substitutions
l3userasdict = {
'K00': .656, 'K01': .755, 'K02': .002,
'K10': .701, 'K11': .469, 'K12': .537,
'K20': .53, 'K21': .458, 'K22': .713,
'sindecngp': -0.925, 'cosdecngp': .382, 'mcosdecngp': -.382,
'mrangp': -0,
'phi1min': -0.175, 'phi1max': 0.175,
'phi2min': -0.175, 'phi2max': 0.175}
###########
# Making Query
df = make_query(
gaia_mags=True,
panstarrs1=True, # doing a Pan-STARRS1 crossmatch
user_cols=l3cols,
use_AS=True, user_ASdict=l3userasdict,
# Inner Query
FROM=make_query(
gaia_mags=True,
user_cols=l2cols,
# Inner Query
FROM=make_query(
gaia_mags=True,
user_cols=l1cols,
# Innermost Query
FROM=make_query(
gaia_mags=True,
inmostquery=True, # telling system this is the innermost level
user_cols=l0cols,
random_index=1e4 # quickly specifying random index
)
)
),
WHERE=l3sel,
ORDERBY='gaia.source_id',
pprint=True,
# doing query
do_query=True, local=False, units=True
)
df
Explanation: Complex Nested Query
A complex query like this one shows the real utility of this package.
Instead of keeping track of the complex SQL, we only need to pay close attention to the custom calculated columns.
This ADQL query selects data within a rectangular area on a sky rotated by a rotation matrix built from specified North Galactic Pole angles. The specifics aren't important -- the real takeaway is that the sky rotation and calculation are written in a clear format, with all the parts of the query close together. Running the query is trivial after that.
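For intuition about how the pieces fit together: the {K00}, {sindecngp}, ... fields in the column templates are plain str.format placeholders, and make_query is assumed to substitute them from user_ASdict. The same substitution can be previewed by hand -- an illustrative sketch only, not part of the package API:
filled_l0cols = l0cols.format(**l3userasdict)  # hypothetical manual fill-in of the innermost columns
print(filled_l0cols)  # the rotation-matrix columns with numeric coefficients in place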
End of explanation |
11,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Meetup 1
Going to parse texts for most used words.
Step1: We want to "tokenize" the text and discard "stopwords" like 'a', 'the', 'in'. These words aren't relevant for our analysis.
To tokenize our text we're going to use regular expressions. Regular expressions are cool and you should try to use them whenever you can. To use regular expressions we need to import the regular expression module re. Let's do this in the next cell!!
Step2: We want to tokenize words. We will use \w+ regular expression to tokenize all the words.
- Let's break this down - \w will match any alphanumeric character and the underscore
- The + will match one or more of those characters in a row, so \w+ picks out whole words
Step3: That was the easy part..... We want all the data (text) to be "normalized". The word 'Linear' is different from the word 'linear', but for our case it shouldn't be counted twice.
Lets create a Python list container/data structure to store all of our words. For a more in depth look at Python lists and how to use them efficiently take a look at .....
Step4: Now we must...clean the data yet more. It's like when you think you've cleaned your room but your mom tells you it ain't that clean yet.
Step5: Now we have a Python list of stop words and a Python list of words in our text. We want to cross reference the tokens with the stop words and save those in a new list. Lets do that....
Step6: Now comes the real fun stuff. Lets plot the word frequency histogram with two lines of actual code. | Python Code:
# Lets see how many lines are in the PDF
# We can use the '!' special character to run Linux commands inside of our notebook
!wc -l test.txt
# Now lets see how many words
!wc -w test.txt
import nltk
from nltk import tokenize
# Lets open the file so we can access the ascii contents
# fd stands for file descriptor but we can use whatever name we want
# the open command returns a file descriptor object, which itself isn't very useful
# so we need to read the entire contents so we have a text string we can parse
# advanced: use a context manager with open() as x:
fd = open('test.txt', 'r')
text = fd.read()
text
Explanation: Meetup 1
Going to parse texts for most used words.
End of explanation
# import the regular expression module
import re
Explanation: We want to "tokenize" the text and discard "stopwords" like 'a', 'the', 'in'. These words aren't relevant for our analysis.
To tokenize our text we're going to use regular expressions. Regular expressions are cool and you should try to use them whenever you can. To use regular expressions we need to import the regular expression module re. Let's do this in the next cell!!
End of explanation
match_words = '\w+'
tokens = re.findall(match_words, text)
tokens[0:9]
# We can also use nltk to accomplish the same thing
# from nltk.tokenize import RegexpTokenizer
# tokenizer = RegexpTokenizer('\w+')
# tokenizer.tokenize(text)
Explanation: We want to tokenize words. We will use the \w+ regular expression to tokenize all the words.
- Let's break this down - \w will match any alphanumeric character and the underscore
- The + will match one or more of those characters in a row, so \w+ picks out whole words
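A tiny self-contained check of what the pattern captures (illustrative only, using the re module imported above):
re.findall('\w+', 'Linear Algebra, 2nd ed.')  # -> ['Linear', 'Algebra', '2nd', 'ed']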
End of explanation
words = []
for word in tokens:
words.append(word.lower())
words[0:8]
Explanation: That was the easy part..... We want all the data (text) to be "normalized". The word 'Linear' is different from the word 'linear', but for our case it shouldn't be counted twice.
Let's create a Python list container/data structure to store all of our words. For a more in depth look at Python lists and how to use them efficiently take a look at .....
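As a side note, the same normalization can be written more compactly with a list comprehension; this is just an equivalent alternative to the loop above:
words = [word.lower() for word in tokens]  # same result as the explicit for-loop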
End of explanation
#Here we want a list of common stopwords but we need to download them first.
nltk.download('stopwords')
stop_words = nltk.corpus.stopwords.words('english')
stop_words
Explanation: Now we must...clean the data yet more. It's like when you think you've cleaned your room but your mom tells you it ain't that clean yet.
End of explanation
words_nsw = []
for w in words:
if w not in stop_words:
words_nsw.append(w)
words_nsw[0:11]
Explanation: Now we have a Python list of stop words and a Python list of words in our text. We want to cross-reference the tokens against the stop words and save the words that aren't stop words in a new list. Let's do that....
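One optional tweak: stop_words is a plain list, and membership tests against a set are much faster on long texts. An equivalent sketch of the same filtering:
stop_set = set(stop_words)                           # O(1) membership checks
words_nsw = [w for w in words if w not in stop_set]  # same result as the loop above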
End of explanation
# lets import a graphing and data visualization library
import matplotlib.pyplot as plt
# Lets tell jupyter notebook to display images inside our notebook
# %matplotlib inline
freq_dist = nltk.FreqDist(words_nsw)
freq_dist.plot(30)
Explanation: Now comes the real fun stuff. Let's plot the word frequency histogram with two lines of actual code.
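If you only want the counts without nltk's plot, the standard library gives the same numbers; shown purely as an alternative sketch:
from collections import Counter
Counter(words_nsw).most_common(30)  # top 30 (word, count) pairs -- the same data FreqDist plots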
End of explanation |
11,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
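Every property cell below follows the same two-call pattern: DOC.set_id selects the property and DOC.set_value records its value(s). Purely as an illustrative sketch (the value shown is hypothetical, not a CNRM-CERFACS setting):
DOC.set_id('cmip6.ocean.key_properties.model_name')
DOC.set_value('NEMO 3.6')  # hypothetical example value -- replace with the model's actual name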
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
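For orientation only: ocean models typically use a seawater specific heat close to 4.0e3 J/(kg K). A hypothetical fill-in, not the CNRM-CERFACS value:
DOC.set_value(3991.9)  # illustrative magnitude only -- replace with the model's actual cpocean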
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
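Again for orientation: Boussinesq reference densities are usually chosen in the 1020-1035 kg/m3 range. A hypothetical fill-in, not the CNRM-CERFACS value:
DOC.set_value(1026.0)  # illustrative magnitude only -- replace with the model's actual rhozero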
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e., is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e., is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
11,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations [1] [2] [3]_.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects [4]_. Here we use the 24 unique images of faces
and body parts.
References
.. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
Science 210.4468 (1980)
Step1: Let's restrict the number of conditions to speed up computation
Step2: Define stimulus - trigger mapping
Step3: Let's make the event_id dictionary
Step4: Read MEG data
Step5: Epoch data
Step6: Let's plot some conditions
Step7: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al., we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also, here we use ROC-AUC as the performance metric while the
paper uses accuracy. Finally, for the sake of time, we apply
RSA to a single window of data while Cichy et al. did it for each
time instant separately.
Step8: Compute confusion matrix using ROC-AUC
Step9: Plot
Step10: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together. | Python Code:
# Authors: Jean-Remi King <[email protected]>
# Jaakko Leppakangas <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
Explanation: Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations [1] [2] [3]_.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects [4]_. Here we use the 24 unique images of faces
and body parts.
References
.. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
Science 210.4468 (1980): 390-398.
.. [2] Laakso, A. & Cottrell, G.. "Content and cluster analysis:
assessing representational similarity in neural systems." Philosophical
psychology 13.1 (2000): 47-76.
.. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. "Representational
similarity analysis-connecting the branches of systems neuroscience."
Frontiers in systems neuroscience 2 (2008): 4.
.. [4] Cichy, R. M., Pantazis, D., & Oliva, A. "Resolving human object
recognition in space and time." Nature neuroscience (2014): 17(3),
455-462.
End of explanation
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
Explanation: Let's restrict the number of conditions to speed up computation
End of explanation
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
Explanation: Define stimulus - trigger mapping
End of explanation
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
Explanation: Let's make the event_id dictionary
End of explanation
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block) for block in range(n_runs)]
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
mne.viz.plot_events(events, sfreq=raw.info['sfreq'])
Explanation: Read MEG data
End of explanation
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
Explanation: Epoch data
End of explanation
epochs['face'].average().plot()
epochs['not-face'].average().plot()
Explanation: Let's plot some conditions
End of explanation
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='lbfgs'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
Explanation: Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach from Cichy et al., we'll use a multiclass
classifier (Multinomial Logistic Regression) while the paper uses
all pairwise binary classification tasks to make the RDM.
Also, here we use ROC-AUC as the performance metric while the
paper uses accuracy. Finally, for the sake of time, we apply
RSA to a single window of data while Cichy et al. did it for each
time instant separately.
End of explanation
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
Explanation: Compute confusion matrix using ROC-AUC
End of explanation
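# Illustrative sanity check (added, not part of the original analysis): roc_auc_score
# is 1.0 when the scores perfectly separate the two classes and 0.5 at chance level,
# which is why the matrix above is plotted with a colour scale centred on 0.5.
toy_true = np.array([1, 1, 0, 0])
print(roc_auc_score(toy_true, np.array([0.9, 0.8, 0.2, 0.1])))  # perfect separation -> 1.0
print(roc_auc_score(toy_true, np.array([0.6, 0.4, 0.7, 0.3])))  # chance level -> 0.5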
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
Explanation: Plot
End of explanation
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
Explanation: Confusion matrices related to mental representations have historically been
summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together.
End of explanation |
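# Illustrative only (added): MDS with a precomputed dissimilarity matrix embeds items
# in 2D so that similar items (small dissimilarity) end up close together, which is
# what the scatter plot above exploits. The tiny matrix below is made up for illustration.
toy_dissimilarity = np.array([[0., 1., 4.],
                              [1., 0., 4.],
                              [4., 4., 0.]])
print(MDS(2, dissimilarity='precomputed', random_state=0).fit_transform(toy_dissimilarity))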
11,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with sensor locations
This tutorial describes how to read and plot sensor locations, and how
the physical location of sensors is handled in MNE-Python.
Step1: About montages and layouts
Step2: .. sidebar
Step3: Once loaded, a montage can be applied to data via one of the instance methods
such as
Step4:
Step5: Controlling channel projection (MNE vs EEGLAB)
Channel positions in 2d space are obtained by projecting their actual 3d
positions using a sphere as a reference. Because 'standard_1020' montage
contains realistic, not spherical, channel positions, we will use a different
montage to demonstrate controlling how channels are projected to 2d space.
Step6: By default a sphere with an origin in (0, 0, 0) x, y, z coordinates and
radius of 0.095 meters (9.5 cm) is used. You can use a different sphere
radius by passing a single value to sphere argument in any function that
plots channels in 2d (like
Step7: To control not only radius, but also the sphere origin, pass a
(x, y, z, radius) tuple to sphere argument
Step8: In mne-python the head center and therefore the sphere center are calculated
using fiducial points. Because of this the head circle represents head
circumference at the nasion and ear level, and not where it is commonly
measured in 10-20 EEG system
Step9: If you have previous EEGLAB experience you may prefer its convention to
represent 10-20 head circumference with the head circle. To get EEGLAB-like
channel layout you would have to move the sphere origin a few centimeters
up on the z dimension
Step10: Instead of approximating the EEGLAB-esque sphere location as above, you can
calculate the sphere origin from position of Oz, Fpz, T3/T7 or T4/T8
channels. This is easier once the montage has been applied to the data and
channel positions are in the head space - see
this example <ex-topomap-eeglab-style>.
Reading sensor digitization files
In the sample data, setting the digitized EEG montage was done prior to
saving the
Step11: It's probably evident from the 2D topomap above that there is some
irregularity in the EEG sensor positions in the sample dataset
<sample-dataset> — this is because the sensor positions in that dataset are
digitizations of the sensor positions on an actual subject's head, rather
than idealized sensor positions based on a spherical head model. Depending on
what system was used to digitize the electrode positions (e.g., a Polhemus
Fastrak digitizer), you must use different montage reading functions (see
dig-formats). The resulting
Step12:
Step13: You may have noticed that the file formats and filename extensions of the
built-in layout and montage files vary considerably. This reflects different
manufacturers' conventions; to make loading easier the montage and layout
loading functions in MNE-Python take the filename without its extension so
you don't have to keep track of which file format is used by which
manufacturer.
To load a layout file, use the
Step14: Similar to the picks argument for selecting channels from
Step15: If you're working with a | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True, verbose=False)
Explanation: Working with sensor locations
This tutorial describes how to read and plot sensor locations, and how
the physical location of sensors is handled in MNE-Python.
:depth: 2
As usual we'll start by importing the modules we need and loading some
example data <sample-dataset>:
End of explanation
montage_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'montages')
print('\nBUILT-IN MONTAGE FILES')
print('======================')
print(sorted(os.listdir(montage_dir)))
Explanation: About montages and layouts
:class:Montages <mne.channels.DigMontage> contain sensor
positions in 3D (x, y, z, in meters), and can be used to set
the physical positions of sensors. By specifying the location of sensors
relative to the brain, :class:Montages <mne.channels.DigMontage> play an
important role in computing the forward solution and computing inverse
estimates.
In contrast, :class:Layouts <mne.channels.Layout> are idealized 2-D
representations of sensor positions, and are primarily used for arranging
individual sensor subplots in a topoplot, or for showing the approximate
relative arrangement of sensors as seen from above.
Working with built-in montages
The 3D coordinates of MEG sensors are included in the raw recordings from MEG
systems, and are automatically stored in the info attribute of the
:class:~mne.io.Raw file upon loading. EEG electrode locations are much more
variable because of differences in head shape. Idealized montages for many
EEG systems are included during MNE-Python installation; these files are
stored in your mne-python directory, in the
:file:mne/channels/data/montages folder:
End of explanation
ten_twenty_montage = mne.channels.make_standard_montage('standard_1020')
print(ten_twenty_montage)
Explanation: .. sidebar:: Computing sensor locations
If you are interested in how standard ("idealized") EEG sensor positions
are computed on a spherical head model, the `eeg_positions`_ repository
provides code and documentation to this end.
These built-in EEG montages can be loaded via
:func:mne.channels.make_standard_montage. Note that when loading via
:func:~mne.channels.make_standard_montage, provide the filename without
its file extension:
End of explanation
# these will be equivalent:
# raw_1020 = raw.copy().set_montage(ten_twenty_montage)
# raw_1020 = raw.copy().set_montage('standard_1020')
Explanation: Once loaded, a montage can be applied to data via one of the instance methods
such as :meth:raw.set_montage <mne.io.Raw.set_montage>. It is also possible
to skip the loading step by passing the filename string directly to the
:meth:~mne.io.Raw.set_montage method. This won't work with our sample
data, because its channel names don't match the channel names in the
standard 10-20 montage, so these commands are not run here:
End of explanation
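# A sketch only (added), kept commented out like the cell above since it is not meant
# to run on the sample data: if your EEG channel names differed from the montage's
# (e.g. 'EEG 001' instead of 'Fp1'), you could rename them first and then set the
# montage. The mapping below is hypothetical -- build it from your own recording.
# raw_1020 = raw.copy()
# raw_1020.rename_channels({'EEG 001': 'Fp1', 'EEG 002': 'Fpz'})  # etc.
# raw_1020.set_montage('standard_1020')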
fig = ten_twenty_montage.plot(kind='3d')
fig.gca().view_init(azim=70, elev=15)
ten_twenty_montage.plot(kind='topomap', show_names=False)
Explanation: :class:Montage <mne.channels.DigMontage> objects have a
:meth:~mne.channels.DigMontage.plot method for visualization of the sensor
locations in 3D; 2D projections are also possible by passing
kind='topomap':
End of explanation
biosemi_montage = mne.channels.make_standard_montage('biosemi64')
biosemi_montage.plot(show_names=False)
Explanation: Controlling channel projection (MNE vs EEGLAB)
Channel positions in 2d space are obtained by projecting their actual 3d
positions using a sphere as a reference. Because 'standard_1020' montage
contains realistic, not spherical, channel positions, we will use a different
montage to demonstrate controlling how channels are projected to 2d space.
End of explanation
biosemi_montage.plot(show_names=False, sphere=0.07)
Explanation: By default a sphere with an origin in (0, 0, 0) x, y, z coordinates and
radius of 0.095 meters (9.5 cm) is used. You can use a different sphere
radius by passing a single value to sphere argument in any function that
plots channels in 2d (like :meth:~mne.channels.DigMontage.plot that we use
here, but also for example :func:mne.viz.plot_topomap):
End of explanation
biosemi_montage.plot(show_names=False, sphere=(0.03, 0.02, 0.01, 0.075))
Explanation: To control not only radius, but also the sphere origin, pass a
(x, y, z, radius) tuple to sphere argument:
End of explanation
biosemi_montage.plot()
Explanation: In mne-python the head center and therefore the sphere center are calculated
using fiducial points. Because of this the head circle represents head
circumference at the nasion and ear level, and not where it is commonly
measured in the 10-20 EEG system: above the nasion at T4/T8, T3/T7, Oz, Fz level.
Notice below that by default T7 and Oz channels are placed within the head
circle, not on the head outline:
End of explanation
biosemi_montage.plot(sphere=(0, 0, 0.035, 0.094))
Explanation: If you have previous EEGLAB experience you may prefer its convention to
represent 10-20 head circumference with the head circle. To get EEGLAB-like
channel layout you would have to move the sphere origin a few centimeters
up on the z dimension:
End of explanation
fig = plt.figure()
ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(122, projection='3d')
raw.plot_sensors(ch_type='eeg', axes=ax2d)
raw.plot_sensors(ch_type='eeg', axes=ax3d, kind='3d')
ax3d.view_init(azim=70, elev=15)
Explanation: Instead of approximating the EEGLAB-esque sphere location as above, you can
calculate the sphere origin from position of Oz, Fpz, T3/T7 or T4/T8
channels. This is easier once the montage has been applied to the data and
channel positions are in the head space - see
this example <ex-topomap-eeglab-style>.
Reading sensor digitization files
In the sample data, setting the digitized EEG montage was done prior to
saving the :class:~mne.io.Raw object to disk, so the sensor positions are
already incorporated into the info attribute of the :class:~mne.io.Raw
object (see the documentation of the reading functions and
:meth:~mne.io.Raw.set_montage for details on how that works). Because of
that, we can plot sensor locations directly from the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.plot_sensors method, which provides
similar functionality to
:meth:montage.plot() <mne.channels.DigMontage.plot>.
:meth:~mne.io.Raw.plot_sensors also allows channel selection by type, can
color-code channels in various ways (by default, channels listed in
raw.info['bads'] will be plotted in red), and allows drawing into an
existing matplotlib axes object (so the channel positions can easily be
made as a subplot in a multi-panel figure):
End of explanation
fig = mne.viz.plot_alignment(raw.info, trans=None, dig=False, eeg=False,
surfaces=[], meg=['helmet', 'sensors'],
coord_frame='meg')
mne.viz.set_3d_view(fig, azimuth=50, elevation=90, distance=0.5)
Explanation: It's probably evident from the 2D topomap above that there is some
irregularity in the EEG sensor positions in the sample dataset
<sample-dataset> — this is because the sensor positions in that dataset are
digitizations of the sensor positions on an actual subject's head, rather
than idealized sensor positions based on a spherical head model. Depending on
what system was used to digitize the electrode positions (e.g., a Polhemus
Fastrak digitizer), you must use different montage reading functions (see
dig-formats). The resulting :class:montage <mne.channels.DigMontage>
can then be added to :class:~mne.io.Raw objects by passing it to the
:meth:~mne.io.Raw.set_montage method (just as we did above with the name of
the idealized montage 'standard_1020'). Once loaded, locations can be
plotted with :meth:~mne.channels.DigMontage.plot and saved with
:meth:~mne.channels.DigMontage.save, like when working with a standard
montage.
<div class="alert alert-info"><h4>Note</h4><p>When setting a montage with :meth:`~mne.io.Raw.set_montage`
the measurement info is updated in two places (the ``chs``
and ``dig`` entries are updated). See `tut-info-class`.
``dig`` may contain HPI, fiducial, or head shape points in
addition to electrode locations.</p></div>
Rendering sensor position with mayavi
It is also possible to render an image of a MEG sensor helmet in 3D, using
mayavi instead of matplotlib, by calling :func:mne.viz.plot_alignment
End of explanation
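# A minimal sketch (added, not from the original tutorial) of the digitization workflow
# described above. The filename is hypothetical, and the reader function depends on your
# digitizer's file format (see the dig-formats reference mentioned above).
# dig_montage = mne.channels.read_dig_fif('my_digitized_positions.fif')
# raw.copy().set_montage(dig_montage)
# dig_montage.save('my_digitized_positions-dig.fif')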
layout_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'layouts')
print('\nBUILT-IN LAYOUT FILES')
print('=====================')
print(sorted(os.listdir(layout_dir)))
Explanation: :func:~mne.viz.plot_alignment requires an :class:~mne.Info object, and
can also render MRI surfaces of the scalp, skull, and brain (by passing
keywords like 'head', 'outer_skull', or 'brain' to the
surfaces parameter) making it useful for assessing coordinate frame
transformations <plot_source_alignment>. For examples of various uses of
:func:~mne.viz.plot_alignment, see plot_montage,
:doc:../../auto_examples/visualization/plot_eeg_on_scalp, and
:doc:../../auto_examples/visualization/plot_meg_sensors.
Working with layout files
As with montages, many layout files are included during MNE-Python
installation, and are stored in the :file:mne/channels/data/layouts folder:
End of explanation
biosemi_layout = mne.channels.read_layout('biosemi')
biosemi_layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout)
Explanation: You may have noticed that the file formats and filename extensions of the
built-in layout and montage files vary considerably. This reflects different
manufacturers' conventions; to make loading easier the montage and layout
loading functions in MNE-Python take the filename without its extension so
you don't have to keep track of which file format is used by which
manufacturer.
To load a layout file, use the :func:mne.channels.read_layout function, and
provide the filename without its file extension. You can then visualize the
layout using its :meth:~mne.channels.Layout.plot method, or (equivalently)
by passing it to :func:mne.viz.plot_layout:
End of explanation
midline = np.where([name.endswith('z') for name in biosemi_layout.names])[0]
biosemi_layout.plot(picks=midline)
Explanation: Similar to the picks argument for selecting channels from
:class:~mne.io.Raw objects, the :meth:~mne.channels.Layout.plot method of
:class:~mne.channels.Layout objects also has a picks argument. However,
because layouts only contain information about sensor name and location (not
sensor type), the :meth:~mne.channels.Layout.plot method only allows
picking channels by index (not by name or by type). Here we find the indices
we want using :func:numpy.where; selection by name or type is possible via
:func:mne.pick_channels or :func:mne.pick_types.
End of explanation
layout_from_raw = mne.channels.make_eeg_layout(raw.info)
# same result as: mne.channels.find_layout(raw.info, ch_type='eeg')
layout_from_raw.plot()
Explanation: If you're working with a :class:~mne.io.Raw object that already has sensor
positions incorporated, you can create a :class:~mne.channels.Layout object
with either the :func:mne.channels.make_eeg_layout function or
(equivalently) the :func:mne.channels.find_layout function.
End of explanation |
11,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: Next, we read in the example data. Note that you will need to update the filepaths below to work on your machine.
Step2: Now we can plot the data.
Step3: Save this project with the name | Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as mplt
from scipy import linalg
from scipy import io
### Ordinary Least Squares
### SOLVES 2-CLASS LEAST SQUARES PROBLEM
### LOAD DATA ###
### IF LoadClasses IS True, THEN LOAD DATA FROM FILES ###
### OTHERSIE, RANDOMLY GENERATE DATA ###
LoadClasses = True
TrainOutliers = False
TestOutliers = False
NOut = 20
NSampsClass = 200
NSamps = 2*NSampsClass
Explanation: syncID: 1f8217240c064ed1a67b9db20e9362f4
title: "Classification of Hyperspectral Data with Ordinary Least Squares in Python"
description: "Learn to classify spectral data using the Ordinary Least Squares method."
dateCreated: 2017-06-21
authors: Paul Gader
contributors: Donal O'Leary
estimatedTime: 1 hour
packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP1.30006, NEON.DP3.30006, NEON.DP1.30008
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/hyperspectral-classification/Classification_OLS_py/Classification_OLS_py.ipynb
tutorialSeries: intro-hsi-py-series
urlTitle: classification-ols-python
In this tutorial, we will learn to classify spectral data using the
Ordinary Least Squares method.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Classify spectral remote sensing data using Ordinary Least Squares.
### Install Python Packages
* **numpy**
* **gdal**
* **matplotlib**
* **matplotlib.pyplot**
### Download Data
<a href="https://ndownloader.figshare.com/files/8730436">
Download the spectral classification teaching data subset</a>
<a href="https://ndownloader.figshare.com/files/8730436" class="link--button link--arrow">
Download Dataset</a>
### Additional Materials
This tutorial was prepared in conjunction with a presentation on spectral classification
that can be downloaded.
<a href="https://ndownloader.figshare.com/files/8730613">
Download Dr. Paul Gader's Classification 1 PPT</a>
<a href="https://ndownloader.figshare.com/files/8731960">
Download Dr. Paul Gader's Classification 2 PPT</a>
<a href="https://ndownloader.figshare.com/files/8731963">
Download Dr. Paul Gader's Classification 3 PPT</a>
</div>
Classification with Ordinary Least Squares solves the 2-class least squares problem.
First, we load the required packages and set initial variables.
End of explanation
if LoadClasses:
### GET FILENAMES %%%
### THESE ARE THE OPTIONS ###
### LinSepC1, LinSepC2,LinSepC2Outlier (Still Linearly Separable) ###
### NonLinSepC1, NonLinSepC2, NonLinSepC22 ###
## You will need to update these filepaths for your machine:
InFile1 = '/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NonLinSepC1.mat'
InFile2 = '/Users/olearyd/Git/data/RSDI2017-Data-SpecClass/NonLinSepC2.mat'
C1Dict = io.loadmat(InFile1)
C2Dict = io.loadmat(InFile2)
C1 = C1Dict['NonLinSepC1']
C2 = C2Dict['NonLinSepC2']
if TrainOutliers:
### Let's Make Some Noise ###
Out1 = 2*np.random.rand(NOut,2)-0.5
Out2 = 2*np.random.rand(NOut,2)-0.5
C1 = np.concatenate((C1,Out1),axis=0)
C2 = np.concatenate((C2,Out2),axis=0)
NSampsClass = NSampsClass+NOut
NSamps = 2*NSampsClass
else:
### Randomly Generate Some Data
### Make a covariance using a diagonal array and rotation matrix
pi = 3.141592653589793
Lambda1 = 0.25
Lambda2 = 0.05
DiagMat = np.array([[Lambda1, 0.0],[0.0, Lambda2]])
RotMat = np.array([[np.sin(pi/4), np.cos(pi/4)], [-np.cos(pi/4), np.sin(pi/4)]])
mu1 = np.array([0,0])
mu2 = np.array([1,1])
Sigma = np.dot(np.dot(RotMat.T, DiagMat), RotMat)
C1 = np.random.multivariate_normal(mu1, Sigma, NSampsClass)
C2 = np.random.multivariate_normal(mu2, Sigma, NSampsClass)
print(Sigma)
print(C1.shape)
print(C2.shape)
Explanation: Next, we read in the example data. Note that you will need to update the filepaths below to work on your machine.
End of explanation
### PLOT DATA ###
matplotlib.pyplot.figure(1)
matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo')
matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro')
matplotlib.pyplot.show()
### SET UP TARGET OUTPUTS ###
TargetOutputs = np.ones((NSamps,1))
TargetOutputs[NSampsClass:NSamps] = -TargetOutputs[NSampsClass:NSamps]
### PLOT TARGET OUTPUTS ###
matplotlib.pyplot.figure(2)
matplotlib.pyplot.plot(range(NSampsClass), TargetOutputs[range(NSampsClass)], 'b-')
matplotlib.pyplot.plot(range(NSampsClass, NSamps), TargetOutputs[range(NSampsClass, NSamps)], 'r-')
matplotlib.pyplot.show()
### FIND LEAST SQUARES SOLUTION ###
AllSamps = np.concatenate((C1,C2),axis=0)
AllSampsBias = np.concatenate((AllSamps, np.ones((NSamps,1))), axis=1)
Pseudo = linalg.pinv2(AllSampsBias)
w = Pseudo.dot(TargetOutputs)
w
### COMPUTE OUTPUTS ON TRAINING DATA ###
y = AllSampsBias.dot(w)
### PLOT OUTPUTS FROM TRAINING DATA ###
matplotlib.pyplot.figure(3)
matplotlib.pyplot.plot(range(NSamps), y, 'm')
matplotlib.pyplot.plot(range(NSamps),np.zeros((NSamps,1)), 'b')
matplotlib.pyplot.plot(range(NSamps), TargetOutputs, 'k')
matplotlib.pyplot.title('TrainingOutputs (Magenta) vs Desired Outputs (Black)')
matplotlib.pyplot.show()
### CALCULATE AND PLOT LINEAR DISCRIMINANT ###
Slope = -w[1]/w[0]
Intercept = -w[2]/w[0]
Domain = np.linspace(-1.1, 1.1, 60) # set up the decision surface domain, -1.1 to 1.1 (looking at the data), do it 60 times
Disc = Slope*Domain+Intercept
matplotlib.pyplot.figure(4)
matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo')
matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro')
matplotlib.pyplot.plot(Domain, Disc, 'k-')
matplotlib.pyplot.ylim([-1.1,1.3])
matplotlib.pyplot.title('Ordinary Least Squares')
matplotlib.pyplot.show()
RegConst = 0.1
AllSampsBias = np.concatenate((AllSamps, np.ones((NSamps,1))), axis=1)
AllSampsBiasT = AllSampsBias.T
XtX = AllSampsBiasT.dot(AllSampsBias)
AllSampsReg = XtX + RegConst*np.eye(3)
Pseudo = linalg.pinv2(AllSampsReg)
wr = Pseudo.dot(AllSampsBiasT.dot(TargetOutputs))
Slope = -wr[1]/wr[0]
Intercept = -wr[2]/wr[0]
Domain = np.linspace(-1.1, 1.1, 60)
Disc = Slope*Domain+Intercept
matplotlib.pyplot.figure(5)
matplotlib.pyplot.plot(C1[:NSampsClass, 0], C1[:NSampsClass, 1], 'bo')
matplotlib.pyplot.plot(C2[:NSampsClass, 0], C2[:NSampsClass, 1], 'ro')
matplotlib.pyplot.plot(Domain, Disc, 'k-')
matplotlib.pyplot.ylim([-1.1,1.3])
matplotlib.pyplot.title('Ridge Regression')
matplotlib.pyplot.show()
Explanation: Now we can plot the data.
End of explanation
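### OPTIONAL CHECK (added sketch, not part of the original lesson) ###
### The pseudoinverse solution above is the ordinary least squares solution, so ###
### numpy's least-squares solver should return (numerically) the same weights. ###
w_check, _, _, _ = np.linalg.lstsq(AllSampsBias, TargetOutputs, rcond=None)
print(np.allclose(w, w_check))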
### COMPUTE OUTPUTS ON TRAINING DATA ###
yr = AllSampsBias.dot(wr)
### PLOT OUTPUTS FROM TRAINING DATA ###
matplotlib.pyplot.figure(6)
matplotlib.pyplot.plot(range(NSamps), yr, 'm')
matplotlib.pyplot.plot(range(NSamps),np.zeros((NSamps,1)), 'b')
matplotlib.pyplot.plot(range(NSamps), TargetOutputs, 'k')
matplotlib.pyplot.title('TrainingOutputs (Magenta) vs Desired Outputs (Black)')
matplotlib.pyplot.show()
y1 = y[range(NSampsClass)]
y2 = y[range(NSampsClass, NSamps)]
Corr1 = np.sum([y1>0])
Corr2 = np.sum([y2<0])
y1r = yr[range(NSampsClass)]
y2r = yr[range(NSampsClass, NSamps)]
Corr1r = np.sum([y1r>0])
Corr2r = np.sum([y2r<0])
print('Result for Ordinary Least Squares')
CorrClassRate=(Corr1+Corr2)/NSamps
print(Corr1 + Corr2, 'Correctly Classified for a ', round(100*CorrClassRate), '% Correct Classification \n')
print('Result for Ridge Regression')
CorrClassRater=(Corr1r+Corr2r)/NSamps
print(Corr1r + Corr2r, 'Correctly Classified for a ', round(100*CorrClassRater), '% Correct Classification \n')
### Make Confusion Matrices ###
NumClasses = 2;
Cm = np.zeros((NumClasses,NumClasses))
Cm[(0,0)] = Corr1/NSampsClass
Cm[(0,1)] = (NSampsClass-Corr1)/NSampsClass
Cm[(1,0)] = (NSampsClass-Corr2)/NSampsClass
Cm[(1,1)] = Corr2/NSampsClass
Cm = np.round(100*Cm)
print('Confusion Matrix for OLS Regression \n', Cm, '\n')
Cm = np.zeros((NumClasses,NumClasses))
Cm[(0,0)] = Corr1r/NSampsClass
Cm[(0,1)] = (NSampsClass-Corr1r)/NSampsClass
Cm[(1,0)] = (NSampsClass-Corr2r)/NSampsClass
Cm[(1,1)] = Corr2r/NSampsClass
Cm = np.round(100*Cm)
print('Confusion Matrix for Ridge Regression \n', Cm, '\n')
Explanation: Save this project with the name: OLSandRidgeRegress2DPGader. Make a New Project for Spectra.
End of explanation |
11,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An RNN for short-term predictions
This model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data.
<div class="alert alert-block alert-info">
Things to do
Step1: Generate fake dataset
Step2: Hyperparameters
Step3: Visualize training sequences
This is what the neural network will see during training.
Step4: The model definition
When executed, these functions instantiate the Tensorflow graph for our model.
Step5: <a name="assignment1"></a>
<div class="alert alert-block alert-info">
**Assignment #1**
Step6: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
**Assignment #2**
Step7: <a name="assignment3"></a>
<div class="alert alert-block alert-info">
**Assignment #3**
Step8: prepare training dataset
Step9: <a name="instantiate"></a>
Instantiate the model
Step10: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
Step11: The training loop
You can re-execute this cell to continue training | Python Code:
import numpy as np
import utils_datagen
import utils_display
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
Explanation: An RNN for short-term predictions
This model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data.
<div class="alert alert-block alert-info">
Things to do:<br/>
<ol start="0">
<li> Run the notebook. Initially it uses a linear model (the simplest one). Look at the RMSE (Root Mean Square Error) metrics at the end of the training and how they compare against a couple of simplistic models: random predictions (RMSErnd), predict same as last value (RMSEsal), predict based on trend from the last two values (RMSEtfl).
<li> Now implement the DNN (Dense Neural Network) model [here](#assignment1) using `tf.layers.dense`. See how it performs.
<li> Swap in the CNN (Convolutional Neural Network) model [here](#assignment2). It is already implemented in function CNN_model. See how it performs.
<li> Implement the RNN model [here](#assignment3) using a single `tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)`. See how it performs.
<li> Make the RNN cell 2-deep [here](#assignment4) using `tf.nn.rnn_cell.MultiRNNCell`. See if this improves things. Try also training for 10 epochs instead of 5.
<li> You can now go and check out the solutions in file [00_RNN_predictions_solution.ipynb](00_RNN_predictions_solution.ipynb). Its final cell has a loop that benchmarks all the neural network architectures. Try it and then if you have the time, try reducing the data sequence length from 16 to 8 (SEQLEN=8) and see if you can still predict the next value with so little context.
</ol>
</div>
End of explanation
DATA_SEQ_LEN = 1024*128
data = np.concatenate([utils_datagen.create_time_series(waveform, DATA_SEQ_LEN) for waveform in utils_datagen.Waveforms])
utils_display.picture_this_1(data, DATA_SEQ_LEN)
Explanation: Generate fake dataset
End of explanation
NB_EPOCHS = 5 # number of times the data is repeated during training
RNN_CELLSIZE = 32 # size of the RNN cells
SEQLEN = 16 # unrolled sequence length
BATCHSIZE = 32 # mini-batch size
Explanation: Hyperparameters
End of explanation
utils_display.picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences
Explanation: Visualize training sequences
This is what the neural network will see during training.
End of explanation
# three simplistic predictive models: can you beat them ?
def simplistic_models(X):
# "random" model
Yrnd = tf.random_uniform([tf.shape(X)[0]], -2.0, 2.0) # tf.shape(X)[0] is the batch size
# "same as last" model
Ysal = X[:,-1]
# "trend from last two" model
Ytfl = X[:,-1] + (X[:,-1] - X[:,-2])
return Yrnd, Ysal, Ytfl
# linear model (RMSE: 0.36, with shuffling: 0.17)
def linear_model(X):
Yout = tf.layers.dense(X, 1) # output shape [BATCHSIZE, 1]
return Yout
Explanation: The model definition
When executed, these functions instantiate the Tensorflow graph for our model.
End of explanation
# 2-layer dense model (RMSE: 0.15-0.18, if training data is not shuffled: 0.38)
def DNN_model(X):
# X shape [BATCHSIZE, SEQLEN]
# --- dummy model: please implement a real one ---
# to test it, do not forget to use this function (DNN_model) when instantiating the model
Y = X * tf.Variable(tf.ones([]), name="dummy1") # Y shape [BATCHSIZE, SEQLEN]
# --- end of dummy model ---
Yout = tf.layers.dense(Y, 1, activation=None) # output shape [BATCHSIZE, 1]. Predicting vectors of 1 element.
return Yout
Explanation: <a name="assignment1"></a>
<div class="alert alert-block alert-info">
**Assignment #1**: Implement the DNN (Dense Neural Network) model using a single `tf.layers.dense` layer. Do not forget to use the DNN_model function when [instantiating the model](#instantiate)
</div>
End of explanation
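# --- hedged sketch (added for illustration, NOT the reference solution; see the
# solution notebook mentioned above): one possible 2-layer dense model would insert
# a hidden layer before the final 1-unit regression layer, e.g.:
def DNN_model_sketch(X):                                          # X shape [BATCHSIZE, SEQLEN]
    Y = tf.layers.dense(X, RNN_CELLSIZE, activation=tf.nn.relu)   # [BATCHSIZE, RNN_CELLSIZE]
    Yout = tf.layers.dense(Y, 1, activation=None)                 # [BATCHSIZE, 1]
    return Yout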
# convolutional (RMSE: 0.31, with shuffling: 0.16)
def CNN_model(X):
X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for conv model
Y = tf.layers.conv1d(X, filters=8, kernel_size=4, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=16, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=1, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//4, 8]
# mis-using a conv layer as linear regression :-)
Yout = tf.layers.conv1d(Y, filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid") # output shape [BATCHSIZE, 1, 1]
Yout = tf.squeeze(Yout, axis=-1) # output shape [BATCHSIZE, 1]
return Yout
Explanation: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
**Assignment #2**: Swap in the CNN (Convolutional Neural Network) model. It is already implemented in function CNN_model below so all you have to do is read through the CNN_model code and then use the CNN_model function when [instantiating the model](#instantiate).
</div>
End of explanation
# RNN model (RMSE: 0.38, with shuffling 0.14, the same with loss on last 8)
def RNN_model(X, n=1):
X = tf.expand_dims(X, axis=2) # shape [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
batchsize = tf.shape(X)[0] # allow for variable batch size
# --- dummy model: please implement a real RNN model ---
# to test it, do not forget to use this function (RNN_model) when instantiating the model
Yn = X * tf.ones([RNN_CELLSIZE], name="dummy2") # Yn shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
# TODO: create a tf.nn.rnn_cell.GRUCell
# TODO: unroll the cell using tf.nn.dynamic_rnn(..., dtype=tf.float32)
# --- end of dummy model ---
# This is the regression layer. It is already implemented.
# Yn [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
Yn = tf.reshape(Yn, [batchsize*SEQLEN, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 1) # Yr [BATCHSIZE*SEQLEN, 1] predicting vectors of 1 element
Yr = tf.reshape(Yr, [batchsize, SEQLEN, 1]) # Yr [BATCHSIZE, SEQLEN, 1]
# In this RNN model, you can compute the loss on the last predicted item or the last n predicted items
# Last n with n=SEQLEN//2 is slightly better. This is a hyperparameter you can adjust in the RNN_model_N
# function below.
Yout = Yr[:,-n:SEQLEN,:] # last item(s) in sequence: output shape [BATCHSIZE, n, 1]
Yout = tf.squeeze(Yout, axis=-1) # remove the last dimension (1): output shape [BATCHSIZE, n]
return Yout
def RNN_model_N(X): return RNN_model(X, n=SEQLEN//2)
def model_fn(features, labels, model):
X = features # shape [BATCHSIZE, SEQLEN]
Y = model(X)
last_label = labels[:, -1] # last item in sequence: the target value to predict
last_labels = labels[:, -tf.shape(Y)[1]:SEQLEN] # last p items in sequence (as many as in Y), useful for RNN_model(X, n>1)
loss = tf.losses.mean_squared_error(Y, last_labels) # loss computed on last label(s)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
Yrnd, Ysal, Ytfl = simplistic_models(X)
eval_metrics = {"RMSE": tf.sqrt(loss),
# compare against three simplistic predictive models: can you beat them?
"RMSErnd": tf.sqrt(tf.losses.mean_squared_error(Yrnd, last_label)),
"RMSEsal": tf.sqrt(tf.losses.mean_squared_error(Ysal, last_label)),
"RMSEtfl": tf.sqrt(tf.losses.mean_squared_error(Ytfl, last_label))}
Yout = Y[:,-1]
return Yout, loss, eval_metrics, train_op
Explanation: <a name="assignment3"></a>
<div class="alert alert-block alert-info">
**Assignment #3**: Implement the RNN (Recurrent Neural Network) model using `tf.nn.rnn_cell.GRUCell` and `tf.nn.dynamic_rnn`. Do not forget to use the RNN_model_N function when [instantiating the model](#instantiate).</div>
<a name="assignment4"></a>
<div class="alert alert-block alert-info">
**Assignment #4**: Make the RNN cell 2-deep [here](#assignment3) using `tf.nn.rnn_cell.MultiRNNCell`. See if this improves things. Try also training for 10 epochs instead of 5. Finally, try computing the loss on the last n elements of the predicted sequence instead of only the last one (n=SEQLEN//2 for example). Do not forget to use the RNN_model_N function when [instantiating the model](#instantiate).
</div>
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 1]<br/>
Y shape [BATCHSIZE, SEQLEN, 1]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
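For reference, a hedged sketch of the unrolling step for assignments #3 and #4; it would replace the dummy block inside RNN_model:
cell = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)  # assignment #3: a single GRU cell
# cell = tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE) for _ in range(2)])  # assignment #4: 2-deep
Yn, H = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)  # Yn shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]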
End of explanation
# training to predict the same sequence shifted by one (next value)
labeldata = np.roll(data, -1)
# slice data into sequences
traindata = np.reshape(data, [-1, SEQLEN])
labeldata = np.reshape(labeldata, [-1, SEQLEN])
# also make an evaluation dataset by randomly subsampling our fake data
EVAL_SEQUENCES = DATA_SEQ_LEN*4//SEQLEN//4
joined_data = np.stack([traindata, labeldata], axis=1) # new shape is [N_sequences, 2(train/eval), SEQLEN]
joined_evaldata = joined_data[np.random.choice(joined_data.shape[0], EVAL_SEQUENCES, replace=False)]
evaldata = joined_evaldata[:,0,:]
evallabels = joined_evaldata[:,1,:]
def datasets(nb_epochs):
# Dataset API for batching, shuffling, repeating
dataset = tf.data.Dataset.from_tensor_slices((traindata, labeldata))
dataset = dataset.repeat(NB_EPOCHS)
dataset = dataset.shuffle(DATA_SEQ_LEN*4//SEQLEN) # important ! Number of sequences in shuffle buffer: all of them
dataset = dataset.batch(BATCHSIZE)
# Dataset API for batching
evaldataset = tf.data.Dataset.from_tensor_slices((evaldata, evallabels))
evaldataset = evaldataset.repeat()
evaldataset = evaldataset.batch(EVAL_SEQUENCES) # just one batch with everything
# Some boilerplate code...
# this creates a Tensorflow iterator of the correct type and shape
# compatible with both our training and eval datasets
tf_iter = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
# it can be initialized to iterate through the training dataset
dataset_init_op = tf_iter.make_initializer(dataset)
# or it can be initialized to iterate through the eval dataset
evaldataset_init_op = tf_iter.make_initializer(evaldataset)
# Returns the tensorflow nodes needed by our model_fn.
features, labels = tf_iter.get_next()
# When these nodes will be executed (sess.run) in the training or eval loop,
# they will output the next batch of data.
# Note: when you do not need to swap the dataset (like here between train/eval) just use
# features, labels = dataset.make_one_shot_iterator().get_next()
# TODO: easier with tf.estimator.inputs.numpy_input_fn ???
return features, labels, dataset_init_op, evaldataset_init_op
Explanation: prepare training dataset
End of explanation
tf.reset_default_graph() # restart model graph from scratch
# instantiate the dataset
features, labels, dataset_init_op, evaldataset_init_op = datasets(NB_EPOCHS)
# instantiate the model
Yout, loss, eval_metrics, train_op = model_fn(features, labels, linear_model)
Explanation: <a name="instantiate"></a>
Instantiate the model
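To test the other architectures from the assignments, swap the model function passed to model_fn, for example:
# Yout, loss, eval_metrics, train_op = model_fn(features, labels, DNN_model)
# Yout, loss, eval_metrics, train_op = model_fn(features, labels, RNN_model_N)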
End of explanation
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
Explanation: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
End of explanation
count = 0
losses = []
indices = []
sess.run(dataset_init_op)
while True:
try: loss_, _ = sess.run([loss, train_op])
except tf.errors.OutOfRangeError: break
# print progress
if count%300 == 0:
epoch = count // (DATA_SEQ_LEN*4//BATCHSIZE//SEQLEN)
print("epoch " + str(epoch) + ", batch " + str(count) + ", loss=" + str(loss_))
if count%10 == 0:
losses.append(np.mean(loss_))
indices.append(count)
count += 1
# final evaluation
sess.run(evaldataset_init_op)
eval_metrics_, Yout_ = sess.run([eval_metrics, Yout])
print("Final accuracy on eval dataset:")
print(str(eval_metrics_))
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value(s) for scaling
plt.plot(indices, losses)
plt.show()
# execute multiple times to see different sample sequences
utils_display.picture_this_3(Yout_, evaldata, evallabels, SEQLEN)
Explanation: The training loop
You can re-execute this cell to continue training
End of explanation |
11,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding a Data Set to pods
Open Data Science Initiative
28th May 2014 Neil D. Lawrence
Adding a data set to GPy should be done in two stages. Firstly, you need to edit the data_resources.json file to provide information about where to download the data from and what the license and citation information for the data is. Then you can edit the datasets.py file, located in GPy.util to load in the data and perform any preprocessing, before returning the data set to the user in the standard dictionary format.
Step 1
Step1: The function name allows users to call data = GPy.util.datasets.boston_housing() to acquire the data. You should use a name that makes it clear to readers of the code what is going on. The data set name is passed to the function as a default argument. This name corresponds to the entry in the json file.
The next two lines call the function data_available() to check if the data set is already in the cache. If the data set is not there, then download_data(), which handles the interface with the user for downloading the data is called.
The location of the cached data can be determined through the configuration file. By default it is set to be in a temporary directory under your home directory
Step2: Optional Step 3
Step3: Now we can access the same data set, but this time, because we have the data in cache no download is performed.
Step4: For this version of the data set we can check that the response variables have been normalized. | Python Code:
def boston_housing(data_set='boston_housing'):
if not data_available(data_set):
download_data(data_set)
all_data = np.genfromtxt(os.path.join(data_path, data_set, 'housing.data'))
X = all_data[:, 0:13]
Y = all_data[:, 13:14]
return data_details_return({'X' : X, 'Y': Y}, data_set)
Explanation: Adding a Data Set to pods
Open Data Science Initiative
28th May 2014 Neil D. Lawrence
Adding a data set to GPy should be done in two stages. Firstly, you need to edit the data_resources.json file to provide information about where to download the data from and what the license and citation information for the data is. Then you can edit the datasets.py file, located in GPy.util to load in the data and perform any preprocessing, before returning the data set to the user in the standard dictionary format.
Step 1: Editing data_resources.json
A json file is a simple way of storing a python dictionary in a format that is interchangeable with other languages. This file is loaded in at the beginning of datasets.py to provide information on where he data set is located, what its licensing terms are and any other standard details about the data. You can use any json editor to edit the file. You can also use a standard text editor, but be careful not to damage the format of the file! If you do damage the format, there are various on line json format checkers you can use to try and recover the file.
The file consists of a comma separated list of dictionary entries, where each dictionary entry corresponds to a single data set. The entry for the Boston Housing data starts with the data set name, followed by six fields describing the data set; a sketch of what such an entry looks like is given after the field list below.
* url The download url location of the data. This is provided as a list of urls. Just in case several different locations need to be visited. Here the list contains only one element.
* files This is a list of lists. Each list contains the files to be downloaded from the corresponding url. Here there are three files required from the first (and only) url.
* details Some helpful information for the user about the data.
* citation The citation to use when publishing on the data. If you use a data set you should always cite its origin.
* size The total size of the data, so the user knows how much disk space it will take when it's all downloaded.
* license The license terms for the data. Many data sets have a license associated. Don't include data sets in this collection that don't permit their inclusion. There don't appear to be any license constraints for the use of the Boston housing data, so in this case this value is set to null.
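As a rough illustration only (the URL, the file names other than housing.data, the size, and the citation below are placeholders, not the real file contents), an entry has roughly this shape. Here it is written as a Python dictionary, with None standing in for JSON's null:
boston_entry_sketch = {
    "boston_housing": {
        "url": ["http://example.com/boston_housing/"],
        "files": [["housing.data", "housing.names", "Index"]],
        "details": "Boston house price data.",
        "citation": "Harrison and Rubinfeld, 1978.",
        "size": 50000,
        "license": None,
    }
}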
Step 2: Including the Data in datasets.py
The data_resources.json file includes all the information about how to download the data. Now in datasets.py we write a short dataset recovery function to execute the download and return the data to the user. It has the following form:
End of explanation
import pods
data = pods.datasets.boston_housing()
Explanation: The function name allows users to call data = GPy.util.datasets.boston_housing() to acquire the data. You should use a name that makes it clear to readers of the code what is going on. The data set name is passed to the function as a default argument. This name corresponds to the entry in the json file.
The next two lines call the function data_available() to check if the data set is already in the cache. If the data set is not there, then download_data(), which handles the interface with the user for downloading the data is called.
The location of the cached data can be determined through the configuration file. By default it is set to be in a temporary directory under your home directory: tmp/GPy-datasets. But you can change this by creating your own configuration file in your home directory, .gpy_user.cfg, or by editing the configuration file for your GPy installation, installation.cfg. See this notebook for details on the config file.
The final line, data_details_return, returns the dictionary of information loaded in from data_resources.json alongside the dictionary we've just constructed. The dictionary we return to the user is in a standard format with entries X and Y for the covariates and response variables.
Now things should be ready for you to download the data!
End of explanation
from pods.datasets import *
import numpy as np
def boston_housing_preprocess(data_set='boston_housing'):
if not data_available(data_set):
download_data(data_set)
all_data = np.genfromtxt(os.path.join(data_path, data_set, 'housing.data'))
X = all_data[:, 0:13]
Y = all_data[:, 13:14]
Y = (Y - np.mean(Y))/np.std(Y)
return data_details_return({'X' : X, 'Y': Y,
'info' : 'The response variables have been preprocessed to have zero mean and unit standard deviation'
}, data_set)
Explanation: Optional Step 3: Preprocessing
In the above we haven't performed any preprocessing of the data. What if we want to preprocess the data before giving it to the user? We can write a different, additional, version of the data set recovery function for providing a different preprocessing. Here we preprocess the Y values to be zero mean and unit standard deviation.
End of explanation
data = boston_housing_preprocess()
Explanation: Now we can access the same data set, but this time, because we have the data in cache no download is performed.
End of explanation
print('Mean: ', data['Y'].mean())
print('Standard deviation ', data['Y'].std())
Explanation: For this version of the data set we can check that the response variables have been normalized.
End of explanation |
11,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the nlputils library
In this iPython Notebook are some examples of how various parts of the nlputils library can be used with text and other datasets. General knowledge of common machine learning algorithms and NLP is assumed.
We will show how to transform a text corpus into tf-idf features and classify the documents with standard sklearn classifiers as well as the knn classifier part of this library. Additionally, the corpus will be visualized in two dimensions.
Step1: Analyzing a Text Corpus
In this section, we explain how to transform text documents into feature vectors and how these vectors can be used to classify the documents as well as visualize the corpus in 2D.
Transforming text into features
This first part shows how to use the FeatureTransform class and the features2mat function from nlputils.features to transform text documents into tf-idf features.
The 20 newsgroups dataset
The first step is to get some data we can work with. For this we use the 20 newsgroups text corpus.
We subsample 7 of the 20 categories and remove meta information such as headers to avoid overfitting.
Step2: The text documents have to be stored in a dictionary, where each document has an id mapping to its text. The corresponding categories of the documents are saved in a dictionary as well. Additionally, we create a mapping from the category number to its description.
Step3: Generating tf-idf features
Now that we have the data in the format expected by the library, we can transform the texts into tf-idf feature vectors. For this we use the FeatureTransform class from nlputils.features.
The feature extraction process is summarized in the figure below
Step4: The initialized FeatureTransform instance can also be applied to other text dictionaries later on (e.g. if more test examples become available). The first time texts2features is called, the idf weights are computed and a list with acceptable bigrams is generated (possibly based on only the documents listed in the fit_ids). These are then used when computing the document features in successive calls to the function.
To use the tf-idf features e.g. with sklearn classifiers, they should be arranged in a feature matrix instead of in dictionaries. To do this, nlputils.features provides the features2mat function. It returns a sparse (csr) matrix, where each row represents a document and in the columns are the counts of the individual words. The function should first be applied to the training data, then the returned featurenames (a list indicating which word is represented in which column) should be included when computing the matrix for the test data to ensure the feature dimensions of training and test data are the same.
Step5: Classifying with sklearn (Linear SVM and Logistic Regression)
Let's see how these features work with sklearn to solve the classification problem. We'll try logistic regression and a linear SVM with the feature matrices we have just generated for the corpus.
Step6: The results with logistic regression look pretty good already. What about linear SVM?
Step7: Ok, that's slightly worse than with logistic regression. Of course the classifiers themselves have many hyperparameters which can be tuned to get better results. Instead we'll recompute the features and renormalize them with the vector length. SVM relies on the kernel computation, which in this case is the scalar product of the feature vectors. When these vectors are normalized by their length, this product is the cosine similarity of the documents, a popular similarity measure often used in information retrieval settings.
Step8: Indeed, now we're better than with logistic regression! What if we use the same features with the LogReg classifier?
Step9: For logistic regression the results are worse with these features.
Always try different parameter settings when computing the tf-idf features and remember that what constitutes optimal features can vary for different classifiers! It's generally recommended to use idf weighting, however it can make a difference if you use binary or count features (norm='binary' or 'max') and always experiment at least with the 'length' and 'max' renormalization.
Classifying with k-nearest neighbors (knn)
nlputils.knn_classifier contains an adaptive weighted k-nearest neighbor implementation based on [1], which can be used to classify the documents as well. The knn classifier is directly applied to a similarity matrix with similarity scores for every new test document to all existing training documents (for which we know the labels). This means there is no training of the classifier involved (it is a lazy learning algorithm), we simply pick the label of a new document based on the most common label of the training documents most similar to it.
The pairwise similarities of the documents can be computed with functions from nlputils.simmat, which generate a similarity matrix using one of the similarity functions defined in nlputils.simcoefs. Please note that for some of the similarity functions (e.g. linear kernel) it is possible to compute the whole similarity matrix efficiently with matrix products while for others all entries of the matrix have to be computed individually based on the feature dictionaries, which can increase the computing time dramatically for large datasets (it scales roughly $O(n^2)$).
[1] Baoli, Li, Lu Qin, and Yu Shiwen. "An adaptive k-nearest neighbor text categorization strategy." ACM Transactions on Asian Language Information Processing (TALIP) 3.4 (2004)
Step10: As knn classification is a very simple and straightforward approach, it might not come as a surprise that the results are worse than with SVM or LogReg. But of course again different parameter settings (especially concerning the type of similarity computed between the documents) should be explored. Note that the knn classification itself is pretty fast, it's the computation of the similarity matrix which can take a while. But once the matrix is available, you can easily tune the hyperparameters of the classifier as there is no retraining necessary. And of course, as we will show later, this classifier can be applied to more than just text data, all it needs is a similarity matrix to pick the most likely categories, but this similarity matrix does not necessarily have to be computed with the corresponding nlputils.simmat function.
Visualizing the Corpus in 2D
To get an overview of a dataset, it is often helpful to plot it. Since we're dealing with a very high dimensional feature space here, a dimensionality reduction method has to be applied to embed the data in two dimensions. We'll show example visualizations created with the classical scaling (kPCA) and t-SNE implementations found in nlputils.embedding.
Both embedding methods take as input a precomputed similarity matrix, which can again be computed with nlputils.simmat. We simply use the linear similarity between the documents here, but other similarity coefficients are worth exploring.
To create the actual plot of the embedding, we use a function from nlputils.visualize.
Classical Scaling
Step11: While classical scaling/kPCA works well as a general dimensionality reduction method (if you want to reduce the dimensions from a few thousand to, say, 50, while retaining most of the data's variance), it doesn't really create very pretty two dimensional plots. So lets try t-SNE instead.
t-SNE
To embed the dataset with t-SNE, we have to compute a pairwise distance matrix of the data and then transform into a special similarity matrix based on a certain perplexity value (roughly how many nearest neighbors a point is assumed to have), as described in the original paper [2].
For the text dataset, we first compute the angular distance matrix of the dataset, and then transform it into a similarity matrix with the dist2kernel function from nlputils.simmat.
[2] Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9.Nov (2008)
Step12: This looks more interesting than the classical scaling embedding and documents belonging to the same category are forming clusters.
However, note that we've here used the angular distance, not the euclidean distance which t-SNE uses by default. The angular distance is related to the cosine similarity and works well for the sparse text features. Just for comparison, below is the default t-SNE embedding based on the euclidean distances created with the sklearn implementation.
Step13: Using the nlputils library with other kinds of data
While nlputils was devised with text data in mind, many of its functions can be applied to other kinds of data as well. For the following examples we use a dataset with images of handwritten digits, first visualize it in 2D and then classify it again with sklearn classifiers as well as knn.
Step14: Visualize with kPCA and t-SNE
We visualize the dataset in 2D to show that the nlputils.embedding functions can produce the same results as corresponding sklearn implementations.
Kernel PCA and Classical Scaling
Both just perform the eigendecomposition of a centered kernel matrix.
Step15: t-SNE
The sklearn implementation of t-SNE takes as input the original data and by default computes a similarity matrix for you based on a specified perplexity value and the euclidean distances between the data points. The nlputils t-SNE function on the other hand takes as input a precomputed similarity matrix. To get results comparable to the sklearn implementation, we use the euclidean distance here as well to compute the pairwise distance matrix, which is then transformed into the similarity matrix with the dist2kernel function.
Please note that the optimization process of t-SNE is non-convex and the results depend on the random initialization of the coordinates, therefore the solutions of both t-SNE implementations will never quite look the same.
Step16: Classify with SVM and knn
Next, we classify the images of the handwritten digits using the SVM classifier from sklearn and the knn classifier from nlputils.
For this we first have to split the data into training and test folds. | Python Code:
from __future__ import unicode_literals, division, print_function, absolute_import
from builtins import str, range
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import offsetbox
from scipy.spatial.distance import pdist, squareform
from sklearn.datasets import fetch_20newsgroups, load_digits
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import LogisticRegression as logreg
import sklearn.metrics as skmet
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA, KernelPCA
# transform documents into tf-idf features
from nlputils.features import FeatureTransform, features2mat
# compute similarity matrices
from nlputils.simmat import compute_K_map, compute_K, dist2kernel
# classify with knn classifier
from nlputils.knn_classifier import knn, get_labels
# embed in 2D and plot
from nlputils.embedding import proj2d, tsne_sim, classical_scaling
from nlputils.visualize import basic_viz, get_colors
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: Using the nlputils library
In this iPython Notebook are some examples of how various parts of the nlputils library can be used with text and other datasets. General knowledge of common machine learning algorithms and NLP is assumed.
We will show how to transform a text corpus into tf-idf features and classify the documents with standard sklearn classifiers as well as the knn classifier part of this library. Additionally, the corpus will be visualized in two dimensions.
End of explanation
# load the data
categories = [
"comp.graphics",
"rec.autos",
"rec.sport.baseball",
"sci.med",
"sci.space",
"soc.religion.christian",
"talk.politics.guns"
]
newsgroups_train = fetch_20newsgroups(subset='train', remove=(
'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)
newsgroups_test = fetch_20newsgroups(subset='test', remove=(
'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)
Explanation: Analyzing a Text Corpus
In this section, we explain how to transform text documents into feature vectors and how these vectors can be used to classify the documents as well as visualize the corpus in 2D.
Transforming text into features
This first part shows how to use the FeatureTransform class and the features2mat function from nlputils.features to transform text documents into tf-idf features.
The 20 newsgroups dataset
The first step is to get some data we can work with. For this we use the 20 newsgroups text corpus.
We subsample 7 of the 20 categories and remove meta information such as headers to avoid overfitting.
End of explanation
# create a dictionary mapping a document id to its text (if the text contains more than 3 words)
textdict = {i: t for i, t in enumerate(newsgroups_train.data) if len(t.split()) > 3}
textdict.update({i: t for i, t in enumerate(newsgroups_test.data, len(newsgroups_train.data)) if len(t.split()) > 3})
# similarly map the document ids to their category labels
doccats = {i: c for i, c in enumerate(newsgroups_train.target) if i in textdict}
doccats.update({i: c for i, c in enumerate(newsgroups_test.target, len(newsgroups_train.target)) if i in textdict})
# remember which ids map to training and test data
train_ids = [i for i in range(len(newsgroups_train.data)) if i in textdict]
test_ids = [i for i in range(len(newsgroups_train.data), len(textdict)) if i in textdict]
# and create a mapping from the category ids to their descriptions
catdesc = {i: d for i, d in enumerate(newsgroups_train.target_names)}
print("%i training and %i test samples" % (len(train_ids), len(test_ids)))
Explanation: The text documents have to be stored in a dictionary, where each document has an id mapping to its text. The corresponding categories of the documents are saved in a dictionary as well. Additionally, we create a mapping from the category number to its description.
End of explanation
# transform into tf-idf features
ft = FeatureTransform(norm='max', weight=True, renorm='max')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
Explanation: Generating tf-idf features
Now that we have the data in the format expected by the library, we can transform the texts into tf-idf feature vectors. For this we use the FeatureTransform class from nlputils.features.
The feature extraction process is summarized in the figure below:
<img src="feature_extraction.png" width="600">
In a first step, the texts are preprocessed, for which we can choose whether to transform the texts to lowercase (by default to_lower=True) and whether to normalize numbers (i.e. replace all digits with '1's, which still allows you to distinguish e.g. dates from phone numbers but otherwise creates more overlap between the texts since you can assume the exact phone number doesn't matter; again by default norm_num=True).
Additionally, we can identify bigrams in the texts and treat them as single units instead of splitting these words up later when counting the term frequencies. Instead of just taking all combinations of two words occurring in the texts, thereby vastly increasing the dimensionality of the feature space, for each bigram a score is computed depending on how often the two words occur together in this combination and only relevant bigrams are included as features (by default identify_bigrams=True).
The next step is the actual transformation of the texts into tf-idf features. First, the term frequencies of each text are counted; these counts can be normalized with the norm option: basically the choice here is whether to only use binary features (norm='binary') or the actual term frequencies (e.g. with norm='max'). Then, the term frequencies can be weighted by their idf scores (by default weight=True). As a last step, the weighted counts can be renormalized, e.g. by dividing them by the maximum value or the length of the vector (by default renorm='length'). All possible combinations of normalization, weighting, and renormalization are illustrated below:
<img src="feature_normalization.png" width="600">
While the feature extraction process does not make use of any category labels, it is still advised to avoid possible overfitting by only using the training documents for identifying relevant bigrams and computing the idf weights. This can be done by passing the ids of the training documents as fit_ids when transforming the texts into features.
The result of the feature extraction process is a dictionary with the document ids as keys and then for every text a dictionary mapping the words occurring in it to the (weighted, normalized) counts. (Note: these feature dictionaries are sparse, i.e. only contain the tf-idf features for words actually occurring in the document).
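For comparison, a couple of alternative settings built from the options described above (a sketch, not a recommendation):
ft_binary = FeatureTransform(norm='binary', weight=True, renorm='length')  # binary counts, idf weighting, length renormalization
ft_no_idf = FeatureTransform(norm='max', weight=False, renorm='max')       # raw counts without idf weighting
docfeats_binary = ft_binary.texts2features(textdict, fit_ids=train_ids)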
End of explanation
# save in matrix form
X, featurenames = features2mat(docfeats, train_ids)
X_test, _ = features2mat(docfeats, test_ids, featurenames)
print("%i features" % len(featurenames))
Explanation: The initialized FeatureTransform instance can also be applied to other text dictionaries later on (e.g. if more test examples become available). The first time texts2features is called, the idf weights are computed and a list with acceptable bigrams is generated (possibly based on only the documents listed in the fit_ids). These are then used when computing the document features in successive calls to the function.
To use the tf-idf features e.g. with sklearn classifiers, they should be arranged in a feature matrix instead of in dictionaries. To do this, nlputils.features provides the features2mat function. It returns a sparse (csr) matrix, where each row represents a document and in the columns are the counts of the individual words. The function should first be applied to the training data, then the returned featurenames (a list indicating which word is represented in which column) should be included when computing the matrix for the test data to ensure the feature dimensions of training and test data are the same.
End of explanation
# transform the labels in the doccats dictionary back to vectors required by sklearn
y_train = [doccats[tid] for tid in train_ids]
y_test = [doccats[tid] for tid in test_ids]
# train the logistic regression classifier
clf = logreg(class_weight='balanced', random_state=1)
clf.fit(X, y_train)
# predict the labels of the test set
y_pred = list(clf.predict(X_test))
# report the F1 score
print("F1-score: %.5f" % skmet.f1_score(y_test, y_pred, average='micro'))
# plot the confusion matrix to get more insights into the classifier mistakes
labels = sorted(catdesc.keys())
confmat = np.array(skmet.confusion_matrix(y_test, y_pred, labels), dtype=float)
# normalize by # true labels
confmat /= np.tile(np.array([np.sum(confmat, axis=1)]).T, (1, confmat.shape[1]))
plt.figure()
plt.imshow(confmat, interpolation='nearest')
plt.xticks(list(range(len(labels))), [catdesc[i] for i in labels], rotation=90)
plt.yticks(list(range(len(labels))), [catdesc[i] for i in labels])
plt.xlabel("Prediction")
plt.ylabel("True Label")
plt.title("Confusion Matrix", fontsize=16)
plt.clim([0, 1])
plt.colorbar()
Explanation: Classifying with sklearn (Linear SVM and Logistic Regression)
Let's see how these features work with sklearn to solve the classification problem. We'll try logistic regression and a linear SVM with the feature matrices we have just generated for the corpus.
End of explanation
# train the linear SVM
clf = LinearSVC(class_weight='balanced', random_state=1)
clf.fit(X, y_train)
# predict the labels of the test set
y_pred = list(clf.predict(X_test))
# report the F1 score
print("F1-score: %.5f" % skmet.f1_score(y_test, y_pred, average='micro'))
Explanation: The results with logistic regression look pretty good already. What about linear SVM?
End of explanation
# recompute features
ft = FeatureTransform(norm='max', weight=True, renorm='length')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
X, featurenames = features2mat(docfeats, train_ids)
X_test, _ = features2mat(docfeats, test_ids, featurenames)
# train SVM and predict
clf = LinearSVC(class_weight='balanced', random_state=1)
clf.fit(X, y_train)
y_pred = list(clf.predict(X_test))
print("F1-score: %.5f" % skmet.f1_score(y_test, y_pred, average='micro'))
Explanation: Ok, that's slightly worse than with logistic regression. Of course the classifiers themselves have many hyperparameters which can be tuned to get better results. Instead we'll recompute the features and renormalize them with the vector length. SVM relies on the kernel computation, which in this case is the scalar product of the feature vectors. When these vectors are normalized by their length, this product is the cosine similarity of the documents, a popular similarity measure often used in information retrieval settings.
End of explanation
clf = logreg(class_weight='balanced', random_state=1)
clf.fit(X, y_train)
y_pred = list(clf.predict(X_test))
print("F1-score: %.5f" % skmet.f1_score(y_test, y_pred, average='micro'))
Explanation: Indeed, now we're better than with logistic regression! What if we use the same features with the LogReg classifier?
End of explanation
# transform texts into features
ft = FeatureTransform(norm='max', weight=True, renorm='length')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
# compute the similarity matrix between training and test documents (the kernel map).
# For this we use the linear kernel, which, since we have length normalized
# feature vectors, is the cosine similarity between the documents.
# shape: len(test_ids) x len(train_ids)
K_map = compute_K_map(train_ids, test_ids, docfeats, sim='linear')
# restructure the doccats dictionary to contain a list of (single) categories
# The knn classifier is set up to be able to select multiple plausible categories
# per document (or none if it's too unsure about any of them), for this, the labels
# need to be arranged in lists.
doccats_knn = {tid:[c] for tid, c in doccats.items()}
# apply knn to get a score for every document and category, indicating how likely it
# is that this document belongs to this category.
# Different parameter settings might again yield better results.
likely_cat = knn(K_map, train_ids, test_ids, doccats_knn, k=15, adapt=True, alpha=5, weight=True)
# transform the category scores into actual labels for each document
# For this we can directly give the scores to the get_labels function.
# With the threshold set to 'max', only the most likely label is chosen,
# otherwise all categories scoring above the threshold (or none if
# scores for all categories for this document are too low) are chosen.
# Using a high threshold instead of 'max' might be helpful if the
# label should only be assigned automatically when the classifier is
# very sure about its decision and otherwise a human could be consulted.
labels = get_labels(likely_cat, threshold='max')
# transform the returned labels dict into a vector and compute the F1 score
y_pred = [labels[tid][0] for tid in test_ids]
print("F1-score: %.5f" % skmet.f1_score(y_test, y_pred, average='micro'))
Explanation: For logistic regression the results are worse with these features.
Always try different parameter settings when computing the tf-idf features and remember that what constitutes optimal features can vary for different classifiers! It's generally recommended to use idf weighting, however it can make a difference if you use binary or count features (norm='binary' or 'max') and always experiment at least with the 'length' and 'max' renormalization.
Classifying with k-nearest neighbors (knn)
nlputils.knn_classifier contains an adaptive weighted k-nearest neighbor implementation based on [1], which can be used to classify the documents as well. The knn classifier is directly applied to a similarity matrix with similarity scores for every new test document to all existing training documents (for which we know the labels). This means there is no training of the classifier involved (it is a lazy learning algorithm), we simply pick the label of a new document based on the most common label of the training documents most similar to it.
The pairwise similarities of the documents can be computed with functions from nlputils.simmat, which generate a similarity matrix using one of the similarity functions defined in nlputils.simcoefs. Please note that for some of the similarity functions (e.g. linear kernel) it is possible to compute the whole similarity matrix efficiently with matrix products while for others all entries of the matrix have to be computed individually based on the feature dictionaries, which can increase the computing time dramatically for large datasets (it scales roughly $O(n^2)$).
[1] Baoli, Li, Lu Qin, and Yu Shiwen. "An adaptive k-nearest neighbor text categorization strategy." ACM Transactions on Asian Language Information Processing (TALIP) 3.4 (2004): 215-226.
End of explanation
# transform texts into features
ft = FeatureTransform(norm='max', weight=True, renorm='max')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
# compute linear similarity matrix of the training data
K = compute_K(train_ids, docfeats, 'linear')
# compute embedding (by default, the proj2d wrapper function computes the
# t-SNE embedding; force it to fall back to classical scaling instead)
x, y = proj2d(K, use_tsne=False)
# use the obtained coordinates to create the plot
# (don't forget to pass the same ids as used for computing K so the x and y
# values can be associated with the right category labels for coloring)
basic_viz(train_ids, doccats, x, y, catdesc, 'Classical Scaling')
Explanation: As knn classification is a very simple and straightforward approach, it might not come as a surprise that the results are worse than with SVM or LogReg. But of course again different parameter settings (especially concerning the type of similarity computed between the documents) should be explored. Note that the knn classification itself is pretty fast, it's the computation of the similarity matrix which can take a while. But once the matrix is available, you can easily tune the hyperparameters of the classifier as there is no retraining necessary. And of course, as we will show later, this classifier can be applied to more than just text data, all it needs is a similarity matrix to pick the most likely categories, but this similarity matrix does not necessarily have to be computed with the corresponding nlputils.simmat function.
Visualizing the Corpus in 2D
To get an overview of a dataset, it is often helpful to plot it. Since we're dealing with a very high dimensional feature space here, a dimensionality reduction method has to be applied to embed the data in two dimensions. We'll show example visualizations created with the classical scaling (kPCA) and t-SNE implementations found in nlputils.embedding.
Both embedding methods take as input a precomputed similarity matrix, which can again be computed with nlputils.simmat. We simply use the linear similarity between the documents here, but other similarity coefficients are worth exploring.
To create the actual plot of the embedding, we use a function from nlputils.visualize.
Classical Scaling
End of explanation
# transform texts into features
ft = FeatureTransform(norm='max', weight=True, renorm='length')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
# compute angular distance matrix of the training data
K = compute_K(train_ids, docfeats, 'angulardist')
# and transform it into a similarity matrix
K = dist2kernel(K, 30)
# compute embedding and plot
x, y = proj2d(K, use_tsne=True, verbose=False)
basic_viz(train_ids, doccats, x, y, catdesc, 't-SNE')
Explanation: While classical scaling/kPCA works well as a general dimensionality reduction method (if you want to reduce the dimensions from a few thousand to, say, 50, while retaining most of the data's variance), it doesn't really create very pretty two dimensional plots. So lets try t-SNE instead.
t-SNE
To embed the dataset with t-SNE, we have to compute a pairwise distance matrix of the data and then transform into a special similarity matrix based on a certain perplexity value (roughly how many nearest neighbors a point is assumed to have), as described in the original paper [2].
For the text dataset, we first compute the angular distance matrix of the dataset, and then transform it into a similarity matrix with the dist2kernel function from nlputils.simmat.
[2] Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9.Nov (2008): 2579-2605.
End of explanation
# transform features into matrix
X, _ = features2mat(docfeats, train_ids)
# use t-SNE from sklearn
e_tsne = TSNE(n_components=2, random_state=1, method='exact', perplexity=30)
X_embed = e_tsne.fit_transform(X)
basic_viz(train_ids, doccats, X_embed[:,0], X_embed[:,1], catdesc, 't-SNE (sklearn)')
Explanation: This looks more interesting than the classical scaling embedding and documents belonging to the same category are forming clusters.
However, note that we've here used the angular distance, not the euclidean distance which t-SNE uses by default. The angular distance is related to the cosine similarity and works well for the sparse text features. Just for comparison, below is the default t-SNE embedding based on the euclidean distances created with the sklearn implementation.
End of explanation
# load digits dataset
digits = load_digits()
X = digits.data
X /= float(X.max())
y = digits.target
n_samples, n_features = X.shape
# define a plotting function for later
def plot_digits(X, digits, title=None, plot_box=True):
colorlist = get_colors(10)
# Scale and visualize the embedding vectors
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
plt.text(X[i, 0], X[i, 1], str(digits.target[i]),
color=colorlist[digits.target[i]],
fontdict={'weight': 'medium', 'size': 9})
if plot_box and hasattr(offsetbox, 'AnnotationBbox'):
# only print thumbnails with matplotlib > 1.0
shown_images = np.array([[1., 1.]]) # just something big
for i in range(digits.data.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-2:
# don't show points that are too close
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r),
X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
if title is not None:
plt.title(title, fontsize=16)
Explanation: Using the nlputils library with other kinds of data
While nlputils was devised with text data in mind, many of its functions can be applied to other kinds of data as well. For the following examples we use a dataset with images of handwritten digits, first visualize it in 2D and then classify it again with sklearn classifiers as well as knn.
End of explanation
# determine a good gamma for the rbf kernel
D = squareform(pdist(X, 'euclidean'))
gamma = 1./(np.median(D)**2)
# Gaussian kernel PCA
e_gkpca = KernelPCA(n_components=2, kernel='rbf', gamma=gamma)
X_embed = e_gkpca.fit_transform(X)
plot_digits(X_embed, digits, title='Kernel PCA (sklearn)')
# classical scaling with rbf kernel
K_rbf = rbf_kernel(X, X, gamma)
X_embed = classical_scaling(K_rbf)
plot_digits(X_embed, digits, title='Classical Scaling (nlputils)')
Explanation: Visualize with kPCA and t-SNE
We visualize the dataset in 2D to show that the nlputils.embedding functions can produce the same results as corresponding sklearn implementations.
Kernel PCA and Classical Scaling
Both just perform the eigendecomposition of a centered kernel matrix.
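For intuition, a plain numpy sketch of that computation (not the library code), using the K_rbf matrix from the cell above:
n = K_rbf.shape[0]
H = np.eye(n) - np.ones((n, n)) / n                        # centering matrix
evals, evecs = np.linalg.eigh(H.dot(K_rbf).dot(H))         # eigenvalues in ascending order
X_2d = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0))  # top two components give the 2D coordinates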
End of explanation
# t-SNE from sklearn (uses the original data and computes the similarity matrix for you)
e_tsne = TSNE(n_components=2, random_state=1, method='exact', perplexity=30)
X_embed = e_tsne.fit_transform(X)
plot_digits(X_embed, digits, title='t-SNE (sklearn)')
# compute the euclidean distance matrix
D = squareform(pdist(X, 'euclidean'))
# transform it into a similarity matrix with perplexity 30
K_tsne = dist2kernel(D, perp=30)
# embed with t-SNE and plot
X_embed = tsne_sim(K_tsne, verbose=False)
plot_digits(X_embed, digits, title='t-SNE (nlputils)')
Explanation: t-SNE
The sklearn implementation of t-SNE takes as input the original data and by default computes a similarity matrix for you based on a specified perplexity value and the euclidean distances between the data points. The nlputils t-SNE function on the other hand takes as input a precomputed similarity matrix. To get results comparable to the sklearn implementation, we use the euclidean distance here as well to compute the pairwise distance matrix, which is then transformed into the similarity matrix with the dist2kernel function.
Please note that the optimization process of t-SNE is non-convex and the results depend on the random initialization of the coordinates, therefore the solutions of both t-SNE implementations will never quite look the same.
End of explanation
# randomly split data into training (80%) and test (20%) folds
np.random.seed(42)
n_test = int(0.2*n_samples)
rnd_idx = np.random.permutation(X.shape[0])
X_test, y_test = X[rnd_idx[:n_test],:], y[rnd_idx[:n_test]]
X, y = X[rnd_idx[n_test:],:], y[rnd_idx[n_test:]]
# classify with rbf SVM (sklearn)
clf = SVC(gamma=gamma, random_state=1)
clf.fit(X, y)
y_pred = list(clf.predict(X_test))
print("Accuracy: %.5f" % skmet.accuracy_score(y_test, y_pred))
# classify with knn (nlputils)
# compute the similarity matrix (we'll also use the rbf kernel here)
K_map = rbf_kernel(X_test, X, gamma)
# construct a doccats dictionary with a list of (single) class labels
doccats_knn = {tid:[c] for tid, c in enumerate(y)}
# for the train and test labels we just use the range of numbers
train_ids = list(range(len(y)))
test_ids = list(range(len(y), n_samples))
# apply knn to get a score for every document and category, indicating how likely it
# is that this document belongs to this category.
likely_cat = knn(K_map, train_ids, test_ids, doccats_knn, k=15, adapt=True, alpha=5, weight=True)
# transform the category scores into actual labels for each document
labels = get_labels(likely_cat, threshold='max')
# transform the returned labels dict into a vector and compute the accuracy
y_pred = [labels[tid][0] for tid in test_ids]
print("Accuracy: %.5f" % skmet.accuracy_score(y_test, y_pred))
Explanation: Classify with SVM and knn
Next, we classify the images of the handwritten digits using the SVM classifier from sklearn and the knn classifier from nlputils.
For this we first have to split the data into training and test folds.
End of explanation |
11,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Marvin Maps
Marvin Maps is how you deal with the DAP MAPS FITS files easily. You can retrieve maps in several ways. Let's take a look.
From a Marvin Maps
Marvin Maps takes the same inputs as cube
Step1: Once you have a maps object, you can access the raw maps file and header and extensions via maps.header and maps.data. Alternatively, you can access individual maps using the getMap method. getMap works by specifying a parameter and a channel. The parameter and channels names are equivalent to those found in the MAPS FITS extensions and headers, albeit lowercased.
Step2: We can easily plot the map using the internal plot function. Currently maps are plotted using some default Matplotlib color schemes and scaling.
Step3: Try Yourself Now try grabbing and plotting the map for stellar velocity in the cell below.
You can access the individual values, ivar, and mask for your map via the .value, .ivar, and .mask attributes. These are 2d-array numpy arrays.
Step4: Let's replot the Halpha flux map but exclude all regions that have a non-zero mask. We need the numpy Python package for this.
Step5: From the maps object, we can also easily plot the ratio between two maps, e.g. emission-line ratios, using the getMapRatio method. Map ratios are Map objects the same as any other, so you can access their array values or plot them
Step6: Try Yourself Modify the above to display the map for the emission-line ratio OIII/Hbeta
From a Marvin Cube
Step7: Once we have a cube, we can get its maps using the getMaps method. getMaps is just a wrapper to the Marvin Maps Tool. Once we have the maps, we can do all the same things as before. | Python Code:
# import the maps
from marvin.tools.maps import Maps
# Load a MPL-5 map
mapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'
# Let's get a default map of
maps = Maps(filename=mapfile)
print(maps)
Explanation: Marvin Maps
Marvin Maps is how you deal with the DAP MAPS FITS files easily. You can retrieve maps in several ways. Let's take a look.
From a Marvin Maps
Marvin Maps takes the same inputs as cube: filename, plateifu, or mangaid. It also accepts keywords bintype and template_kin. These uniquely define a DAP MAPS file. By default, Marvin will load a MAPS file of bintype=SPX and template_kin=GAU-MILESHC for MPL-5. For MPL-4, the defaults are bintype=NONE, and template_kin=MIUSCAT-THIN.
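Alternatively (a sketch, assuming remote data access is configured for your Marvin installation), a map can be loaded by plate-ifu with explicit keywords:
# maps = Maps(plateifu='8485-1901', bintype='SPX', template_kin='GAU-MILESHC')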
End of explanation
# Let's grab the H-alpha flux emission line map
haflux = maps.getMap('emline_gflux', channel='ha_6564')
print(haflux)
Explanation: Once you have a maps object, you can access the raw maps file and header and extensions via maps.header and maps.data. Alternatively, you can access individual maps using the getMap method. getMap works by specifying a parameter and a channel. The parameter and channels names are equivalent to those found in the MAPS FITS extensions and headers, albeit lowercased.
End of explanation
# turn on interactive plotting
%matplotlib notebook
# let's plot it
haflux.plot()
Explanation: We can easily plot the map using the internal plot function. Currently maps are plotted using some default Matplotlib color schemes and scaling.
End of explanation
haflux.value, haflux.mask
Explanation: Try Yourself Now try grabbing and plotting the map for stellar velocity in the cell below.
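One possible solution, sketched here on the assumption that the parameter name follows the lowercased MAPS extension convention:
# stvel = maps.getMap('stellar_vel')
# stvel.plot()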
You can access the individual values, ivar, and mask for your map via the .value, .ivar, and .mask attributes. These are 2d-array numpy arrays.
End of explanation
import numpy as np
# select the locations where the mask is non-zero
badvals = np.where(haflux.mask > 0)
# set those values to a numpy nan.
haflux.value[badvals] = np.nan
# check the min and max
print('min', np.nanmin(haflux.value), 'max', np.nanmax(haflux.value))
haflux.plot()
Explanation: Let's replot the Halpha flux map but exclude all regions that have a non-zero mask. We need the numpy Python package for this.
End of explanation
# Let's look at the NII-to-Halpha emission-line ratio map
niiha = maps.getMapRatio('emline_gflux', 'nii_6585', 'ha_6564')
print(niiha)
niiha.plot()
Explanation: From the maps object, we can also easily plot the ratio between two maps, e.g. emission-line ratios, using the getMapRatio method. Map ratios are Map objects the same as any other, so you can access their array values or plot them
End of explanation
# import the Cube tool
from marvin.tools.cube import Cube
# point to your file
filename ='/Users/Brian/Work/Manga/redux/v2_0_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
# get a cube
cube = Cube(filename=filename)
print(cube)
Explanation: Try Yourself Modify the above to display the map for the emission-line ratio OIII/Hbeta
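One possible solution (a sketch; the exact channel strings are assumptions, so check the MAPS header if they differ):
# oiiihb = maps.getMapRatio('emline_gflux', 'oiii_5008', 'hb_4862')
# oiiihb.plot()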
From a Marvin Cube
End of explanation
maps = cube.getMaps()
print(maps)
Explanation: Once we have a cube, we can get its maps using the getMaps method. getMaps is just a wrapper to the Marvin Maps Tool. Once we have the maps, we can do all the same things as before.
End of explanation |
11,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
🏭 Coal Plant ON/OFF
Step1: 🛎️ [DON’T PANIC] It’s safe to ignore the warnings.
When we pip install the requirements, there might be some warnings about conflicting dependency versions.
For the scope of this sample, that’s ok.
⚠️ Restart the runtime
Step2: 🗺️ Authenticate to Earth Engine
In order to use the Earth Engine API, you'll need to have an Earth Engine account.
To create an account, fill out the registration form here.
Step3: 🚏 Overview
This notebook leverages geospatial data from Google Earth Engine, and labeled data provided by the organization Climate TRACE. By combining these two data sources, you'll build and train a model that predicts whether or not a power plant is turned on and producing emissions.
🛰️ Data (inputs)
The data in this example consists of images from a satellite called Sentinel-2, a wide-swath, high-resolution, multi-spectral imaging mission for land monitoring studies.
When working with satellite data, each input image has the dimensions [width, height, bands]. Bands are measurements from specific satellite instruments for different ranges of the electromagnetic spectrum. For example, Sentinel-2 contains 🌈 13 spectral bands. If you're familiar with image classification problems, you can think of the bands as similar to an image's RGB (red, green, blue) channels. However, when working with satellite data we generally have more than just 3 channels.
🏷️ Labels (outputs)
For each patch of pixels (an image of a power plant) that we give to the model, it performs binary classification, which indicates whether the power plant is on or off.
In this example, the output is a single number between 0 (Off) and 1 (On), representing the probability of that power plant being ON.
Model (function)
TL;DR
The model receives a patch of pixels with the power plant tower at its center. We take 16 pixels of padding around that center pixel, creating a 33x33 patch. The model returns an ON/OFF classification.
In this example, we have a CSV file of labels. Each row in this file represents a power plant at a specific lat/lon and timestamp. At training time we'll prepare a dataset where each input image is a single pixel that we have a label for. We will then add padding around that image. These padded pixels will not get predictions, but will help our model to make better predictions for the center point that we have a label for.
For example, with a padding of 16, each 1 pixel input point would become a 33x33 image after the padding is added.
The model in this sample is trained for image patches where a power plant is located in the center, and the dimensions must be 33x33 pixels where each pixel has a constant number of bands.
1. 🛰️ Get the training data
The training data in this sample comes from two places
Step4: 🏷️ Import labels
First, we import the CSV file that contains the labels.
Step5: Each row in this dataframe represents a power plant at a particular timestamp.
The "is_powered_on" column indicates whether or not the coal plant was turned on (1) or off (0) at that timestamp.
Step6: 🎛️ Create train/validation splits
Before we can train an ML model, we need to split this data into training and validation datasets. We will do this by creating two new dataframes with a 70/30 training validation split.
Step7: Merge 🏷️ labels + 🛰️ Sentinel image data
In Earth Engine, an ImageCollection is a stack or sequence of images. An Image is composed of one or more bands and each band has its own name, data type, scale, mask and projection. The Sentinel-2 dataset is represented as an ImageCollection, where each image in the collection is of a specific geographic location at a particular time.
In the cell below, we write a function to extract the Sentinel image taken at the specific latitude/longitude and timestamp for each row of our dataframe.
We will store all of this information as an Earth Engine Feature Collection. In Earth Engine, a Feature is an object with a geometry property storing a Geometry object, and a properties property storing a dictionary of other properties. Groups of related Features can be combined into a FeatureCollection to enable additional operations on the entire set such as filtering, sorting, and rendering.
We first filter the Sentinel-2 ImageCollection at the start/end dates for a particular row in our dataframe.
Then, using the neighorboodToArray method we create a FeatureCollection that contains the satellite data for each band at the latitude and longitude of interest as well as a 16 pixel padding around that point.
In the image below you can think of the purple box representing the lat/lon where the power plant is located. And around this pixel, we add the padding.
Step8: To get a better sense of what's going on, let's look at the properties for the first Feature in the train_features list. You can see that it contains a property for the label is_powered_on, and 13 additional properies, one for each spectral band.
Step9: The data contained in each band property is an array of shape 33x33.
For example, here is the data for band B1 in the first element in our list expressed as a numpy array.
Step10: 💾 Export data
Lastly, we'll export the data to a Cloud Storage bucket. We'll export the data as TFRecords.
Later when we run the training job, we'll parse these TFRecords and feed them to the model.
Step11: This export will take around 10 minutes. You can monitor the progress with the following command
Step12: 2. 👟 Run a custom training job
Once the export jobs have finished, we're ready to use that data to train a model on Vertex AI Training.
The complete training code can be found in the task.py file.
To run our custom training job on Vertex AI Training, we'll use the pre-built containers provided by Vertex AI to run our training script.
We'll also make use of a GPU. Our model training will only take a couple of minutes, so using a GPU isn't really necessary. But for demonstration purposes (since adding a GPU is simple!) we will make sure we use a container image that is GPU compatible, and then add the accelerator_type and accelerator_count parameters to job.run. TensorFlow will make use of a single GPU out of the box without any extra code changes.
Step13: The job will take around 10 minutes to run.
Step14: 3. 💻 Deploy a web service to host the trained model
Next, we use
Cloud Run
to deploy a web service that exposes a
REST API to
get predictions from our trained model.
We'll deploy our service to Cloud Run directly from source code so we don't need to build the container image first. Behind the scenes, this command uses Google Cloud buildpacks and Cloud Build to automatically build a container image from our source code in the serving_app directory. To run the web service, we configure Cloud Run to launch
gunicorn
on this container image.
Since calls to this web service could launch potentially expensive jobs in our project, we configure it to only accept authenticated calls.
🐣 Deploy app
Step15: Now we need the web service URL to make calls to the REST API we just exposed. We can use gcloud run services describe to get the web service URL.
Since we only accept authorized calls in our web service, we also need to authenticate each call.
gcloud is already authenticated, so we can use gcloud auth print-identity-token to get quick access.
ℹ️ For more information on how to do authenticated calls in Cloud Run, see the
Authentication overview page.
Step16: Finally, we can test that everything is working.
We included a ping method in our web service just to make sure everything is working as expected.
It simply returns back the arguments we passed to the call, as well as a response saying that the call was successful.
🛎️ This is a convenient way to make sure the web service is reachable, the authentication is working as expected, and the request arguments are passed correctly.
We can use Python's
requests
library.
The web service was built to always accept JSON-encoded requests, and returns JSON-encoded responses.
For a request to be successful, it must
Step18: 4.🔮 Get Predictions
Now that we know our app is up and running, we can use it to make predictions.
Let's start by making a prediction for a particular coal plant. To do this we will need to extract the Sentinel data from Earth Engine and send it in the body of the POST request to the prediction service.
We'll start with a plant located at the coordinates -84.80529, 39.11613, and then extract the satellite data from October 2021.
Step19: When we call the get_prediction_data function we need to pass in the start and end dates.
Sentinel-2 takes pictures every 10 days. At training time, we knew the exact date of the Sentinel-2 image, as this was provided in the labels CSV file. However, for user-supplied images for prediction we don't know the specific date the image was taken. To address this, we'll extract data for the entire month of October and then use the mosaic function in Earth Engine, which will grab the earliest image in that range, stitch together images at the seams, and discard the rest.
Step20: The prediction service expects two things: the input data for the prediction and the Cloud Storage path where the model is stored.
Step21: 4. 🗺️ Visualize predictions
Let's visualize the results for a coal plant in Spain. First, we get predictions for the four towers at this power plant.
Step22: Next, we can plot these points on a map. Blue means our model predicts that the towers are "off", and red means our model predicts that the towers are "on" and producing carbon pollution. | Python Code:
# Get the sample source code.
!git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git ~/python-docs-samples
%cd ~/python-docs-samples/people-and-planet-ai/geospatial-classification
!pip install -r requirements.txt -c constraints.txt
Explanation: 🏭 Coal Plant ON/OFF: Predictions
Time estimate: 1 hour
Cost estimate: Around $1.00 USD (free if you use \$300 Cloud credits)
Watch the video in YouTube<br>
This is an interactive notebook that contains all of the code necessary to train an ML model from satellite images for geospatial classification of whether a coal plant is on/off.
This is a first step introductory example of how these satellite images can be used to detect carbon pollution from power plants.
💚 This is one of many machine learning how-to samples inspired from real climate solutions aired on the People and Planet AI 🎥 series.
🙈 Using this interactive notebook
Click the run icons ▶️ of each section within this notebook.
This notebook code lets you train and deploy an ML model from end-to-end. When you run a code cell, the code runs in the notebook's runtime, so you're not making any changes to your personal computer.
🛎️ To avoid any errors, wait for each section to finish in their order before clicking the next “run” icon.
This sample must be connected to a Google Cloud project, but nothing else is needed other than your Google Cloud project.
You can use an existing project and the cost will be around $1.00. Alternatively, you can create a new Cloud project with cloud credits for free.
🚴♀️ Steps summary
Here's a quick summary of what you’ll go through:
Get the training data (~15 minutes to complete, no cost for using Earth Engine):
Extract satellite images from Earth Engine, combine it with the data that was labeled and contains lat/long coordinates from Climate TRACE in a CSV, and export to
Cloud Storage.
Run a custom training job (~15 minutes to complete, costs ~ $1):
Using Tensorflow on Vertex AI Training using a pre-built training container.
Deploy a web service to host the trained model (~7 minutes to complete, costs a few cents to build the image, and deployment cost covered by free tier):
On
Cloud Run
and get predictions using the model.
Get Predictions (a few seconds per prediction, costs covered by free tier):
Use the web service to get predictions for new data.
Visualize predictions (~5 minutes to complete) :
Visualize the predictions on a map.
(Optional) Delete the project to avoid ongoing costs.
✨ Before you begin, you need to…
Decide on creating a new
free project
(recommended) or using an existing one.
Then copy the project ID and paste it in the google_cloud_project field in the "Entering project details” section below.
💡 If you don't plan to keep the resources that you create via this sample, we recommend creating a new project instead of selecting an existing project.
After you finish these steps, you can delete the project, removing all the resources associated in bulk.
Click here
to enable the following APIs in your Google Cloud project:
Earth Engine, Vertex AI, Container Registry, Cloud Build, and Cloud Run.
Make sure that billing is enabled for your Google Cloud project,
click here
to learn how to confirm that billing is enabled.
Click here
to create a Cloud Storage bucket.
Then copy the bucket’s name and paste it in the cloud_storage_bucket field in the “Entering project details” section below.
🛎️ Make sure it's a regional bucket in a location where
Vertex AI is available.
Have an Earth Engine account (it's FREE) or create a new one.
To create an account, fill out the registration form here.. Please note this can take from 0-24 hours...but it's worth it! Come back to this sample after you have this.
⛏️ Preparing the project environment
Click the run ▶️ icons in order for the cells to download and install the necessary code, libraries, and resources for this solution.
💡 You can optionally view the entire
code in GitHub.
↘️ Get the code
End of explanation
#@title My Google Cloud resources
project = '' #@param {type:"string"}
cloud_storage_bucket = '' #@param {type:"string"}
region = '' #@param {type:"string"}
# Validate the inputs.
if not project:
raise ValueError(f"Please provide a value for 'project'")
if not cloud_storage_bucket:
raise ValueError(f"Please provide a value for 'cloud_storage_bucket'")
if not region:
raise ValueError(f"Please provide a value for 'region'")
# Authenticate
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
!gcloud config set project {project}
%cd ~/python-docs-samples/people-and-planet-ai/geospatial-classification
Explanation: 🛎️ [DON’T PANIC] It’s safe to ignore the warnings.
When we pip install the requirements, there might be some warnings about conflicting dependency versions.
For the scope of this sample, that’s ok.
⚠️ Restart the runtime: Running the previous cell just updated some libraries and requires to restart the runtime to load those libraries correctly.
In the top-left menu, click "Runtime" > "Restart runtime".
✏️ Enter your Cloud project's details. Ensure you provide a regional bucket!
End of explanation
import ee
import google.auth
credentials, _ = google.auth.default()
ee.Initialize(credentials, project=project)
Explanation: 🗺️ Authenticate to Earth Engine
In order to use the Earth Engine API, you'll need to have an Earth Engine account.
To create an account, fill out the registration form here.
End of explanation
# Define constants
LABEL = 'is_powered_on'
IMAGE_COLLECTION = "COPERNICUS/S2"
BANDS = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12']
SCALE = 10
PATCH_SIZE = 16
Explanation: 🚏 Overview
This notebook leverages geospatial data from Google Earth Engine, and labeled data provided by the organization Climate TRACE. By combining these two data sources, you'll build and train a model that predicts whether or not a power plant is turned on and producing emissions.
🛰️ Data (inputs)
The data in this example consists of images from a satellite called Sentinel-2, a wide-swath, high-resolution, multi-spectral imaging mission for land monitoring studies.
When working with satellite data, each input image has the dimensions [width, height, bands]. Bands are measurements from specific satellite instruments for different ranges of the electromagnetic spectrum. For example, Sentinel-2 contains 🌈 13 spectral bands. If you're familiar with image classification problems, you can think of the bands as similar to an image's RGB (red, green, blue) channels. However, when working with satellite data we generally have more than just 3 channels.
🏷️ Labels (outputs)
For each patch of pixels (an image of a power plant) that we give to the model, it performs binary classification, which indicates whether the power plant is on or off.
In this example, the output is a single number between 0 (Off) and 1 (On), representing the probability of that power plant being ON.
Model (function)
TL;DR
The model receives a patch of pixels with the power plant tower at the center. We take 16 pixels of padding on each side, creating a 33x33 patch, and the model returns an ON/OFF classification.
In this example, we have a CSV file of labels. Each row in this file represents a power plant at a specific lat/lon and timestamp. At training time we'll prepare a dataset where each input image is a single pixel that we have a label for. We will then add padding around that image. These padded pixels will not get predictions, but will help our model to make better predictions for the center point that we have a label for.
For example, with a padding of 16, each 1 pixel input point would become a 33x33 image after the padding is added.
The model in this sample is trained for image patches where a power plant is located in the center, and the dimensions must be 33x33 pixels where each pixel has a constant number of bands.
1. 🛰️ Get the training data
The training data in this sample comes from two places:
The satellite images will be extracted from Earth Engine.
The labels are provided in a CSV file that indicates whether a coal plant is turned on or off at a particular timestamp.
For each row in the CSV file, we need to extract the corresponding Sentinel image taken at that specific latitude/longitude and timestamp. We'll export this image data, along with the corresponding label (on/off), to Cloud Storage.
End of explanation
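As a quick sanity check relating the constants above to the 33x33 patches described in the overview, the patch width is the labeled pixel plus PATCH_SIZE pixels of padding on each side:
patch_width = 2 * PATCH_SIZE + 1  # 16 pixels of padding on each side of the labeled pixel
print('patch width:', patch_width)  # 33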
import pandas as pd
import numpy as np
labels_dataframe = pd.read_csv('labeled_geospatial_data.csv')
Explanation: 🏷️ Import labels
First, we import the CSV file that contains the labels.
End of explanation
labels_dataframe.head()
Explanation: Each row in this dataframe represents a power plant at a particular timestamp.
The "is_powered_on" column indicates whether or not the coal plant was turned on (1) or off (0) at that timestamp.
End of explanation
TRAIN_VALIDATION_SPLIT = 0.7
train_dataframe = labels_dataframe.sample(frac=TRAIN_VALIDATION_SPLIT,random_state=200) #random state is a seed value
validation_dataframe = labels_dataframe.drop(train_dataframe.index).sample(frac=1.0)
Explanation: 🎛️ Create train/validation splits
Before we can train an ML model, we need to split this data into training and validation datasets. We will do this by creating two new dataframes with a 70/30 training validation split.
End of explanation
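Optionally, it can be useful to confirm how the on/off labels are balanced across the two splits; a small diagnostic sketch:
# purely diagnostic: fraction of on (1) vs off (0) labels in each split
print(train_dataframe[LABEL].value_counts(normalize=True))
print(validation_dataframe[LABEL].value_counts(normalize=True))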
from datetime import datetime, timedelta
def labeled_feature(row):
start = datetime.fromisoformat(row.timestamp)
end = start + timedelta(days=1)
image = (
ee.ImageCollection(IMAGE_COLLECTION)
.filterDate(start.strftime("%Y-%m-%d"), end.strftime("%Y-%m-%d"))
.select(BANDS)
.mosaic()
)
point = ee.Feature(
ee.Geometry.Point([row.lon, row.lat]),
{LABEL: row.is_powered_on},
)
return (
image.neighborhoodToArray(ee.Kernel.square(PATCH_SIZE))
.sampleRegions(ee.FeatureCollection([point]), scale=SCALE)
.first()
)
train_features = [labeled_feature(row) for row in train_dataframe.itertuples()]
validation_features = [labeled_feature(row) for row in validation_dataframe.itertuples()]
Explanation: Merge 🏷️ labels + 🛰️ Sentinel image data
In Earth Engine, an ImageCollection is a stack or sequence of images. An Image is composed of one or more bands and each band has its own name, data type, scale, mask and projection. The Sentinel-2 dataset is represented as an ImageCollection, where each image in the collection is of a specific geographic location at a particular time.
In the cell below, we write a function to extract the Sentinel image taken at the specific latitude/longitude and timestamp for each row of our dataframe.
We will store all of this information as an Earth Engine Feature Collection. In Earth Engine, a Feature is an object with a geometry property storing a Geometry object, and a properties property storing a dictionary of other properties. Groups of related Features can be combined into a FeatureCollection to enable additional operations on the entire set such as filtering, sorting, and rendering.
We first filter the Sentinel-2 ImageCollection at the start/end dates for a particular row in our dataframe.
Then, using the neighborhoodToArray method we create a FeatureCollection that contains the satellite data for each band at the latitude and longitude of interest as well as a 16 pixel padding around that point.
In the image below, you can think of the purple box as representing the lat/lon where the power plant is located; around this pixel, we add the padding.
End of explanation
ee.FeatureCollection(train_features[0]).propertyNames().getInfo()
Explanation: To get a better sense of what's going on, let's look at the properties for the first Feature in the train_features list. You can see that it contains a property for the label is_powered_on, and 13 additional properties, one for each spectral band.
End of explanation
example_feature = np.array(train_features[0].get('B1').getInfo())
print(example_feature)
print('shape: ' + str(example_feature.shape))
Explanation: The data contained in each band property is an array of shape 33x33.
For example, here is the data for band B1 in the first element in our list expressed as a numpy array.
End of explanation
# Export data
training_task = ee.batch.Export.table.toCloudStorage(
collection=ee.FeatureCollection(train_features),
description="Training image export",
bucket=cloud_storage_bucket,
fileNamePrefix="geospatial_training",
selectors=BANDS + [LABEL],
fileFormat="TFRecord",
)
training_task.start()
validation_task = ee.batch.Export.table.toCloudStorage(
collection=ee.FeatureCollection(validation_features),
description="Validation image export",
bucket=cloud_storage_bucket,
fileNamePrefix="geospatial_validation",
selectors= BANDS + [LABEL],
fileFormat='TFRecord')
validation_task.start()
Explanation: 💾 Export data
Lastly, we'll export the data to a Cloud Storage bucket. We'll export the data as TFRecords.
Later when we run the training job, we'll parse these TFRecords and feed them to the model.
End of explanation
from pprint import pprint
pprint(ee.batch.Task.list())
Explanation: This export will take around 10 minutes. You can monitor the progress with the following command:
End of explanation
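If you prefer to block until both exports have finished, a simple polling sketch (ee.batch.Task.active() reports whether a task is still queued or running) looks like this:
import time
# wait for both export tasks to finish before moving on
while training_task.active() or validation_task.active():
    print('Export tasks still running...')
    time.sleep(60)
print(training_task.status()['state'], validation_task.status()['state'])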
from google.cloud import aiplatform
aiplatform.init(project=project, staging_bucket=cloud_storage_bucket)
job = aiplatform.CustomTrainingJob(
display_name="geospatial_model_training",
script_path="task.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-7:latest")
Explanation: 2. 👟 Run a custom training job
Once the export jobs have finished, we're ready to use that data to train a model on Vertex AI Training.
The complete training code can be found in the task.py file.
To run our custom training job on Vertex AI Training, we'll use the pre-built containers provided by Vertex AI to run our training script.
We'll also make use of a GPU. Our model training will only take a couple of minutes, so using a GPU isn't really necessary. But for demonstration purposes (since adding a GPU is simple!) we will make sure we use a container image that is GPU compatible, and then add the accelerator_type and accelerator_count parameters to job.run. TensorFlow will make use of a single GPU out of the box without any extra code changes.
End of explanation
model = job.run(accelerator_type='NVIDIA_TESLA_K80', accelerator_count=1, args=[f'--bucket={cloud_storage_bucket}'])
Explanation: The job will take around 10 minutes to run.
End of explanation
# Deploy the web service to Cloud Run.
# https://cloud.google.com/sdk/gcloud/reference/run/deploy
!gcloud run deploy "geospatial-service" \
--source=serving_app \
--command="gunicorn" \
--args="--threads=8,--timeout=0,main:app" \
--region="{region}" \
--memory="1G" \
--no-allow-unauthenticated
Explanation: 3. 💻 Deploy a web service to host the trained model
Next, we use
Cloud Run
to deploy a web service that exposes a
REST API to
get predictions from our trained model.
We'll deploy our service to Cloud Run directly from source code so we don't need to build the container image first. Behind the scenes, this command uses Google Cloud buildpacks and Cloud Build to automatically build a container image from our source code in the serving_app directory. To run the web service, we configure Cloud Run to launch
gunicorn
on this container image.
Since calls to this web service could launch potentially expensive jobs in our project, we configure it to only accept authenticated calls.
🐣 Deploy app
End of explanation
import subprocess
# Get the web service URL.
# https://cloud.google.com/sdk/gcloud/reference/run/services/describe
service_url = subprocess.run(
[ 'gcloud', 'run', 'services', 'describe', 'geospatial-service',
f'--region={region}',
f'--format=get(status.url)',
],
capture_output=True,
).stdout.decode('utf-8').strip()
print(f"service_url: {service_url}")
# Get an identity token for authorized calls to our web service.
# https://cloud.google.com/sdk/gcloud/reference/auth/print-identity-token
identity_token = subprocess.run(
['gcloud', 'auth', 'print-identity-token'],
capture_output=True,
).stdout.decode('utf-8').strip()
print(f"identity_token: {identity_token}")
Explanation: Now we need the web service URL to make calls to the REST API we just exposed. We can use gcloud run services describe to get the web service URL.
Since we only accept authorized calls in our web service, we also need to authenticate each call.
gcloud is already authenticated, so we can use gcloud auth print-identity-token to get quick access.
ℹ️ For more information on how to do authenticated calls in Cloud Run, see the
Authentication overview page.
End of explanation
import requests
requests.post(
url=f'{service_url}/ping',
headers={'Authorization': f'Bearer {identity_token}'},
json={'x': 42, 'message': 'Hello world!'},
).json()
Explanation: Finally, we can test that everything is working.
We included a ping method in our web service just to make sure everything is working as expected.
It simply returns back the arguments we passed to the call, as well as a response saying that the call was successful.
🛎️ This is a convenient way to make sure the web service is reachable, the authentication is working as expected, and the request arguments are passed correctly.
We can use Python's
requests
library.
The web service was built to always accept JSON-encoded requests, and returns JSON-encoded responses.
For a request to be successful, it must:
Be an HTTP POST request
Contain the following headers:
Authorization: Bearer IDENTITY_TOKEN
Content-Type: application/json
The data must be valid JSON, if no arguments are needed we can pass {} as an empty object.
For ease of use, requests.post has a
json parameter
that automatically attaches the header Content-Type: application/json and encodes our data into a JSON string.
End of explanation
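For reference, here is a sketch of the same ping call with the headers spelled out explicitly, which is what the json= parameter handles for us behind the scenes:
import json
import requests
response = requests.post(
    url=f'{service_url}/ping',
    headers={
        'Authorization': f'Bearer {identity_token}',
        'Content-Type': 'application/json',
    },
    data=json.dumps({'x': 42, 'message': 'Hello world!'}),
)
print(response.json())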
# Extract image data
import json
def get_prediction_data(lon, lat, start, end):
    """Extracts Sentinel image as json at specific lat/lon and timestamp."""
location = ee.Feature(ee.Geometry.Point([lon, lat]))
image = (
ee.ImageCollection(IMAGE_COLLECTION)
.filterDate(start, end)
.select(BANDS)
.mosaic()
)
feature = image.neighborhoodToArray(ee.Kernel.square(PATCH_SIZE)).sampleRegions(
collection=ee.FeatureCollection([location]), scale=SCALE
)
return feature.getInfo()["features"][0]["properties"]
Explanation: 4.🔮 Get Predictions
Now that we know our app is up and running, we can use it to make predictions.
Let's start by making a prediction for a particular coal plant. To do this we will need to extract the Sentinel data from Earth Engine and send it in the body of the POST request to the prediction service.
We'll start with a plant located at the coordinates -84.80529, 39.11613, and then extract the satellite data from October 2021.
End of explanation
prediction_data = get_prediction_data(-84.80529, 39.11613, '2021-10-01', '2021-10-31')
Explanation: When we call the get_prediction_data function we need to pass in the start and end dates.
Sentinel-2 takes pictures every 10 days. At training time, we knew the exact date of the Sentinel-2 image, as this was provided in the labels CSV file. However, for user-supplied images for prediction we don't know the specific date the image was taken. To address this, we'll extract data for the entire month of October and then use the mosaic function in Earth Engine, which will grab the earliest image in that range, stitch together images at the seams, and discard the rest.
End of explanation
requests.post(
url=f'{service_url}/predict',
headers={'Authorization': f'Bearer {identity_token}'},
json={'data': prediction_data, 'bucket': cloud_storage_bucket},
).json()['predictions']
Explanation: The prediction service expects two things: the input data for the prediction and the Cloud Storage path where the model is stored.
End of explanation
def get_prediction(lon, lat, start, end):
prediction_data = get_prediction_data(lon, lat, start, end)
result = requests.post(
url=f'{service_url}/predict',
headers={'Authorization': f'Bearer {identity_token}'},
json={'data': prediction_data, 'bucket': cloud_storage_bucket},).json()
return result['predictions']['predictions'][0][0][0][0]
lons = [-7.86444, -7.86376, -7.85755, -7.85587]
lats = [43.43717, 43.43827, 43.44075, 43.44114]
plant_predictions = [get_prediction(lon , lat, '2021-10-01', '2021-10-31') for lon, lat in zip(lons, lats)]
Explanation: 4. 🗺️ Visualize predictions
Let's visualize the results for a coal plant in Spain. First, we get predictions for the four towers at this power plant.
End of explanation
import folium
import folium.plugins as folium_plugins
import branca.colormap as cm
colormap = cm.LinearColormap(colors=['lightblue', 'red'], index=[0,1], vmin=0, vmax=1)
map = folium.Map(
location=[43.44, -7.86],
zoom_start=16,
tiles='https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
attr = 'ESRI'
)
for loc, p in zip(zip(lats, lons), plant_predictions):
folium.Circle(
location=loc,
radius=20,
fill=True,
color=colormap(p),
).add_to(map)
map.add_child(colormap)
display(map)
Explanation: Next, we can plot these points on a map. Blue means our model predicts that the towers are "off", and red means our model predicts that the towers are "on" and producing carbon pollution.
End of explanation |
11,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with Images
cv2.imread, cv2.imshow, cv2.imwrite
Reading an image - cv2.imread()
cv2.imread() has two arguments, one address of the image
and the other as following arguments
Step1: Showing an image - cv2.imshow()
To display an image in window, use cv2.imshow()
Window automatically fits to image size.Two Arguments
Step2: About cv2.waitKey() function
Step3: Writing an Image
Use the function cv2.imwrite() to save an image.
2 Arguments
Step4: Using matplotlib | Python Code:
import numpy as np
import cv2
ls
file_adr = 'Me1.png'
img = cv2.imread(file_adr, cv2.IMREAD_GRAYSCALE)  # cv2.IMREAD_GRAYSCALE is equivalent to the flag 0
cv2.imwrite('Me1_gray.jpg', img)  # save a grayscale copy of the image
img2 = cv2.imread('Me1_gray.jpg', cv2.IMREAD_COLOR)
Explanation: Getting Started with Images
cv2.imread, cv2.imshow, cv2.imwrite
Reading an image - cv2.imread()
cv2.imread() takes two arguments: the path of the image and a flag that specifies how the image should be read:
cv2.IMREAD_COLOR : loads a color image. Transparency of image is
neglected, it is default flag. [Alternate : 1]
cv2.IMREAD_GRAYSCALE : Loads image in grayscale mode. [Alternate : 0]
cv2.IMREAD_UNCHANGED : Loads image such as including alpha channel.
[Alternate : -1]
End of explanation
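The integer alternates shown above can be passed directly, and cv2.imread() returns None when the file cannot be read, so a small defensive sketch is:
img_color = cv2.imread('Me1.png', 1)   # same as cv2.IMREAD_COLOR
img_gray = cv2.imread('Me1.png', 0)    # same as cv2.IMREAD_GRAYSCALE
if img_color is None:
    print('Could not read the image - check the file path')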
cv2.imshow('Image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Explanation: Showing an image - cv2.imshow()
To display an image in window, use cv2.imshow()
Window automatically fits to the image size. Two arguments:
1. Window name (string)
2. Our image
End of explanation
cv2.namedWindow('image', cv2.WINDOW_AUTOSIZE)
cv2.imshow('Image', img)
cv2.waitKey(5)
cv2.destroyWindow('Image')
cv2.waitKey?
help(cv2.namedWindow)
Explanation: About cv2.waitKey() function:
A keyboard binding function whose argument is a time in milliseconds.
If zero is passed, it waits indefinitely for a key stroke.
About cv2.destroyAllWindows():
Destroy all windows we created
To destroy any specific window,
use cv2.destroyWindow(), pass argument with window's name.
End of explanation
cv2.imwrite('Me1.jpg', img)
import numpy as np
import cv2
img2 = cv2.imread('Me1.png', -1)
cv2.imshow('IMAGE', img2)
k = cv2.waitKey(0) & 0xFF
if k == 27:
cv2.destroyAllWindows()
elif k == ord('s'):
cv2.imwrite('Oh.png', img2)
cv2.destroyAllWindows()
Explanation: Writing an Image
Use the function cv2.imwrite() to save an image.
2 Arguments:
1. File name (path where the image will be saved)
2. Image you want to save
End of explanation
import numpy as np
import cv2
from matplotlib import pyplot as plt
import seaborn; seaborn.set()
img = cv2.imread('Me1.png',0)
plt.imshow(img, cmap = 'gray', interpolation='bicubic')
# plt.xticks([]), plt.yticks([]) # To hide tick values on x & y -axis
plt.show()
# 255, 255, 0 - R G B
Explanation: Using matplotlib
End of explanation |
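One caveat worth noting when mixing OpenCV and matplotlib: OpenCV loads color images in BGR order while matplotlib expects RGB, so color images should be converted first. A minimal sketch:
img_color = cv2.imread('Me1.png', cv2.IMREAD_COLOR)
img_rgb = cv2.cvtColor(img_color, cv2.COLOR_BGR2RGB)  # reorder channels for matplotlib
plt.imshow(img_rgb)
plt.show()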
11,736 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given a pandas DataFrame, how does one convert several binary columns (where 1 denotes the value exists, 0 denotes it doesn't) into a single categorical column? | Problem:
import pandas as pd
df = pd.DataFrame({'A': [1, 0, 0, 0, 1, 0],
'B': [0, 1, 0, 0, 0, 1],
'C': [0, 0, 1, 0, 0, 0],
'D': [0, 0, 0, 1, 0, 0]})
df["category"] = df.idxmax(axis=1) |
11,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'pcmdi-test-1-0', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: PCMDI
Source ID: PCMDI-TEST-1-0
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
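For illustration only (the name and email below are hypothetical placeholders), a filled-in call would look like:
# hypothetical example - replace with the real document author
# DOC.set_author("Jane Doe", "jane.doe@example.org")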
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
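As an illustration only, a constant freezing point near the commonly quoted value of about -1.8 degC for surface seawater would be entered as:
# illustrative value only - use the constant actually used by your model
# DOC.set_value(-1.8)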
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details here. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, with fluxes computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
11,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General information
Deadline
Step1: IRIS
Step2: MNIST
Step3: Задание 5 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from line_profiler import LineProfiler
from sklearn.metrics.pairwise import pairwise_distances
import seaborn as sns
from sklearn import datasets
from sklearn.base import ClassifierMixin
from sklearn.datasets import fetch_mldata
from sklearn.neighbors.base import NeighborsBase, KNeighborsMixin, SupervisedIntegerMixin
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
%load_ext pycodestyle_magic
def profile_print(func_to_call, *args):
profiler = LineProfiler()
profiler.add_function(func_to_call)
profiler.runcall(func_to_call, *args)
profiler.print_stats()
%%pycodestyle
class MyKNeighborsClassifier(NeighborsBase, KNeighborsMixin,
SupervisedIntegerMixin, ClassifierMixin):
def __init__(self, n_neighbors=3):
self.n_neighbors = n_neighbors
def fit(self, X, y):
self.X = np.float64(X)
self.classes, self.y = np.unique(y, return_inverse=True)
def euclidean_metric(self, v):
return np.sqrt(((self.X - v) ** 2).sum(axis=1))
'''
def cnt(self, v):
z = np.zeros(self.classes.size)
for i in v:
z[i] += 1
return z
def predict_proba(self, X): # more understandable
X = np.float64(X)
# euclidean by default, can use multithreading
dist = pairwise_distances(X, self.X)
ind = np.argsort(dist, axis=1)[:, :self.n_neighbors]
return np.apply_along_axis(self.cnt, 1, self.y[ind]) / self.n_neighbors
'''
# '''
def predict_proba(self, X): # more quickly
X = np.float64(X)
# euclidean by default, can use multithreading
dist = pairwise_distances(X, self.X)
ind = np.argsort(dist, axis=1)[:, :self.n_neighbors]
classes = self.y[ind]
crange = np.arange(self.classes.shape[0])
clss = classes.reshape((classes.shape[0], 1, classes.shape[1]))
crng = crange.reshape((1, crange.shape[0], 1))
counts = np.sum(clss == crng, axis=2)
return counts / self.n_neighbors
# '''
def predict(self, X):
proba = self.predict_proba(X)
return self.classes[np.argsort(proba, axis=1)[:, -1]]
def score(self, X, y):
pred = self.predict(X)
return 1 - np.count_nonzero(y - pred) / y.shape[0]
Explanation: General information
Deadline: 13 March 2017, 06:00 <br>
Late submission penalty: -2 points after 06:00 on 13 March, -4 points after 06:00 on 20 March, -6 points after 06:00 on 27 March
When submitting the homework, include your surname in the file name
Submit the homework as a link to your github repository in slack @alkhamush
Create a task in a private chat in slack:
/todo Surname FirstName github-link @alkhamush
Example:
/todo Ksenia Stroykova https://github.com/stroykova/spheremailru/stroykova_hw1.ipynb @alkhamush
Use this IPython Notebook when preparing your homework.
Assignment 1 (2 points)
Implement KNN in the MyKNeighborsClassifier class (mandatory condition: accuracy no lower than the sklearn implementation)
Work out on your own which distance metric KNeighborsClassifier uses by default and implement your algorithm with exactly that metric. Also work out on your own how the score of KNeighborsClassifier is computed and implement an analogue in your class.
Assignment 2 (2 points)
Achieve fit, predict and predict_proba speed comparable to sklearn (4 points) for iris and mnist
Use numpy for this
Assignment 3 (2 points)
For iris, find the n_neighbors parameter that gives the best score. Plot score as a function of n_neighbors
Assignment 4 (2 points)
Meet the pep8 requirements
Assignment 5 (2 points)
Describe what the following libraries/classes/functions are needed for (the list is below)
End of explanation
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.1, stratify=iris.target)
clf = KNeighborsClassifier(n_neighbors=17)
my_clf = MyKNeighborsClassifier(n_neighbors=17)
%time clf.fit(X_train, y_train)
%time my_clf.fit(X_train, y_train)
%time clf.predict(X_test)
%time my_clf.predict(X_test)
#profile_print(my_clf.predict, X_test)
%time clf.predict_proba(X_test)
#%time my_clf.predict_proba(X_test)
profile_print(my_clf.predict_proba, X_test)
clf.score(X_test, y_test)
my_clf.score(X_test, y_test)
# Assignment 3
# best n_neighbors is around 16 - 17
num_n = 30
num_av = 2000
scm = np.zeros(num_n)
sc = np.zeros(num_av)
for n in range(1, num_n + 1):
print (n)
for i in range(num_av):
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.1, stratify=iris.target)
my_clf = MyKNeighborsClassifier(n_neighbors=n)
my_clf.fit(X_train, y_train)
sc[i] = my_clf.score(X_test, y_test)
scm[n - 1] = sc.mean()
plt.plot(range(1, num_n + 1), scm, 'ro-')
plt.show()
Explanation: IRIS
End of explanation
mnist = fetch_mldata('MNIST original')
X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.target, test_size=0.01, stratify=mnist.target)
y_train.shape
clf = KNeighborsClassifier(n_neighbors=5)
my_clf = MyKNeighborsClassifier(n_neighbors=5)
%time clf.fit(X_train, y_train)
%time my_clf.fit(X_train, y_train)
%time clf.predict(X_test)
%time my_clf.predict(X_test)
%time clf.predict_proba(X_test)
#%time my_clf.predict_proba(X_test)
%time profile_print(my_clf.predict_proba, X_test)
clf.score(X_test, y_test)
my_clf.score(X_test, y_test)
# n_neighbors = 5
num_n = 30
num_av = 20
scm = np.zeros(num_n)
sc = np.zeros(num_av)
for n in range(1, num_n + 1):
print (n)
for i in range(num_av):
print (n, ' ', i)
X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.target, test_size=0.001, stratify=mnist.target)
my_clf = MyKNeighborsClassifier(n_neighbors=n)
my_clf.fit(X_train, y_train)
sc[i] = my_clf.score(X_test, y_test)
scm[n - 1] = sc.mean()
plt.plot(range(1, num_n + 1), scm, 'ro-')
plt.show()
print (1)
Explanation: MNIST
End of explanation
# seaborn - attractive, easy-to-write plots and visualization
# matplotlib - more verbose to write but more flexible than seaborn
# train_test_split - splits the data into training and test parts
# Pipeline (from sklearn.pipeline import Pipeline) - a pipeline (chained) classifier
# StandardScaler (from sklearn.preprocessing import StandardScaler) - feature normalization
# ClassifierMixin - common Mixin for classifiers; score is implemented in it
# NeighborsBase - base class for KNN
# KNeighborsMixin - Mixin containing the nearest-neighbor search method
# SupervisedIntegerMixin - Mixin with a fit function that establishes the correspondence
# between the data and the target variables
Explanation: Assignment 5
End of explanation |
11,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Discrete Fourier Transform (DFT)
One-dimensional case
Discrete Fourier Transform in one dimension
Step1: To illustrate the one-dimensional case, let's take a two-dimensional image (cameraman) and pick just one row of the image to be our one-dimensional function.
Step2: 2. Two-dimensional example | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from numpy.fft import *
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: The Discrete Fourier Transform (DFT)
One-dimensional case
Discrete Fourier Transform in one dimension:
Input: $f(x)$ - $x$ pixel coordinate of the image
Output: $F(u)$ - $u$ normalized frequency, number of cycles in the sample
$$ F(u) = \sum_{x=0}^{N-1}f(x)\exp(-j2\pi(\frac{ux}{N})) $$
$$ 0 \leq x < N, 0 \leq u < N $$
Two-dimensional case
Discrete Fourier Transform in two dimensions:
Input: $f(x,y)$ - $(x,y)$ pixel coordinates of the image
Output: $F(u,v)$ - $(u,v)$ normalized frequencies, number of cycles in the sample
$$ F(u,v) = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}f(x,y)\exp(-j2\pi(\frac{ux}{N}+ \frac{vy}{M})) $$
$$ 0 \leq x < N , 0 \leq u < N $$
$$ 0 \leq y < M , 0 \leq v < M $$
Meaning of $u$ in the equation
Given $N$ samples, $u$ in the equation $ \exp(-j{2\pi}\frac{ux}{N}) $ indicates the number of cycles over the interval from $0$ to $N-1$. The period of this signal, in pixels, is $\frac{N}{u}$. The maximum period is $N$ and the minimum period is 2.
1. One-dimensional example
End of explanation
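# Illustrative sketch (not part of the original notebook): a direct evaluation of
# the 1D DFT sum defined above, checked against numpy's fft on a small signal.
def dft1d(f):
    N = len(f)
    x = np.arange(N)
    # F(u) = sum_x f(x) * exp(-j*2*pi*u*x/N), evaluated for every u
    return np.array([np.sum(f * np.exp(-2j * np.pi * u * x / N)) for u in range(N)])
f_test = np.random.rand(8)
print(np.allclose(dft1d(f_test), fft(f_test)))  # expected: True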
f1 = mpimg.imread('../data/cameraman.tif')[10,:]
plt.plot(f1)
F1 = fft(f1)
g1 = ifft(F1)
print ('comparing g1 and f1:', abs(g1-f1).max())
Explanation: To illustrate the one-dimensional case, let's take a two-dimensional image (cameraman) and pick just one row of the image to be our one-dimensional function.
End of explanation
f2 = mpimg.imread('../data/cameraman.tif')
F2 = fft2(f2)
g2 = ifft2(F2)
print ('comparing g2 and f2:', abs(g2-f2).max())
plt.figure(1, figsize=(8,8))
plt.subplot(1,2,1)
plt.imshow(f2, cmap='gray')
plt.subplot(1,2,2)
plt.imshow(ia.dftview(F2), cmap='gray')
Explanation: 2. Two-dimensional example
End of explanation |
11,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Pipeline pour un réduction d'une forêt aléatoire - énoncé
Le modèle Lasso permet de sélectionner des variables, une forêt aléatoire produit une prédiction comme étant la moyenne d'arbres de régression. Cet aspect a été abordé dans le notebook Reduction d'une forêt aléatoire. On cherche à automatiser le processus.
Step1: Datasets
Comme il faut toujours des données, on prend ce jeu Boston.
Step2: Forêt aléatoire suivi de Lasso
La méthode consiste à apprendre une forêt aléatoire puis à effectuer d'une régression sur chacun des estimateurs.
Step3: Nous avons réussi à reproduire le processus dans son ensemble. Pas toujours simple de se souvenir de toutes les étapes, c'est pourquoi il est plus simple de compiler l'ensemble dans un pipeline.
Exercice 1
Step4: Il reste à écrire le pipeline correspondant à la séquence d'apprentissage décrit quelque part dans ce notebook. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: 2A.ml - Pipeline for reducing a random forest - assignment
The Lasso model can select variables, while a random forest produces its prediction as the average of regression trees. This was covered in the notebook Reduction d'une forêt aléatoire. Here we want to automate the process.
End of explanation
from sklearn.datasets import load_boston
data = load_boston()
X, y = data.data, data.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
Explanation: Datasets
Since we always need data, we use the Boston dataset.
End of explanation
import numpy
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
# Train a random forest
clr = RandomForestRegressor()
clr.fit(X_train, y_train)
# Collect the prediction of each tree
X_train_2 = numpy.zeros((X_train.shape[0], len(clr.estimators_)))
estimators = numpy.array(clr.estimators_).ravel()
for i, est in enumerate(estimators):
pred = est.predict(X_train)
X_train_2[:, i] = pred
# Fit a Lasso regression
lrs = Lasso(max_iter=10000)
lrs.fit(X_train_2, y_train)
lrs.coef_
Explanation: Random forest followed by Lasso
The method consists of training a random forest and then fitting a regression on the outputs of each of its estimators.
End of explanation
class RandomForestRegressorAsTransformer:
def __init__(self, **kwargs):
self.rf = RandomForestRegressor(**kwargs)
def fit(self, X, y):
# ...
return self
def transform(self, X):
# ...
# return the prediction of each tree
pass
# Tout ça pour écrire ce qui suit...
trrf = RandomForestRegressorAsTransformer()
trrf.fit(X_train, y_train)
trrf.transform(X_train)
Explanation: We managed to reproduce the whole process. It is not always easy to remember every step, which is why it is simpler to bundle everything into a pipeline.
Exercise 1: A first pipeline
Perhaps you will find a working pipeline right away. The difficult part is the one that produces the vector of outputs of each regression tree. The first avenue I explored was a FunctionTransformer.
Exercise 2: A second pipeline
The first idea does not really work... So we decide to disguise the random forest as a transformer.
End of explanation
from sklearn.pipeline import Pipeline
pipe = Pipeline(steps=[
('name', 'passthrough'),
# ...
])
pipe.fit(X_train, y_train)
Explanation: It remains to write the pipeline corresponding to the training sequence described earlier in this notebook.
End of explanation |
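# One possible completion of the two exercises above -- a sketch, not the course's
# reference solution: wrap the forest as a transformer whose output is the matrix
# of per-tree predictions, then chain it with a Lasso inside a Pipeline.
class RandomForestAsTransformer:
    def __init__(self, **kwargs):
        self.rf = RandomForestRegressor(**kwargs)
    def fit(self, X, y):
        self.rf.fit(X, y)
        return self
    def transform(self, X):
        # one column per tree: that tree's prediction
        return numpy.column_stack([est.predict(X) for est in self.rf.estimators_])
pipe2 = Pipeline(steps=[
    ('rf', RandomForestAsTransformer()),
    ('lasso', Lasso(max_iter=10000)),
])
pipe2.fit(X_train, y_train)
pipe2.steps[-1][1].coef_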
11,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Pynq Audio
This notebook shows the basic recording and playback features of the Pynq-Z1.
It uses the audio jack to play back recordings from the built-in microphone, as well as a pre-recorded audio sample. Moreover, visualization with matplotlib and playback with IPython.Audio are shown.
Create new audio object
Step1: Record and play
Record a 3-second sample and save it into a file.
Step2: Load and play
Load a sample and play the loaded sample.
Step3: Play in notebook
Users can also play the audio directly in notebook. To do this, the file format has to be converted from Pulse Density Modulation (PDM) to Pulse Code Modulation (PCM).
For more information, please refer to
Step4: Step 2
Step5: Step 3
Step6: Plotting PCM data
Users can display the audio data in notebook
Step7: Frequency spectrum
Step8: Frequency spectrum over time
Use the classic plot style for better display. | Python Code:
from pynq import Overlay
from pynq.drivers import Audio
Overlay('base.bit').download()
pAudio = Audio()
Explanation: Welcome to Pynq Audio
This notebook shows the basic recording and playback features of the Pynq-Z1.
It uses the audio jack to play back recordings from the built-in microphone, as well as a pre-recorded audio sample. Moreover, visualization with matplotlib and playback with IPython.Audio are shown.
Create new audio object
End of explanation
pAudio.record(3)
pAudio.save("Recording_1.pdm")
Explanation: Record and play
Record a 3-second sample and save it into a file.
End of explanation
pAudio.load("/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm")
pAudio.play()
Explanation: Load and play
Load a sample and play the loaded sample.
End of explanation
import time
import numpy as np
start = time.time()
af_uint8 = np.unpackbits(pAudio.buffer.astype(np.int16)
.byteswap(True).view(np.uint8))
end = time.time()
print("Time to convert {:,d} PDM samples: {:0.2f} seconds"
.format(np.size(pAudio.buffer)*16, end-start))
print("Size of audio data: {:,d} Bytes"
.format(af_uint8.nbytes))
Explanation: Play in notebook
Users can also play the audio directly in notebook. To do this, the file format has to be converted from Pulse Density Modulation (PDM) to Pulse Code Modulation (PCM).
For more information, please refer to: https://en.wikipedia.org/wiki/Pulse-density_modulation.
Step 1: Preprocessing
In this step, we first convert the 32-bit integer buffer to 16-bit. Then we divide 16-bit words (16 1-bit samples each) into 8-bit words with 1-bit sample each.
End of explanation
import time
from scipy import signal
start = time.time()
af_dec = signal.decimate(af_uint8,8,zero_phase=True)
af_dec = signal.decimate(af_dec,6,zero_phase=True)
af_dec = signal.decimate(af_dec,2,zero_phase=True)
af_dec = (af_dec[10:-10]-af_dec[10:-10].mean())
end = time.time()
print("Time to convert {:,d} Bytes: {:0.2f} seconds"
.format(af_uint8.nbytes, end-start))
print("Size of audio data: {:,d} Bytes"
.format(af_dec.nbytes))
del af_uint8
Explanation: Step 2: Converting PDM to PCM
We now convert PDM to PCM by decimation. The sample rate is reduced from 3MHz to 32kHz.
We will remove the first and last 10 samples in case there are outliers introduced by decimation. We will also remove the DC offset from the waveform.
End of explanation
from IPython.display import Audio as IPAudio
IPAudio(af_dec, rate=32000)
Explanation: Step 3: Audio Playback in Web Browser
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.figure(num=None, figsize=(15, 5))
time_axis = np.arange(0,((len(af_dec))/32000),1/32000)
plt.title('Audio Signal in Time Domain')
plt.xlabel('Time in s')
plt.ylabel('Amplitude')
plt.plot(time_axis, af_dec)
plt.show()
Explanation: Plotting PCM data
Users can display the audio data in notebook:
Plot the audio signal's amplitude over time.
Plot the spectrogram of the audio signal.
Amplitude over time
End of explanation
from scipy.fftpack import fft
yf = fft(af_dec)
yf_2 = yf[1:len(yf)//2]
xf = np.linspace(0.0, 32000//2, len(yf_2))
plt.figure(num=None, figsize=(15, 5))
plt.plot(xf, abs(yf_2))
plt.title('Magnitudes of Audio Signal Frequency Components')
plt.xlabel('Frequency in Hz')
plt.ylabel('Magnitude')
plt.show()
Explanation: Frequency spectrum
End of explanation
import matplotlib
np.seterr(divide='ignore',invalid='ignore')
matplotlib.style.use("classic")
plt.figure(num=None, figsize=(15, 4))
plt.title('Audio Signal Spectogram')
plt.xlabel('Time in s')
plt.ylabel('Frequency in Hz')
_ = plt.specgram(af_dec, Fs=32000)
Explanation: Frequency spectrum over time
Use the classic plot style for better display.
End of explanation |
11,742 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am using python and scikit-learn to find cosine similarity between item descriptions. | Problem:
import numpy as np
import pandas as pd
import sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
df = load_data()
tfidf = TfidfVectorizer()
from sklearn.metrics.pairwise import cosine_similarity
response = tfidf.fit_transform(df['description']).toarray()
tf_idf = response
cosine_similarity_matrix = np.zeros((len(df), len(df)))
for i in range(len(df)):
for j in range(len(df)):
cosine_similarity_matrix[i, j] = cosine_similarity([tf_idf[i, :]], [tf_idf[j, :]]) |
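# Note (illustrative alternative, not in the original snippet): the same matrix can
# be obtained in one vectorized call, avoiding the double Python loop:
# cosine_similarity_matrix = cosine_similarity(tf_idf)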
11,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to data munging with Jupyter and pandas
PyGotham 2015
Step1: The case for open source data tools
Reproducibility and Transparency
Cost -- compare capabilities between software you already use and open source here
Allows a diversity of platforms on a given team
The case for notebooks
They're amenable to sketching, and they're amenable to reproducibility.
You can retrace your own steps and also make a narrative for someone else to follow.
Built-in documentation improves workflow; magic methods anticipate data analysis needs.
IPython/Jupyter Notebook
Step2: 2. built-in documentation
shift-tab brings up brief function documentation
Step3: 3. markup!
Markdown and HTML
<img src='http
Step4: 5. multimedia
Step5: 6. Sharing! Notebooks are now viewable directly on github, and also exportable as PDF or HTML.
7. Shell commands!
Step6: 8. Keyboard shortcuts! Your implementation may vary.
Use what's here or roll your own.
pandas
Input
Source
Step7: Transformation
Step8: A WILD ZIP CODE APPEARS!
<img src="http
Step9: Keep in mind you have options, and use magic methods to test implementation inline!
Step10: Merging! Better than in traffic!
Group by knows aggregating strings is nonsensical, and so drops those.
But let's add state information to the mix again! | Python Code:
from __future__ import division
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import rpy2
from IPython.display import display, Image, YouTubeVideo
%matplotlib inline
Explanation: Introduction to data munging with Jupyter and pandas
PyGotham 2015
End of explanation
pd.re
from math import
Explanation: The case for open source data tools
Reproducibility and Transparency
Cost -- compare capabilities between software you already use and open source here
Allows a diversity of platforms on a given team
The case for notebooks
They're amenable to sketching, and they're amenable to reproducibility.
You can retrace your own steps and also make a narrative for someone else to follow.
Built-in documentation improves workflow; magic methods anticipate data analysis needs.
IPython/Jupyter Notebook: some cool tips for beginners
1. holy moly tab completion
End of explanation
?pd.read_csv
Explanation: 2. built-in documentation
shift-tab brings up brief function documentation
End of explanation
%quickref
%lsmagic
%load_ext rmagic
%R x <- c(0:10, 50)
%R xm <- mean(x)
%R c(xm, mean(x, trim = 0.10))
Explanation: 3. markup!
Markdown and HTML
<img src='http://i.imgur.com/WypQf94.gif' align='left'></span>
<br>
<br>
<br>
LaTeX
<span style="font-size: 24px"> $\bar{x} = \frac{\sum_{i=1}^{n}w_i\cdot x_i}{\sum_{i=1}^{n}w_i}$</span>
Syntax highlighting for other languages
R
x <- c(0:10, 50)
xm <- mean(x)
c(xm, mean(x, trim = 0.10))
4. magic methods
End of explanation
YouTubeVideo("L4Hbv4ugUWk")
Explanation: 5. multimedia
End of explanation
!ls
Explanation: 6. Sharing! Notebooks are now viewable directly on github, and also exportable as PDF or HTML.
7. Shell commands!
End of explanation
?pd.read_csv()
# read in a CSV
# specify that zipcode should be treated as a string rather than an int!
AGI = pd.read_csv('12zpallagi.csv',dtype={'zipcode': str})
AGI.info()
Explanation: 8. Keyboard shortcuts! Your implementation may vary.
Use what's here or roll your own.
pandas
Input
Source: IRS.gov
End of explanation
# you can select columns by label or position!
AGI_column_subset = AGI[['STATE','AGI_STUB','zipcode','N1','A00100']]
# get information about type for a given field, and how many values you can expect for each
AGI_column_subset.info()
AGI_column_subset.describe()
# note this is inclusive!
AGI_row_subset = AGI_column_subset.ix[6:11]
AGI_row_subset
AGI_column_subset.rename(columns={'N1':'population','A00100':'amount'},inplace=True)
AGI_column_subset.head()
# group by zipcode and sum other values, resetting index
AGI_grouped = AGI_column_subset.groupby('zipcode').sum().reset_index()
AGI_grouped.head()
Explanation: Transformation
End of explanation
AGI_grouped['population'].mean()
#this can also be done using the na_values param upon being read in
null_zips = (AGI_grouped['zipcode'] == '00000')
AGI_grouped.loc[null_zips, 'zipcode'] = np.nan
AGI_grouped.head()
AGI_notnull = AGI_grouped.dropna()
AGI_notnull['population'].mean()
AGI_grouped.dropna(inplace=True)
# make a new column with the real amount, not in thousands
AGI_grouped['actual_amount'] = AGI_grouped['amount'] * 1000
Explanation: A WILD ZIP CODE APPEARS!
<img src="http://ecdn.funzypics.com/grumpycatmemes/pics/16/OH-No--The-Cute-Kitty-Cats-Are-Attacking-Like-Aliens----The-Hunt-Is-On-Wow-Quest-Guide-.jpg" align="left">
Delete or render null? You decide!
End of explanation
%timeit applied = AGI_grouped['amount'].apply(lambda x: x * 1000)
#being vectorized operations, this is happening at the C level and thereby much faster
%timeit vectorized = AGI_grouped['amount'] * 1000
AGI_grouped
# make a mean, using standard math operations!
AGI_grouped['weighted_mean_AGI'] = AGI_grouped['actual_amount']/AGI_grouped['population']
#use anonymous functions to change every value in a column!
#because this is an apply, much slower
AGI_grouped['weighted_mean_AGI']= AGI_grouped['weighted_mean_AGI'].apply(lambda x: round(x, 0))
AGI_grouped.info()
AGI_grouped.describe()
# drop columns you won't need
AGI_grouped.drop(['AGI_STUB','amount','actual_amount'],axis=1,inplace=True)
AGI_grouped.head()
Explanation: Keep in mind you have options, and use magic methods to test implementation inline!
End of explanation
# also look into pandas.Series.unique
AGI_subset_geo = AGI[['zipcode','STATE']].drop_duplicates()
AGI_subset_geo.head()
#merge rather than join if you want to use a common column other than the index
AGI_final = pd.merge(AGI_grouped, AGI_subset_geo, how='left', on='zipcode')
AGI_final.head()
# this gives you the greatest weighted_mean_AGI first
AGI_final.sort('weighted_mean_AGI',ascending=False).head()
# chain methods!
AGI_final.groupby('STATE').mean().sort('weighted_mean_AGI',ascending=False)
AGI_final.sort('weighted_mean_AGI').head().plot(kind='bar')
Explanation: Merging! Better than in traffic!
Group by knows aggregating strings is nonsensical, and so drops those.
But let's add state information to the mix again!
End of explanation |
11,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the <a href="https
Step1: How do we define direction of an earth magnetic field?
Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system
Step2: Magnetic applet
Based on the prism that you made above, the Magnetic applet below computes the magnetic field at the receiver locations and provides both a 2D map (left) and a profile line (right).
For the prism, you can alter | Python Code:
import numpy as np
from geoscilabs.mag import Mag, Simulator
from SimPEG.potential_fields import magnetics as mag
from SimPEG import utils, data
from discretize import TensorMesh
Explanation: This is the <a href="https://jupyter.org/">Jupyter Notebook</a>, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it.
To use the notebook:
- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)
- You can alter variables and re-run cells
- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook)
This notebook uses code adapted from
SimPEG
- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
End of explanation
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and choose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
param
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
model
Explanation: How do we define direction of an earth magnetic field?
Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system:
- X (Easting),
- Y (Northing), and
- Z (Up).
Here we consider an earth magnetic field ($\vec{B_0}$), of which intensity is one. To define this unit vector, we use inclinatino and declination:
- Declination: An angle from geographic North (Ng) (positive clockwise)
- Inclination: Vertical angle from the N-E plane (positive down)
<img src="https://github.com/geoscixyz/geosci-labs/raw/main/images/mag/earthfield.png?raw=true" style="width: 60%; height: 60%"> </img>
What's data: total field anomaly
We consider a typical form of magnetic data. To illustrate this we consider an suceptible object embedded in the earth.
Based upon the earth magnetic field ($\vec{B}_0$), this object will generate anomalous magnetic field ($\vec{B}_A$). We define an unit vector $\hat{B}_0$ for the earth field as
$$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$
We measure both earth and anomalous magnetic field such that
$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$
Total field anomaly, $\triangle \vec{B}$ can be defined as
$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_E| $$
If $|\vec{B}|\ll|\vec{B}_E|$, then that is total field anomaly $\triangle \vec{B}$ is the projection of the anomalous field onto the direction of the earth field:
$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|cos\theta$$
<img src="https://github.com/geoscixyz/geosci-labs/raw/main/images/mag/totalfieldanomaly.png?raw=true" style="width: 50%; height: 50%">
Define a 3D prism
Our model is a rectangular prism. Parameters to define this prism are given below:
dx: length in Easting (x) direction (meter)
dy: length in Northing (y) direction (meter)
dz: length in Depth (z) direction (meter) below the receiver
depth: top boundary of the prism (meter)
pinc: inclination of the prism (reference is a unit northing vector; degree)
pdec: declination of the prism (reference is a unit northing vector; degree)
You can also change the height of the survey grid above the ground
- rx_h: height of the grid (meter)
Green dots show a plane where we measure data.
End of explanation
Simulator.PFSimulator(model, param)
Explanation: Magnetic applet
Based on the prism that you made above, below Magnetic applet computes magnetic field at receiver locations, and provide both 2D map (left) and profile line (right).
For the prism, you can alter:
- sus: susceptibility of the prism
Parameters for the earth field are:
- Einc: inclination of the earth field (degree)
- Edec: declination of the earth field (degree)
- Bigrf: intensity of the earth field (nT)
For data, you can view:
- tf: total field anomaly,
- bx :x-component,
- by :y-component,
- bz :z-component
You can simulate and view remanent magnetization effect with parameters:
- irt: "induced", "remanent", or "total"
- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)
- rinc: inclination of the remanent magnetization (degree)
- rdec: declination of the remanent magnetization (degree)
End of explanation |
11,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Astro 283 Homework 5</h1>
Bijan Pourhamzeh
Step1: <h3>Random Sampling</h3>
Here we sample from a distribution given by the equation
$$
p(x\mid \alpha,\beta) = \left{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x\beta}}{\alpha}\right) & \quad x\geq 0\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
This is done by the rejection method. Choosing $(\alpha,\beta)=(6.9,42.0)$, we only draw from $x<250$ since the distribution function is sufficiently small for larger values.
Step2: <h3>Comparing quadratic and cubic polynomial fits</h3>
To compare the relative fit of a quadratic and cubic polynomial to the data, we compute the ratio
\begin{eqnarray}
\frac{P\left(x^2\mid{D}\right)}{P\left(x^3\mid{D}\right)} &=& \frac{\int p\left(x^2,{\lambda}\mid{D}\right)d^3\lambda}{\int p\left(x^3,{\eta}\mid{D}\right) d^4\eta}\
&=& \frac{\int P\left({D}\mid x^2,{\lambda}\right)p\left(x^2,{\lambda}\right) d^3\lambda}{\int P\left({D}\mid x^3,{\eta}\right)p\left(x^3,{\eta}\right) d^4\eta}\
\end{eqnarray}
where we use marginalization and Baye's rule. Note that the denominator coming from Baye's rule cancels out. We assume that the priors $P\left(x^2,{\lambda}\right)$ and $P\left(x^3,{\eta}\right)$ are uniform, so from the product rule,
\begin{eqnarray}
\frac{p\left(x^2,{\lambda}\right)}{p\left(x^3,{\eta}\right)} &=& \frac{\prod_{i=1}^3\frac{1}{\lambda_i^\text{max}-\lambda_i^\text{min}}}{\prod_{i=1}^4\frac{1}{\eta_i^\text{max}-\eta_i^\text{min}}}
\end{eqnarray}
Since the errors are independent and Gaussian-distributed, the likelihood functions are given by
\begin{eqnarray}
P\left({D}\mid x^n,A\right) &=& \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\exp\left[-\frac{(y_i-f_n(x_i,A))^2}{2\sigma_i^2}\right]\
&=& \left(\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\right)\exp\left(-\frac{\chi_0^2}{2}\right)\exp\left(-\frac{\chi^2-\chi_0^2}{2}\right)
\end{eqnarray}
where $A$ is a generic set of parameters and $N$ is the number of data points. In the second line, we write it in a form that is more useful computationally, where
\begin{eqnarray}
\chi^2 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,A))^2}{\sigma_i^2}\
\chi^2_0 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,A_0))^2}{\sigma_i^2}
\end{eqnarray}
and $A_0$ is the set of best fit parameters. For the fitting functions, we have
\begin{eqnarray}
f_2(x_i,{\lambda}) &=& \lambda_2x^2_i+\lambda_1x_i+\lambda_0\
f_3(x_i,{\eta}) &=& \eta_3x^3_i+\eta_2x^2_i+\eta_1x_i+\eta_0
\end{eqnarray}
Plugging this all in, setting $\lambda_i^\text{max}=\eta_i^\text{max}=500$, setting $\lambda_i^\text{min}=\eta_i^\text{min}=-500$, and integrating $\lambda_i,\eta_i$ from the max and min values, we get an estimate of the ratio written above.
The integration is done using a Monte Carlo rejection method. The points are sampled from the Gaussians
\begin{eqnarray}
g_2\left({\lambda}\right) &=& \frac{1}{\sqrt{(2\pi)^2\det\Sigma_2}}\exp\left[-\frac{1}{2}(\lambda-\lambda_0)^T\Sigma_2^{-1}(\lambda-\lambda_0)\right]\
g_3\left({\eta}\right) &=& \frac{1}{\sqrt{(2\pi)^3\det\Sigma_3}}\exp\left[-\frac{1}{2}(\eta-\eta_0)^T\Sigma_3^{-1}(\eta-\eta_0)\right]
\end{eqnarray}
where $\lambda_0,\Sigma_2,\eta_0,\Sigma_3$ are the best fit parameters and errors. We use the rejection method to obtain samples from a uniform distribution.
From the result, we see that the quadratic fit is better since
$$
\frac{P\left(x^2\mid{D}\right)}{P\left(x^3\mid{D}\right)} \sim 10^5
$$ | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import iv
import scipy.stats
from csv import reader
from __future__ import print_function
Explanation: <h1>Astro 283 Homework 5</h1>
Bijan Pourhamzeh
End of explanation
#Choose parameters and plot to see what it looks like
alpha = 6.9
beta = 42.0
x = np.arange(0,200,0.1)
pdf = lambda x: (1/alpha)*np.exp(-(x+beta)/alpha)*iv(0,2*np.sqrt(x*beta)/alpha)
plt.plot(x,pdf(x),'b-')
plt.xlabel('x')
#Do rejection method with uniform sampling and plot to see if it looks right
iters = 100000
hi = 250
M = 4.5
x = []
u = []
for n in range(0,iters):
u_rand = np.random.rand()
x_rand = np.random.uniform(0,hi)
if u_rand < pdf(x_rand)/(M/hi):
u += [u_rand]
x += [x_rand]
plt.plot(x,u,'ro')
plt.xlabel('x')
#Write to file. Skip first two because I'm crazy.
N = 50
f = open('astro283_hw5_bijan.txt','w')
#f.write('(alpha,beta) = (6.9, 42.0)\n\n')
for i in range(0,N+2):
if i < 2:
continue
else:
index = np.random.randint(0, len(x))  # pick one of the accepted samples (M is the envelope constant, not a count)
f.write(str(x[index])+'\n')
f.close()
#Read in data for problem 2
data = reader(open('hw5prob2-data.txt','rt'), delimiter = ' ')
x_dat = []
y_dat = []
sig = []
for row in data:
if row[0] == 'x':
continue
else:
x_dat.append(float(row[0]))
y_dat.append(float(row[1]))
sig.append(float(row[2]))
Explanation: <h3>Random Sampling</h3>
Here we sample from a distribution given by the equation
$$
p(x\mid \alpha,\beta) = \left{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x\beta}}{\alpha}\right) & \quad x\geq 0\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
This is done by the rejection method. Choosing $(\alpha,\beta)=(6.9,42.0)$, we only draw from $x<250$ since the distribution function is sufficiently small for larger values.
End of explanation
#Define functions for integration
def poly(x,n,L):
if n+1 != len(L):
return "Error: need ", n+1, " parameters!"
out = 0
for k in range(0,n+1):
out += L[n-k]*np.power(x,n-k)
return out
def chi_sq(x, y, sigma, fun, *params):
out = 0
for i in range(0,len(x)):
out += (y[i] - fun(x[i], *params))**2/(sigma[i]**2)
return out
def gaussian_f(x, x0, cov):
if type(cov) is not np.matrix:
cov = np.matrix(cov)
vec = np.matrix(x - x0)
n_p = len(vec)
return float(1/(np.sqrt((2*np.pi)**n_p*np.linalg.det(cov)))*np.exp(-0.5*vec*cov.I*vec.T))
#obtain fit parameters
from scipy.optimize import curve_fit
f_3 = lambda x,L3,L2,L1,L0: poly(x,3,[L3,L2,L1,L0])
f_2 = lambda x,L2,L1,L0: poly(x,2,[L2,L1,L0])
popt_3, pcov_3 = curve_fit(f_3, x_dat, y_dat, p0=[1,1,1,1], sigma=sig)
popt_2, pcov_2 = curve_fit(f_2, x_dat, y_dat, p0=[1,1,1], sigma=sig)
print(popt_3)
print(popt_2)
#Do integration by Gaussian sampling around best fit. Sampling is done by rejection method.
int_N = 10000
b = 500
a = -500
int_sum_3 = 0.
for k in range(0,int_N):
u_rand = np.random.rand()
x_gau = [np.random.uniform(popt_3[0]-2,popt_3[0]+2),
np.random.uniform(popt_3[1]-.5,popt_3[1]+.5),
np.random.uniform(popt_3[2]-.05,popt_3[2]+.05),
np.random.uniform(popt_3[3]-.005,popt_3[3]+.005)]
while u_rand > gaussian_f(x_gau,popt_3,pcov_3)/gaussian_f(popt_3,popt_3,pcov_3):
u_rand = np.random.rand()
x_gau = [np.random.uniform(popt_3[0]-2,popt_3[0]+2),
np.random.uniform(popt_3[1]-.5,popt_3[1]+.5),
np.random.uniform(popt_3[2]-.05,popt_3[2]+.05),
np.random.uniform(popt_3[3]-.005,popt_3[3]+.005)]
int_sum_3 += np.exp(-0.5*(chi_sq(x_dat,y_dat,sig,f_3,*x_gau)
-chi_sq(x_dat,y_dat,sig,f_3,*popt_3)))/gaussian_f(x_gau,popt_3,pcov_3)
int_sum_3 = (1/float(int_N))*int_sum_3
#print(int_sum_3)
int_sum_2 = 0.
for k in range(0,int_N):
u_rand = np.random.rand()
x_gau = [np.random.uniform(popt_2[0]-2,popt_2[0]+2),
np.random.uniform(popt_2[1]-.5,popt_2[1]+.5),
np.random.uniform(popt_2[2]-.05,popt_2[2]+.05)]
while u_rand > gaussian_f(x_gau,popt_2,pcov_2)/gaussian_f(popt_2,popt_2,pcov_2):
u_rand = np.random.rand()
x_gau = [np.random.uniform(popt_2[0]-2,popt_2[0]+2),
np.random.uniform(popt_2[1]-.5,popt_2[1]+.5),
np.random.uniform(popt_2[2]-.05,popt_2[2]+.05)]
int_sum_2 += np.exp(-0.5*(chi_sq(x_dat,y_dat,sig,f_2,*x_gau)
-chi_sq(x_dat,y_dat,sig,f_2,*popt_2)))/gaussian_f(x_gau,popt_2,pcov_2)
int_sum_2 = (1/float(int_N))*int_sum_2
#print(int_sum_2)
#Compute ratio
ratio = (int_sum_2/int_sum_3)*np.exp(-0.5*(chi_sq(x_dat,y_dat,sig,f_2,*popt_2)
-chi_sq(x_dat,y_dat,sig,f_3,*popt_3)))*(b-a)
print(ratio)
#Plot fits for fun
t=np.arange(0,10,.1)
a3, a2, a1, a0 = popt_3
b2, b1, a0 = popt_2
plt.errorbar(x_dat, y_dat, yerr=sig, fmt='o', label='Data')
fit_3, = plt.plot(t, f_3(t, *popt_3), 'r-', label='Cubic fit')
fit_2, = plt.plot(t, f_2(t, *popt_2), 'g-', label='Quadratic fit')
plt.ylabel('y')
plt.xlabel('x')
plt.title('Data with quadratic and cubic fits')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
Explanation: <h3>Comparing quadratic and cubic polynomial fits</h3>
To compare the relative fit of a quadratic and cubic polynomial to the data, we compute the ratio
\begin{eqnarray}
\frac{P\left(x^2\mid{D}\right)}{P\left(x^3\mid{D}\right)} &=& \frac{\int p\left(x^2,{\lambda}\mid{D}\right)d^3\lambda}{\int p\left(x^3,{\eta}\mid{D}\right) d^4\eta}\
&=& \frac{\int P\left({D}\mid x^2,{\lambda}\right)p\left(x^2,{\lambda}\right) d^3\lambda}{\int P\left({D}\mid x^3,{\eta}\right)p\left(x^3,{\eta}\right) d^4\eta}\
\end{eqnarray}
where we use marginalization and Bayes' rule. Note that the denominator coming from Bayes' rule cancels out. We assume that the priors $P\left(x^2,{\lambda}\right)$ and $P\left(x^3,{\eta}\right)$ are uniform, so from the product rule,
\begin{eqnarray}
\frac{p\left(x^2,{\lambda}\right)}{p\left(x^3,{\eta}\right)} &=& \frac{\prod_{i=1}^3\frac{1}{\lambda_i^\text{max}-\lambda_i^\text{min}}}{\prod_{i=1}^4\frac{1}{\eta_i^\text{max}-\eta_i^\text{min}}}
\end{eqnarray}
Since the errors are independent and Gaussian-distributed, the likelihood functions are given by
\begin{eqnarray}
P\left({D}\mid x^n,A\right) &=& \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\exp\left[-\frac{(y_i-f_n(x_i,A))^2}{2\sigma_i^2}\right]\\
&=& \left(\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\right)\exp\left(-\frac{\chi_0^2}{2}\right)\exp\left(-\frac{\chi^2-\chi_0^2}{2}\right)
\end{eqnarray}
where $A$ is a generic set of parameters and $N$ is the number of data points. In the second line, we write it in a form that is more useful computationally, where
\begin{eqnarray}
\chi^2 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,A))^2}{\sigma_i^2}\\
\chi^2_0 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,A_0))^2}{\sigma_i^2}
\end{eqnarray}
and $A_0$ is the set of best fit parameters. For the fitting functions, we have
\begin{eqnarray}
f_2(x_i,{\lambda}) &=& \lambda_2x^2_i+\lambda_1x_i+\lambda_0\\
f_3(x_i,{\eta}) &=& \eta_3x^3_i+\eta_2x^2_i+\eta_1x_i+\eta_0
\end{eqnarray}
Plugging this all in, setting $\lambda_i^\text{max}=\eta_i^\text{max}=500$, setting $\lambda_i^\text{min}=\eta_i^\text{min}=-500$, and integrating $\lambda_i,\eta_i$ from the max and min values, we get an estimate of the ratio written above.
The integration is done using a Monte Carlo rejection method. The points are sampled from the Gaussians
\begin{eqnarray}
g_2\left({\lambda}\right) &=& \frac{1}{\sqrt{(2\pi)^3\det\Sigma_2}}\exp\left[-\frac{1}{2}(\lambda-\lambda_0)^T\Sigma_2^{-1}(\lambda-\lambda_0)\right]\\
g_3\left({\eta}\right) &=& \frac{1}{\sqrt{(2\pi)^4\det\Sigma_3}}\exp\left[-\frac{1}{2}(\eta-\eta_0)^T\Sigma_3^{-1}(\eta-\eta_0)\right]
\end{eqnarray}
where $\lambda_0,\Sigma_2,\eta_0,\Sigma_3$ are the best-fit parameters and their covariances. The samples are drawn from these Gaussians with the rejection method, using uniform draws over a box around the best-fit values as the proposal.
From the result, we see that the quadratic fit is better since
$$
\frac{P\left(x^2\mid{D}\right)}{P\left(x^3\mid{D}\right)} \sim 10^5
$$
End of explanation |
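As a quick cross-check of the Monte Carlo estimate above, the same ratio can be approximated analytically with a Laplace (saddle-point) approximation, since each evidence integral is roughly the peak likelihood times the Gaussian volume $\sqrt{(2\pi)^k\det\Sigma}$, and the cubic model carries one extra uniform prior factor $1/(b-a)$. This is only a rough sketch and assumes numpy as np together with popt_2, popt_3, pcov_2, pcov_3, x_dat, y_dat, sig, a, b, f_2, f_3 and chi_sq from the cells above.
# Laplace approximation: evidence ~ exp(-chi0^2/2) * sqrt((2*pi)^k * det(Sigma))
vol_2 = np.sqrt((2*np.pi)**len(popt_2)*np.linalg.det(pcov_2))
vol_3 = np.sqrt((2*np.pi)**len(popt_3)*np.linalg.det(pcov_3))
# (b - a) is the prior width of the extra cubic parameter
ratio_laplace = (vol_2/vol_3)*(b - a)*np.exp(-0.5*(chi_sq(x_dat,y_dat,sig,f_2,*popt_2)
                                                   -chi_sq(x_dat,y_dat,sig,f_3,*popt_3)))
print(ratio_laplace)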
11,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Definition of the problem
We need to develop a model that can classify breast cells as benign (non-harmful) or malignant (cancerous).
The list of attributes is
Step1: Clean data
missing data
NaNs
Step2: There is no missing data in the dataset.
Now check if the actual data makes sense.
Step3: Warning.
The column "Bare Nuclei" contains contains strings and '?' values.
Let's fix this.
Question
Step4: Note that 402 rows have the mode value of '1'.
This represents about 60% of the data.
Hence, it makes more sense to replace '?' with the mode
Step5: Model
Now that we have cleaned the data, we pass to the model.
Check class balance
Step6: Take home message
Step7: Take home messages
Step8: As said before, I use a random forest classifier since this works best for imbalanced classes
Step9: WOW, the model performed great with default parameters.
Question
Step11: The model barely improves its accuracy.
Take home message
Step12: The learning curve is oscillating by 0.02
Hence the model is not yet converged with respect to the size of the training set, for that tight precision.
Take home message | Python Code:
# 1 Read dataset
cols = [
'clump thickness',
'uniformity of cell size',
'uniformity of cell shape',
'marginal adhesion',
'single epithelial cell size',
'bare nuclei',
'bland chromatin',
'normal nucleoli',
'mitoses',
'class']
df = pd.read_csv('breast-cancer-wisconsin.data',index_col=0,header=None)
df.index.name = 'id number'
df.columns=cols
df.head()
# Change class labels to 0 and 1 for simplicity
df['class']=df['class'].apply(lambda x: 0 if x == 2 else 1 )
Explanation: Definition of the problem
We need to develop a model that can classify breast cells as benign (non-harmful) or malignant (cancerous).
The list of attributes is:
Sample code number: id number
Clump Thickness: 1 - 10
Uniformity of Cell Size: 1 - 10
Uniformity of Cell Shape: 1 - 10
Marginal Adhesion: 1 - 10
Single Epithelial Cell Size: 1 - 10
Bare Nuclei: 1 - 10
Bland Chromatin: 1 - 10
Normal Nucleoli: 1 - 10
Mitoses: 1 - 10
Class: (0 for benign, 1 for malignant)
End of explanation
# Is there missing data or NaNs?
df_size = len(df)
df_null = df.isnull().values.any()
#print("Data frame size {}, missing data in {}".format(df_size,len(df_null)))
df_null
#No null values
Explanation: Clean data
missing data
NaNs
End of explanation
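For a per-column view rather than a single boolean, the checks below can be added; a small sketch that only assumes the df loaded above.
# count missing values column by column and confirm the dtypes
print(df.isnull().sum())
df.info()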
# See if there are strange values in the dataset:
def visualize_unique_values(df):
# Loop over each column
print('Column','Unique_values')
for icol in range(len(cols)):
# Select a column
        df_col = df.iloc[:, icol]  # .ix is deprecated; use positional indexing
# Check unique values
unique_values = df_col.unique()
print(cols[icol],unique_values)
visualize_unique_values(df)
Explanation: There is no missing data in the dataset.
Now check if the actual data makes sense.
End of explanation
bare_nuclei = df['bare nuclei']
# 1 get data frame with all non missing data:
df2 = bare_nuclei.loc[bare_nuclei != '?']
print(len(df2),len(df))
# Get the mode value
col_mode = int(df2.mode().values[0])  # cast the string mode to int instead of using eval
print("Mode :", col_mode)
#Verify:
bare_nuclei.value_counts()
Explanation: Warning.
The column "Bare Nuclei" contains contains strings and '?' values.
Let's fix this.
Question: replace '?' with mode or median?
End of explanation
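Since the question above asks whether to use the mode or the median, both candidates can be computed side by side on the non-missing entries; a short sketch assuming bare_nuclei from the cell above.
# compare the two candidate imputation values on the non-missing entries
clean_vals = bare_nuclei[bare_nuclei != '?'].astype(int)
print('mode  :', clean_vals.mode().iloc[0])
print('median:', clean_vals.median())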
# Convert data to mode:
df2 = bare_nuclei.apply(lambda x: col_mode if x == '?' else int(x) )
#Check it worked:
print(df2.unique())
# Replace dataset column with clean data
df['bare nuclei'] = df2
# Check this actually worked
visualize_unique_values(df)
Explanation: Note that 402 rows have the mode value of '1'.
This represents about 60% of the data.
Hence, it makes more sense to replace '?' with the mode: '1'
End of explanation
y = df['class']
X = df.copy()
del X['class']
class1 = y[y == 0]
class2 = y[y == 1]
print("Class balance\n Class 0: {}\n Class 1: {}\n Ratio: {}".format(len(class1),len(class1),len(class1)/len(class2)))
Explanation: Model
Now that we have cleaned the data, we pass to the model.
Check class balance
End of explanation
%matplotlib inline
import seaborn as sns
sns.pairplot(df, hue="class")
#df.head(1)
Explanation: Take home message: The classes are slightly imbalanced by about a factor of 2.
To deal with class imbalance, there are several options:
Source https://elitedatascience.com/imbalanced-classes
Up-sample minority class (a minimal sketch follows after this explanation)
Down-sample majority class
Penalize algorithms
Use random forests
Based on my own experience with unbalanced data, I go for 4. Use random forests.
Visualize features
Let's visualize the feature distributions to get more insights.
End of explanation
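As referenced in option 1 above, up-sampling the minority class would look roughly like the sketch below; it uses sklearn.utils.resample, assumes the cleaned df from earlier, and is shown only for comparison since the analysis sticks with option 4.
from sklearn.utils import resample
df_majority = df[df['class'] == 0]
df_minority = df[df['class'] == 1]
# up-sample the minority class to match the majority class size
df_minority_up = resample(df_minority, replace=True, n_samples=len(df_majority), random_state=0)
df_balanced = pd.concat([df_majority, df_minority_up])
print(df_balanced['class'].value_counts())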
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=2, stratify= y )
Explanation: Take home messages:
No clear separation is found between the classes, so we need to go beyond a linear model. Random forests are OK too.
Uniformity of cell size and uniformity of cell shape are clearly correlated, to take into account later.
Partition data:
Partition as 60% and 40%.
Stratify, important for unbalanced classes.
End of explanation
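Since the split is stratified, the class proportions should be nearly identical in the two partitions; a quick check assuming the split from the cell above.
# verify that stratification preserved the class proportions
print(y_train.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))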
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
model = RandomForestClassifier()
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
print(classification_report(y_test,y_pred))
Explanation: As said before, I use a random forest classifier since this works best for imbalanced classes
End of explanation
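Penalizing the classifier (option 3 in the earlier list) is also a one-line change with the same estimator; a hedged sketch shown only for comparison, assuming the train/test split from the cells above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
# class_weight='balanced' re-weights samples inversely to class frequency
weighted = RandomForestClassifier(class_weight='balanced', random_state=0)
weighted.fit(X_train, y_train)
print(classification_report(y_test, weighted.predict(X_test)))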
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
# Max number of features by default is sqrt(n_features), which is good to keep to prevent from overfitting.
#If “auto”, then max_features=sqrt(n_features).
rfc = RandomForestClassifier()
params = {'criterion': ['gini','entropy'],'n_estimators': range(10, 50, 10)}
searcher = RandomizedSearchCV(rfc, params, n_jobs=-1, n_iter=4,scoring='f1')
searcher.fit(X_train, y_train) #assuming X and y are your data
import numpy as np
# Utility function to report best scores
def report(results, n_top=1):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
report(searcher.cv_results_)
model = RandomForestClassifier(n_estimators=30, max_depth = None)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
print(classification_report(y_test,y_pred))
Explanation: WOW, the model performed great with default parameters.
Question: can we improve f1-score from 0.96 to 1.0?
This is probably important since we are dealing with cancer, a serious disease.
End of explanation
from sklearn.metrics import f1_score
n_steps=10
step = int(len(X_train)/n_steps)
results=[]
for ix in range(n_steps):
size_train = step + ix * step
model.fit(X_train[:size_train],y_train[:size_train])
y_pred = model.predict(X_test)
score = f1_score(y_test,y_pred)
results.append([size_train,score])
results = np.array(results)
import matplotlib.pyplot as plt
plt.plot(results[:,0],results[:,1])
plt.xlabel('Training set size')
plt.ylabel('F1 score')
plt.title('Learning curve')
plt.grid(True)
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
class_names=[0,1]
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
Explanation: The model barely improves its accuracy.
Take home message:
We now have estimated the optimal parameters for our RF model, and accuracy is ~0.97
Question:
Is the algorithm converged with respect to the size of the training set?
End of explanation
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
from scipy import interp
# #############################################################################
# Data IO and generation
# Import some data to play with
X = X_train
y = y_train
X, y = X[y != 2], y[y != 2]
n_samples, n_features = X.shape
# Add noisy features
#random_state = np.random.RandomState(0)
#X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# #############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = StratifiedKFold(n_splits=6)
classifier = RandomForestClassifier(n_estimators=30, random_state = 0)
#svm.SVC(kernel='linear', probability=True,
# random_state=random_state)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
i = 0
for train, test in cv.split(X, y):
probas_ = classifier.fit(X.iloc[train], y.iloc[train]).predict_proba(X.iloc[test])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(y.iloc[test], probas_[:, 1])
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0
roc_auc = auc(fpr, tpr)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
i += 1
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Luck', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
# Confusion matrix
from sklearn.metrics import confusion_matrix
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix(y_true, y_pred)
Explanation: The learning curve is oscillating by 0.02
Hence the model is not yet converged with respect to the size of the training set, for that tight precision.
Take home message:
To further improve the performance of the model we need more data!!
Summary
I cleaned and explored the data.
Missing '?' values were replaced with the mode for the corresponding column.
By visualizing scatter plots, I found that some features are correlated.
To get feature importances, I would need something other than the raw RF importances since some features are correlated (a sketch follows at the end of this section).
I found the classes are imbalanced by a factor of two.
To deal with the imbalanced class problem, I use a RF classifier and adopt the F1 metric.
RF classifier with default parameters performs well, with F1 score of 0.97
Tuning of the RF parameters did not improve the F1 score, hence the default parameters are close to optimal.
By plotting the learning curve, I found that the model could be further improved by increasing the training set size.
ROC curve
End of explanation |
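As mentioned in the summary, one option for importances when features are correlated is permutation importance on the held-out set; strongly correlated features can still share importance, so treat it as a rough guide. The sketch assumes model, X_test and y_test from the cells above and a scikit-learn version (>= 0.22) that provides sklearn.inspection.permutation_importance.
from sklearn.inspection import permutation_importance
perm = permutation_importance(model, X_test, y_test, scoring='f1', n_repeats=10, random_state=0)
# print features from most to least important
for name, imp in sorted(zip(X_test.columns, perm.importances_mean), key=lambda t: -t[1]):
    print('{:30s} {:.3f}'.format(name, imp))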
11,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to handle models from Python?
In this tutorial you will learn how to handle models from Python. This is done using the GammaLib classes GModels, GModel, GModelSpatial, GModelSpectral, and GModelTemporal. You can find all the information on the GammaLib classes in the Doxygen documentation.
In GammaLib there are some conventions about units. If not otherwise specified
Step1: Reading and updating an existing XML model
Opening an XML model and parsing its content
An existing XML model can be read into a model container (GModels class).
Step2: The easiest way to inspect a model container is by using the function print that will display in the terminal its entire content.
Step3: The model container contains two models, for a total of ten parameters. The first model is a sky model named Crab, the second a background model. Each model has a spatial, a spectral, and a temporal component. Each model component has parameters. For each parameter you have a value, an error (in our example it is always zero because no fit to the data was done yet), an allowed value range for fitting the parameter to data, and, whenever relevant, units. In the parenthesis you can se if the parameter is free or fixed, and some more technical information, i.e, if the model has an analytical gradient w.r.t. this parameter implemented (can be fit to data quickly under certain conditions), and if internally the parameter is scaled (parameters should be scaled so that value/scale is of order unity for a more efficient handling by the optimiser during a fit to the data). Furthemore each model can be restricted to a specific instrument or observation. In the example the only restriction is that the CTA background model is used only for CTA data (Instruments ...............
Step4: You can access specific model components through the spatial, spectral, and temporal methods, for example
Step5: You can also blindly parse models and model parameters. Below we get names and values for the spectral parameters of all the models in the container.
Step6: Finally there are convenience methods to compute for a model component the flux and energy flux over an energy interval (spectral components), and the flux within a circular region (spatial models).
Step7: Spatial models can be normalised to unity. This is always the case for analytical models including point sources. Therefore we do expect to have a spatial model flux of 1 for a region encompassing the source.
Updating a model
Once you have a model container you can update its parameters, change model components and attributes, delete existing models or append new ones.
In the first example we change the position of the Crab source by 0.1 deg toward positive R.A. and we free the spatial parameters, but we restrict the allowed range around the known position.
Step8: In the second example we change the spectral model from a power law to an exponentially-cutoff power law, keeping the power law parameters to the original values, and adding a cutoff at 50 TeV.
Step9: In the last example we remove the IRF background model, and append a cube background model from another container.
Step10: Save model changes to an XML file
You can use directly the model container in Python to pass it to ctools or cscripts, but you can also write it to an XML file.
Step11: Creating a new model
New models can be created directly in Python. First, create a new empty model container.
Step12: To create a new source we need to define at least spatial and spectral components (if not specified the temporal component is taken to be constants). Refer to the Doxygen documentation for finding out all the models available in GammaLib and how to use them.
We first create a Gaussian spatial component.
Step13: We create a power-law spectral component.
Step14: We create the source and append it to the model container.
Step15: We also add a CTA IRF background model with power-law spectral correction.
Step16: We inspect the model we just created.
Step17: The container can be used in Python or written to disk in an XML file. | Python Code:
import gammalib
Explanation: How to handle models from Python?
In this tutorial you will learn how to handle models from Python. This is done using the GammaLib classes GModels, GModel, GModelSpatial, GModelSpectral, and GModelTemporal. You can find all the information on the GammaLib classes in the Doxygen documentation.
In GammaLib there are some conventions about units. If not otherwise specified:
energies are in MeV
photon fluxes are in photons/cm2/s
differential photon fluxes are in photons/cm2/s/MeV
energy fluxes are in erg/cm2/s
To start we import the gammalib Python module.
End of explanation
container = gammalib.GModels('$CTOOLS/share/models/crab.xml')
Explanation: Reading and updating an existing XML model
Opening an XML model and parsing its content
An existing XML model can be read into a model container (GModels class).
End of explanation
print(container)
Explanation: The easiest way to inspect a model container is by using the function print that will display in the terminal its entire content.
End of explanation
print('value', container['Crab']['Prefactor'].value())
print('error', container['Crab']['Prefactor'].error())
print('min value', container['Crab']['Prefactor'].min())
print('max value', container['Crab']['Prefactor'].max())
print('is free?', container['Crab']['Prefactor'].is_free())
Explanation: The model container contains two models, for a total of ten parameters. The first model is a sky model named Crab, the second a background model. Each model has a spatial, a spectral, and a temporal component. Each model component has parameters. For each parameter you have a value, an error (in our example it is always zero because no fit to the data was done yet), an allowed value range for fitting the parameter to data, and, whenever relevant, units. In the parenthesis you can see if the parameter is free or fixed, and some more technical information, i.e., if the model has an analytical gradient w.r.t. this parameter implemented (can be fit to data quickly under certain conditions), and if internally the parameter is scaled (parameters should be scaled so that value/scale is of order unity for a more efficient handling by the optimiser during a fit to the data). Furthermore each model can be restricted to a specific instrument or observation. In the example the only restriction is that the CTA background model is used only for CTA data (Instruments ...............: CTA).
Model containers can be parsed using the model and parameter names if they are known to you. Below for example we directly access the properties of the Prefactor for the spectral model of the Crab source.
End of explanation
print(container['Crab'].spectral())
Explanation: You can access specific model components through the spatial, spectral, and temporal methods, for example:
End of explanation
for s in range(container.size()):
model = container[s]
print('###### Name:', model.name())
for i in range(model.spectral().size()):
param = model.spectral()[i]
print(param.name(),'=',param.value())
Explanation: You can also blindly parse models and model parameters. Below we get names and values for the spectral parameters of all the models in the container.
End of explanation
# calculate Crab spectral model flux in the 1-10 TeV energy range
# define energy bounds
emin = gammalib.GEnergy(1.,'TeV')
emax = gammalib.GEnergy(10.,'TeV')
# extract model from container'
crab = container['Crab']
# photon flux (cm-2 s-1)
flux = crab.spectral().flux(emin,emax)
print('Crab photon flux:', flux, 'cm-2 s-1')
# energy flux (cm-2 s-1)
eflux = crab.spectral().eflux(emin,emax)
print('Crab energy flux', eflux, 'erg cm-2 s-1')
# calculate Crab spatial model flux in 0.2 deg region centred on the source
# region centre = source position
centre = crab.spatial().dir()
# circular region
reg = gammalib.GSkyRegionCircle(centre,0.2)
# flux in the circular region (relative value)
flux = crab.spatial().flux(reg)
print('Crab flux in region:',flux,'(relative value)')
Explanation: Finally there are convenience methods to compute for a model component the flux and energy flux over an energy interval (spectral components), and the flux within a circular region (spatial models).
End of explanation
# change R.A.
crab['RA'].value(crab['RA'].value() + .1)
# free spatial parameters
crab['RA'].free()
crab['DEC'].free()
# restric range
crab['RA'].min(79.)
crab['RA'].max(89.)
crab['DEC'].min(21.5)
crab['DEC'].max(22.5)
# inspect model to verify changes
print(crab)
Explanation: Spatial models can be normalised to unity. This is always the case for analytical models including point sources. Therefore we do expect to have a spatial model flux of 1 for a region encompassing the source.
Updating a model
Once you have a model container you can update its parameters, change model components and attributes, delete existing models or append new ones.
In the first example we change the position of the Crab source by 0.1 deg toward positive R.A. and we free the spatial parameters, but we restrict the allowed range around the known position.
End of explanation
expplaw = gammalib.GModelSpectralExpPlaw()
expplaw['Prefactor'].value(crab['Prefactor'].value())
expplaw['Index'].value(crab['Index'].value())
expplaw['PivotEnergy'].value(crab['PivotEnergy'].value())
expplaw['CutoffEnergy'].value(50.e6) # value in MeV
crab.spectral(expplaw)
# inspect model to verify changes
print(crab)
Explanation: In the second example we change the spectral model from a power law to an exponentially-cutoff power law, keeping the power law parameters to the original values, and adding a cutoff at 50 TeV.
End of explanation
# remove existing background model
container.remove('CTABackgroundModel')
# open example background cube model
bkgcube_container = gammalib.GModels('$CTOOLS/share/models/bkg_cube.xml')
# append first model in new container to old container
container.append(bkgcube_container[0])
# inspect model to verify changes
print(container)
Explanation: In the last example we remove the IRF background model, and append a cube background model from another container.
End of explanation
container.save('my_crab.xml')
Explanation: Save model changes to an XML file
You can use directly the model container in Python to pass it to ctools or cscripts, but you can also write it to an XML file.
End of explanation
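To double check that the file was written correctly, it can simply be read back into a fresh container; the sketch below only uses calls already shown in this tutorial.
# reload the file we just wrote and confirm the content survived the round trip
check = gammalib.GModels('my_crab.xml')
print(check.size())
print(check['Crab'].spectral())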
my_container = gammalib.GModels()
Explanation: Creating a new model
New models can be created directly in Python. First, create a new empty model container.
End of explanation
# define source direction
srcdir = gammalib.GSkyDir()
# set R.A. and Dec
srcdir.radec_deg(54.,-19.)
# Gaussian spatial component
spatial = gammalib.GModelSpatialRadialGauss(srcdir,0.5) # centre and radius in deg
Explanation: To create a new source we need to define at least spatial and spectral components (if not specified, the temporal component is taken to be constant). Refer to the Doxygen documentation for finding out all the models available in GammaLib and how to use them.
We first create a Gaussian spatial component.
End of explanation
# define pivot energy
pivot = gammalib.GEnergy(1,'TeV')
# power law
spectral = gammalib.GModelSpectralPlaw(1.e-18,-2.5,pivot) # differential photon flux, index, pivot energy
Explanation: We create a power-law spectral component.
End of explanation
# create source
source = gammalib.GModelSky(spatial,spectral)
source.name('my_source')
# append to container
my_container.append(source)
Explanation: We create the source and append it to the model container.
End of explanation
# spectral correction
spectral = gammalib.GModelSpectralPlaw(1, 0, gammalib.GEnergy(1, 'TeV'))
# create background model
bkgmodel = gammalib.GCTAModelIrfBackground(spectral)
bkgmodel.name('Background')
bkgmodel.instruments('CTA')
# append to container
my_container.append(bkgmodel)
Explanation: We also add a CTA IRF background model with power-law spectral correction.
End of explanation
print(my_container)
Explanation: We inspect the model we just created.
End of explanation
my_container.save('mymodel.xml')
Explanation: The container can be used in Python or written to disk in an XML file.
End of explanation |
11,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
When I first ran this, my dataframes weren't "aligned".
So it's very important to check your datasets after every load.
The correspondence between dates and topics and numerical features is critical for training!
Step1: Well, that was much faster...
Step2: Well that seems more "balanced" at least.
And it's nice to have a continuous score.
Step3: Wait, why are we classifying with a regressor anyway?
Step4: Not so hot...
Balance the training again?
Get rid of some negatives?
Step5: So once again about 38% correlation is the best we can do...
Don't try this at home!
It never finished after 10 hours of running on my laptop | Python Code:
print(len(dates))
print(len(topics))
print(len(nums))
print(sum(nums.favorite_count >= 1))
sum(nums.index == dates.index) == len(dates)
sum(nums.index == topics.index) == len(dates)
sgd = SGDRegressor()
sgd
sgd = SGDRegressor().fit(topics.values, nums.favorite_count)
Explanation: When I first ran this, my dataframes weren't "aligned".
So it's very important to check your datasets after every load.
The correspondence between dates and topics and numerical features is critical for training!
End of explanation
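A stricter habit than eyeballing the sums is to fail loudly whenever the frames drift out of alignment; a small sketch using plain asserts on the frames from above.
# stop immediately if the three frames ever disagree on length or row order
assert len(dates) == len(topics) == len(nums), 'dataframes have different lengths'
assert (nums.index == dates.index).all(), 'nums and dates are not aligned'
assert (nums.index == topics.index).all(), 'nums and topics are not aligned'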
predicted_favorites = sgd.predict(topics.values)
predicted_favorites
np.sum(predicted_favorites >= 1)
Explanation: Well, that was much faster...
End of explanation
np.sum(nums.favorite_count.values >= 1)
from pug.nlp.stats import Confusion
results = pd.DataFrame()
results['predicted'] = pd.Series(predicted_favorites >= 1)
results['truth'] = pd.Series(nums.favorite_count >= 1)
conf = Confusion(results)
conf
results.predicted.corr(results.truth)
Explanation: Well that seems more "balanced" at least.
And it's nice to have a continuous score.
End of explanation
pd.Series(predicted_favorites).corr(nums.favorite_count)
Explanation: Wait, why are we classifying with a regressor anyway?
End of explanation
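Following up on that question, the classification analogue is a one-line change; a hedged sketch that was not part of the original run, assuming topics and nums as above (loss='log' is the logistic loss in older scikit-learn releases; newer ones spell it 'log_loss').
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss='log', random_state=0)
sgd_clf.fit(topics.values, nums.favorite_count >= 1)
# raw accuracy; with classes this skewed the confusion matrix above is more informative
print((sgd_clf.predict(topics.values) == (nums.favorite_count >= 1)).mean())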
pos = np.array(nums.favorite_count >= 1)
neg = ~pos
portion_pos = 2 * float(sum(pos)) / len(nums)
mask = ((np.random.binomial(1, portion_pos, size=len(nums)).astype(bool) & neg) | pos)
sgd = SGDRegressor().fit(topics[mask], nums.favorite_count[mask] >= 1)
print(portion_pos)
print(sum(mask))
print(sum(pos) * 2)
print(sum(neg))
len(nums)
results = pd.DataFrame()
# you have to adjust the threshold as you add more negatives
results['predicted'] = pd.Series(sgd.predict(topics.values) > .3)
results['truth'] = pd.Series(nums.favorite_count.values >= 1)
conf = Confusion(results)
conf
results.predicted.corr(results.truth)
conf.stats_dict
Explanation: Not so hot...
Balance the training again?
Get rid of some negatives?
End of explanation
svr = SVR().fit(topics.values, nums.favorite_count)
Explanation: So once again about 38% correlation is the best we can do...
Don't try this at home!
It never finished after 10 hours of running on my laptop
End of explanation |
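If a support-vector regressor is still wanted, two cheaper routes are a linear model via LinearSVR or fitting the kernel SVR on a random subsample; a hedged sketch, assuming topics and nums as above.
from sklearn.svm import LinearSVR, SVR
# LinearSVR scales far better with the number of rows than the kernelised SVR
lin_svr = LinearSVR(random_state=0).fit(topics.values, nums.favorite_count)
# alternatively, fit the kernel SVR on a small random subsample
idx = np.random.choice(len(nums), size=min(5000, len(nums)), replace=False)
svr_small = SVR().fit(topics.values[idx], nums.favorite_count.values[idx])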
11,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks
Step2: Example Model
Some useful utilities
Remember that our image data is initially N x H x W x C, where
Step3: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Layers, Activations, Loss functions
Step4: Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture
Step5: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step6: You should see the following from the run above
(64, 10)
True
GPU!
Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
Step7: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information
* Layers, Activations, Loss functions
Step8: Train the model
Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
Step9: Check the accuracy of the model.
Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
Step12: Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.
Things you should try
Step13: Describe what you did here
In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
Tell us here
Test Set - Do this only once
Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy. | Python Code:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Explanation: What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook)
What is it?
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray.
Why?
Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn TensorFlow?
TensorFlow has many excellent tutorials available, including those from Google themselves.
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
End of explanation
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1, [-1, 5408])
y_out = tf.matmul(h1_flat, W1) + b1
return y_out
y_out = simple_model(X, y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y, 10), logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
Explanation: Example Model
Some useful utilities
. Remember that our image data is initially N x H x W x C, where:
* N is the number of datapoints
* H is the height of each image in pixels
* W is the height of each image in pixels
* C is the number of channels (usually 3: R, G, B)
This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
TensorFlow Details
In TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.
End of explanation
def run_model(session, predict, loss_val, Xd, yd,
epochs=1, batch_size=64, print_every=100,
training=None, plot_losses=False):
# have tensorflow compute accuracy
correct_prediction = tf.equal(tf.argmax(predict, axis=1), y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# shuffle indicies
train_indicies = np.arange(Xd.shape[0])
np.random.shuffle(train_indicies)
training_now = training is not None
# setting up variables we want to compute (and optimizing)
# if we have a training function, add that to things we compute
variables = [mean_loss, correct_prediction, accuracy]
if training_now:
variables[-1] = training
# counter
iter_cnt = 0
for e in range(epochs):
# keep track of losses and accuracy
correct = 0
losses = []
# make sure we iterate over the dataset once
for i in range(Xd.shape[0] // batch_size):
# generate indicies for the batch
start_idx = (i * batch_size) % Xd.shape[0]
idx = train_indicies[start_idx:start_idx + batch_size]
# create a feed dictionary for this batch
feed_dict = {
X: Xd[idx,:],
y: yd[idx],
is_training: training_now,
}
# get batch size
actual_batch_size = yd[idx].shape[0]
# have tensorflow compute loss and correct predictions
# and (if given) perform a training step
loss, corr, _ = session.run(variables, feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss * actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct / Xd.shape[0]
total_loss = np.sum(losses) / Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess, y_out, mean_loss, X_train, y_train, 1, 64, 100, train_step, True)
print('Validation')
run_model(sess, y_out, mean_loss, X_val, y_val, 1, 64)
Explanation: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
BatchNorm: https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization
Training the model on one epoch
While we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting started guide.
Optionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide
You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below
End of explanation
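To see which device each op actually lands on, TensorFlow 1.x can log placements when the session is created; a small, optional sketch that is independent of the training cells below.
# log_device_placement prints the chosen device for every op when the graph runs
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())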
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets sets automatically based on batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[8192, 1024])
b1 = tf.get_variable("b1", shape=[1024])
W2 = tf.get_variable("W2", shape=[1024, 10])
b2 = tf.get_variable("b2", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1, 1, 1, 1], padding='SAME') + bconv1
h1 = tf.nn.relu(a1)
    h1_normalized = tf.layers.batch_normalization(h1, training=is_training)  # use batch statistics only while training
h1_pooled = tf.layers.MaxPooling2D(pool_size=2,
strides=2,
padding='VALID')(h1_normalized)
h1_flat = tf.reshape(h1_pooled, [-1, 8192])
a2 = tf.matmul(h1_flat, W1) + b1
h2 = tf.nn.relu(a2)
y_out = tf.matmul(h2, W2) + b2
return y_out
y_out = complex_model(X, y, is_training)
Explanation: Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
End of explanation
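A quick arithmetic check of the flattened size used above: with 'SAME' padding and stride 1 the convolution keeps the 32x32 spatial size, and the 2x2 max pool with stride 2 halves it, so the affine layer sees 16 * 16 * 32 inputs.
# sanity check for the first affine layer's input size (the 8192 in W1)
H, W_, filters = 32, 32, 32          # spatial size after the 'SAME', stride-1 conv
H_pool, W_pool = H // 2, W_ // 2     # after the 2x2 / stride-2 max pool
print(H_pool * W_pool * filters)     # 8192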
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32, 3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
Explanation: You should see the following from the run above
(64, 10)
True
GPU!
Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
End of explanation
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
total_loss = tf.losses.softmax_cross_entropy(tf.one_hot(y, 10), logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
optimizer = tf.train.RMSPropOptimizer(1e-3)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created provided above).
Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information
* Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
* Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
End of explanation
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess, y_out, mean_loss, X_train, y_train, 1, 64, 100, train_step)
Explanation: Train the model
Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
End of explanation
print('Validation')
run_model(sess, y_out, mean_loss, X_val, y_val, 1, 64)
Explanation: Check the accuracy of the model.
Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
End of explanation
# Feel free to play with this cell
def conv_relu_bn(X, filters_shape, stride, scope_name, regulizers):
Define block of layers
Conv2d - BN - Leaky-ReLU
Args:
X(tf.tensor): input tensor (N, H, W, F)
filters_shape(tuple): filtes shapes (H, W, filters_num)
stride(int): strides for convolution
scope_name(str): prefix for scope name
regulizers(list): list of tf.tensors regulizers
Returns:
out(tf.tensor): output tensor(N, H, W, filters_num)
_, H, W, F = X.shape
filter_H, filter_W, filters_num = filters_shape
conv_weights = tf.get_variable(scope_name + '/conv_weights', shape=[filter_H, filter_W, F, filters_num])
conv_bias = tf.get_variable(scope_name + '/bias_weights', shape=[filters_num])
activation = tf.nn.conv2d(X,
conv_weights,
strides=[1, stride, stride, 1],
padding='SAME')
activation += conv_bias
    activation_normalized = tf.layers.batch_normalization(activation)
    out = tf.nn.leaky_relu(activation_normalized, alpha=0.4)
regulizers.append(tf.nn.l2_loss(conv_weights))
return out
def resnext_block(X, branch_num, scope_name, regulizers):
Define ResNeXt network block
Args:
X(tf.tensor): 4-D tensor (N, H, W, F) where
N - batch size, F - filter number
branch_num(int): numbers of branches
scope_name(str): name of scope
regulizers(list): list of tf.tensors regulizers
Returns:
out(tf.tensor): 4-D tensor (N, H, W, F)
_, H, W, F = X.shape
    assert F % branch_num == 0, 'Number of filters in the block must be divisible by branch_num'
scope_name = scope_name + '/branch_%d/conv_%d'
branch_filt = F // branch_num
conv1 = [conv_relu_bn(X, (1, 1, branch_filt), 1, scope_name % (idx, 1), regulizers)
for idx in range(branch_num)]
conv2 = [conv_relu_bn(conv1[idx], (3, 3, branch_filt), 1, scope_name % (idx, 2), regulizers)
for idx in range(branch_num)]
conv3 = [conv_relu_bn(conv2[idx], (1, 1, F), 1, scope_name % (idx, 3), regulizers)
for idx in range(branch_num)]
combined_convs = tf.add_n(conv3)
out = combined_convs + X
return out
def my_model(X, y, is_training):
regulizers = []
x = X
x = conv_relu_bn(X, filters_shape=[3, 3, 64],
stride=1, scope_name='conv1',
regulizers=regulizers)
x = resnext_block(x, branch_num=4,
scope_name='resnext_1',
regulizers=regulizers)
x = conv_relu_bn(X, filters_shape=[3, 3, 128],
stride=2, scope_name='conv2',
regulizers=regulizers)
x = resnext_block(x, branch_num=4,
scope_name='resnext_2',
regulizers=regulizers)
x = conv_relu_bn(X, filters_shape=[3, 3, 256],
stride=2, scope_name='conv3',
regulizers=regulizers)
x = resnext_block(x, branch_num=4,
scope_name='resnext_3',
regulizers=regulizers)
x = conv_relu_bn(X, filters_shape=[3, 3, 512],
stride=2, scope_name='conv4',
regulizers=regulizers)
x = resnext_block(x, branch_num=4,
scope_name='resnext_4',
regulizers=regulizers)
x = tf.layers.average_pooling2d(x, pool_size=(x.shape[1], x.shape[2]), strides=1)
x = tf.reshape(x, [-1, 512])
W1 = tf.get_variable("W1", shape=[512, 10])
b1 = tf.get_variable("b1", shape=[10])
regulizers.append(tf.nn.l2_loss(W1))
out = tf.matmul(x, W1) + b1
return out, regulizers
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
REG_COEF = 5e-5
y_out, regulizers = my_model(X, y, is_training)
total_loss = tf.losses.softmax_cross_entropy(tf.one_hot(y, 10), logits=y_out)
mean_loss = tf.reduce_mean(total_loss + REG_COEF * tf.add_n(regulizers))
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.natural_exp_decay(learning_rate=3e-3,
global_step=global_step,
decay_steps=10000,
decay_rate=0.5)
optimizer = tf.train.AdamOptimizer(learning_rate)
train_step = optimizer.minimize(mean_loss, global_step=global_step)
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(10):
print('Training: ', end='')
run_model(sess, y_out, mean_loss, X_train, y_train,
epochs=5, batch_size=256, print_every=100000,
training=train_step, plot_losses=False)
print('Validation: ', end='')
run_model(sess, y_out, mean_loss, X_val, y_val, 1, 64)
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
Explanation: Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Use TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for how to use tf.layers.
Use Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay.
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).
Regularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all (a short random-search sketch follows these tips).
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.
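A minimal coarse random-search loop could look like the following. This is purely a sketch: train_and_eval is a hypothetical helper standing in for whatever short training/validation routine you build around run_model, and the sampling ranges are only starting points.
import numpy as np
results = {}
for trial in range(20):
    lr = 10 ** np.random.uniform(-4, -2)     # sample the learning rate on a log scale
    reg = 10 ** np.random.uniform(-5, -2)    # sample the regularization strength on a log scale
    results[(lr, reg)] = train_and_eval(lr, reg, num_iters=200)   # hypothetical helper
for (lr, reg), val_acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print('lr %.2e reg %.2e -> val acc %.3f' % (lr, reg, val_acc))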
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
Have fun and happy training!
End of explanation
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
Explanation: Describe what you did here
In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
Tell us here
Test Set - Do this only once
Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation |
11,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boston Housing Prediction
Author
Step1: Loading the boston dataset - Train and Test
Step2: Understanding the distribution and relationship of the data
Describing the data to understand the mean and std for all of the features
Step3: Plotting the target price value
Step4: From the above plot we can see that house prices average around
Step5: Most correlated features
Based on the above correlation chart, we would like to consider the features which are most closely related to the target value. The features in consideration are
Step6: Now let us plot a distribution chart of the selected features. This would help us understand the distribution of the data a little better.
Step7: From the above dist plots we can conclude that 3 out of 4 features have skewed distributions. The feature
Step8: After applying a log transform to the features above, it seems that only LSTAT responds well and becomes approximately normally distributed; the transform has no significant normalizing effect on PTRATIO and INDUS.
Now let us examine the correlation between the features and the MEDV feature
Step9: Building the data with the selected features
Step10: Splitting the train data for train and cross-validation datasets
Step11: Model Fitting
Using DecisionTreeRegressor for finding the maximum score
Step12: Selecting the max depth
Step13: Training the model with max_depth
Step14: Finding the importance of feature in the regression algorithm
Step15: We can conclude that rm and lstat are the two most important factors for house prices in the Boston area.
Visualizing the decision made on the dataset
Step16: Create Final Submission (Kaggle only)
Plotting the prediction against the TEST data
Step17: Generate the test dataframe as csv output | Python Code:
import pandas as pd
import numpy as np
Explanation: Boston Housing Prediction
Author: Rishu Shrivastava, Babu Sivaprakasam
Link: https://www.kaggle.com/c/boston-housing
Last Update: 02 Apr 2018
Importing libraries
End of explanation
data_path = "C:/Users/Rishu/Desktop/dATA/boston/"
boston_data=pd.read_csv(data_path+'train.csv')
boston_data.info()
boston_data.head()
boston_data_test=pd.read_csv(data_path+'test.csv')
boston_data_test.head()
Explanation: Loading the boston dataset - Train and Test
End of explanation
boston_data.describe()
Explanation: Understanding the distribution and relationship of the data
Describing the data to understand the mean and std for all of the features
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
sns.distplot(boston_data['medv'], rug=True, color="b")
plt.axvline(boston_data['medv'].mean(), color="b", linestyle='solid', linewidth=2)
plt.axvline(boston_data['medv'].median(), color="b", linestyle='dashed', linewidth=2)
plt.show()
print ("Mean Price value :",boston_data['medv'].mean())
print ("Standard Deviation:",boston_data['medv'].std())
Explanation: Plotting the target price value: "medv"
End of explanation
ax = plt.subplots(figsize = (14,7))
sns.heatmap(boston_data[['crim','indus','nox','rm','dis','rad','tax','ptratio','black','lstat','medv']].corr(),
linecolor = 'white', square=True, annot=True)
plt.show()
sns.jointplot(x='lstat', y='medv', data=boston_data, color="r", kind="reg")
plt.show()
Explanation: From the above plot we can see that house prices average around 22.76 (in $1000s).
So any ML algorithm that performs poorly will end up simply predicting this mean value.
Understanding the features and their relation to the Boston house price "medv"
From the data set, let us consider the following features (only the prime contenders out of 15 features):
1. crim (per capita crime rate by town):
An area with a higher crime rate is expected to have lower prices than a well-established area.
2. indus (proportion of non-retail business acres per town):
Non-retail business can be an important factor in house prices as it provides convenience to home owners and residents. But acres of non-retail business land alone doesn't give much insight into house prices, so we can give this feature low priority, subject to its correlation with medv.
3. nox (nitrogen oxides concentration):
Nitrogen oxide concentration can be a major factor in housing prices, as buyers generally prefer houses in areas with lower pollution.
4. rm (average number of rooms per dwelling):
The higher the number of rooms, the higher the price.
5. dis (weighted mean of distances to five Boston employment centres):
The closer an area is to the employment centres, the more interest it is expected to attract, since it reduces the commute. I am not sure how strong this effect is in American cities, but the assumption is that the closer to offices, the higher the house prices.
6. ptratio (pupil-teacher ratio by town):
Though I am not 100% sure about the relation between the pupil-teacher ratio and prices, I am assuming that the higher the ratio, the more families send their kids to school there, in turn making the prices higher. We can give this feature low priority as it may not be closely related to house pricing, subject to its correlation with medv.
7. black (the proportion of blacks by town):
Subject to correlation with the target data
8. lstat (lower status of the population) :
People earning lower wages are not expected to live in high-priced houses. Hence the lower the lstat, the higher the housing prices.
Finding correlation with target and the selected features
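As a quick numeric check of the reasoning above, the pairwise correlations with the target can be printed directly (a sketch; the column names follow the Kaggle train.csv loaded earlier):
print(boston_data.corr()['medv'].drop('medv').sort_values())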
End of explanation
# Pair plot of the features
sns.pairplot(boston_data[['indus','rm','ptratio','lstat','medv']])
plt.show()
Explanation: Most correlated features
Based on the above correlation chart, we would like to consider the features which are most closely related to the target value. The features in consideration are:
1. indus
2. rm : Highest positive correlation with medv (coeff: 0.69)
3. ptratio
4. lstat : Highly negative correlated feature with coefficient of -0.74
Now let us visualize the distribution of the 4 selected features in a pairplot
End of explanation
fig = plt.figure(figsize=(14,7))
plt.subplot(2,2,1)
sns.distplot(boston_data['indus'], rug=True, color="b")
plt.axvline(boston_data['indus'].mean(), color="b", linestyle='solid', linewidth=2)
plt.axvline(boston_data['indus'].median(), color="b", linestyle='dashed', linewidth=2)
plt.subplot(2,2,2)
sns.distplot(boston_data['rm'], rug=True, color="r")
plt.axvline(boston_data['rm'].mean(), color="r", linestyle='solid', linewidth=2)
plt.axvline(boston_data['rm'].median(), color="r", linestyle='dashed', linewidth=2)
plt.subplot(2,2,3)
sns.distplot(boston_data['ptratio'], rug=True, color="g")
plt.axvline(boston_data['ptratio'].mean(), color="g", linestyle='solid', linewidth=2)
plt.axvline(boston_data['ptratio'].median(), color="g", linestyle='dashed', linewidth=2)
plt.subplot(2,2,4)
sns.distplot(boston_data['lstat'], rug=True, color="y")
plt.axvline(boston_data['lstat'].mean(), color="y", linestyle='solid', linewidth=2)
plt.axvline(boston_data['lstat'].median(), color="y", linestyle='dashed', linewidth=2)
plt.show()
Explanation: Now let us plot a distribution chart of the selected features. This would help us understand the distribution of the data a little better.
End of explanation
fig = plt.figure(figsize=(14,7))
plt.subplot(2,2,1)
sns.distplot(np.log(boston_data['indus']), rug=True, color="b")
plt.axvline(np.log(boston_data['indus']).mean(), color="b", linestyle='solid', linewidth=2)
plt.axvline(np.log(boston_data['indus']).median(), color="b", linestyle='dashed', linewidth=2)
plt.subplot(2,2,2)
sns.distplot(boston_data['rm'], rug=True, color="r")
plt.axvline(boston_data['rm'].mean(), color="r", linestyle='solid', linewidth=2)
plt.axvline(boston_data['rm'].median(), color="r", linestyle='dashed', linewidth=2)
plt.subplot(2,2,3)
sns.distplot(np.log(boston_data['ptratio']), rug=True, color="g")
plt.axvline(np.log(boston_data['ptratio']).mean(), color="g", linestyle='solid', linewidth=2)
plt.axvline(np.log(boston_data['ptratio']).median(), color="g", linestyle='dashed', linewidth=2)
plt.subplot(2,2,4)
sns.distplot(np.log(boston_data['lstat']), rug=True, color="y")
plt.axvline(np.log(boston_data['lstat']).mean(), color="y", linestyle='solid', linewidth=2)
plt.axvline(np.log(boston_data['lstat']).median(), color="y", linestyle='dashed', linewidth=2)
plt.show()
Explanation: From the above dist plots we can conclude that 3 out of 4 features have skewed distributions. The feature RM is the only one that already looks roughly normally distributed.
PTRATIO is negatively skewed;
LSTAT and INDUS are positively skewed
Now we need to normalize these data sets to bring the data into a normal distribution.
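One way to quantify this before and after a log transform is to compare skewness directly, for example with pandas' built-in skew() (a sketch):
for col in ['indus', 'ptratio', 'lstat']:
    print(col, round(boston_data[col].skew(), 2), '->', round(np.log(boston_data[col]).skew(), 2))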
End of explanation
fig = plt.figure(figsize=(14,7))
plt.subplot(2,2,1)
x = np.log(boston_data[['indus']])
sns.regplot(x=x, y="medv", data=boston_data, color="b")
plt.subplot(2,2,2)
x2 = boston_data[['rm']]
sns.regplot(x=x2, y="medv", data=boston_data, color="r")
plt.subplot(2,2,3)
x3 = np.log(boston_data[['ptratio']])
sns.regplot(x=x3, y="medv", data=boston_data, color="g")
plt.subplot(2,2,4)
x4 = np.log(boston_data[['lstat']])
sns.regplot(x=x4, y="medv", data=boston_data, color="y")
plt.show()
Explanation: After applying a log transform to the features above, it seems that only LSTAT responds well and becomes approximately normally distributed; the transform has no significant normalizing effect on PTRATIO and INDUS.
Now let us examine the correlation between the features and the MEDV feature:
End of explanation
boston_data['lstat_log']=np.log(boston_data['lstat'])
boston_data_test['lstat_log_test']=np.log(boston_data_test['lstat'])
#boston_data['ptratio_log']=np.log(boston_data['ptratio'])
#boston_data_test['ptratio_log_test']=np.log(boston_data_test['ptratio'])
#boston_data['indus_log']=np.log(boston_data['indus'])
#boston_data_test['indus_log_test']=np.log(boston_data_test['indus'])
X = boston_data[['rm','lstat_log']]
X_bd_test=boston_data_test[['rm','lstat_log_test']]
y = boston_data[['medv']]
Explanation: Building the data with the selected features
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(X, y, random_state=0)
print(len(X_train), len(y_train), len(X_cv), len(y_cv))
Explanation: Splitting the train data for train and cross-validation datasets
End of explanation
from sklearn.tree import DecisionTreeRegressor
max_score = 0
max_depth = 0
def decision_tree(j):
dtr = DecisionTreeRegressor(random_state=0,max_depth=j)
return dtr.fit(X_train, y_train)
for i in range(1,11):
_dtr = decision_tree(i)
clf_score = _dtr.score(X_cv,y_cv)
print("Decision Tree Regressor at max_depth:",i," scored: ",clf_score)
if clf_score>max_score:
max_score = clf_score
max_depth = i
Explanation: Model Fitting
Using DecisionTreeRegressor for finding the maximum score
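An alternative to the manual train/validation loop above is k-fold cross-validation over max_depth. This is only a sketch of that option, not the approach used in the rest of this notebook:
from sklearn.model_selection import cross_val_score
for depth in range(1, 11):
    scores = cross_val_score(DecisionTreeRegressor(random_state=0, max_depth=depth), X, y, cv=5)
    print(depth, scores.mean())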
End of explanation
print("The maximum score is achieved at a depth of : ",max_depth," with score of :",max_score)
Explanation: Selecting the max depth
End of explanation
dtr_clf = decision_tree(max_depth)
Explanation: Training the model with max_depth
End of explanation
sns.barplot(X_train.columns, dtr_clf.feature_importances_)
plt.show()
Explanation: Finding the importance of feature in the regression algorithm
End of explanation
from IPython.display import Image
import pydotplus
from sklearn.externals.six import StringIO
from sklearn import tree
dot_data = StringIO()
tree.export_graphviz(dtr_clf, out_file=dot_data,
feature_names=X_train.columns,
class_names="medv",
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
Explanation: We can conclude that rm and lstat are the two most important factors for house prices in the Boston area.
Visualizing the decision made on the dataset
End of explanation
bd_predict = dtr_clf.predict(X_bd_test)
plt.scatter(boston_data_test['ID'],bd_predict)
plt.show()
print ("Mean Price value before modelling:",boston_data['medv'].mean())
print ("Mean Price value after modelling :",bd_predict.mean())
Explanation: Create Final Submission (Kaggle only)
Plotting the prediction against the TEST data
End of explanation
submission = pd.DataFrame({
"ID": boston_data_test['ID'],
"medv": bd_predict
})
submission.to_csv(data_path+'output.csv', index=False)
Explanation: Generate the test dataframe as csv output
End of explanation |
11,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning. Use this dataset to answer the questions below.
Step1: Question 0 (Example)
What is the first country in df?
This function should return a Series.
Step2: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
Step3: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
Step4: Question 3
Which country has the biggest difference between their summer and winter gold medal counts relative to their total gold medal count? Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
Step5: Question 4
Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal counts for 3 points, silver medals for 2 points, and bronze medals for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
Step6: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint
Step7: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)?
This function should return a list of string values.
Step8: Question 7
Which county has had the largest change in population within the five year period (hint
Step9: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index). | Python Code:
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning. Use this dataset to answer the questions below.
End of explanation
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
Explanation: Question 0 (Example)
What is the first country in df?
This function should return a Series.
End of explanation
def answer_one():
return df['Gold'].idxmax()
answer_one()
Explanation: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
End of explanation
def answer_two():
return (df['Gold'] - df['Gold.1']).idxmax()
answer_two()
Explanation: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
End of explanation
def answer_three():
tmp_df = df[(df['Gold.1'] > 0) & (df['Gold'] > 0)]
return ((tmp_df['Gold'] - tmp_df['Gold.1']) / ((tmp_df['Gold'] + tmp_df['Gold.1']))).idxmax()
answer_three()
Explanation: Question 3
Which country has the biggest difference between their summer and winter gold medal counts relative to their total gold medal count? Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
End of explanation
def answer_four():
Points = 3*df['Gold.2'] + 2*df['Silver.2'] + 1*df['Bronze.2']
return Points
answer_four()
Explanation: Question 4
Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal counts for 3 points, silver medals for 2 points, and bronze medals for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
End of explanation
census_df = pd.read_csv('census.csv')
census_df.columns
def answer_five():
return census_df.groupby(['STNAME']).size().idxmax()
answer_five()
Explanation: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)
This function should return a single string value.
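The hint about SUMLEV matters because census.csv mixes state-level rows (SUMLEV == 40) with county-level rows (SUMLEV == 50). The grouped count above still gives the same argmax here, since each state contributes exactly one state-level row, but a variant that filters explicitly would look like this (a sketch, with an illustrative function name):
def answer_five_counties_only():
    counties = census_df[census_df['SUMLEV'] == 50]
    return counties.groupby('STNAME').size().idxmax()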
End of explanation
def answer_six():
t = census_df[census_df['SUMLEV'] == 50]
t = t.sort_values(by=['STNAME', 'CENSUS2010POP'], ascending=False).groupby(['STNAME']).head(3)
return list(t.groupby(['STNAME']).sum().sort_values(by='CENSUS2010POP', ascending=False).head(3).index)
answer_six()
Explanation: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)?
This function should return a list of string values.
End of explanation
def answer_seven():
tmp_df = census_df[census_df['SUMLEV'] == 50]
tmp_df['2011'] = tmp_df['POPESTIMATE2011'] - tmp_df['POPESTIMATE2010']
tmp_df['2012'] = tmp_df['POPESTIMATE2012'] - tmp_df['POPESTIMATE2011']
tmp_df['2013'] = tmp_df['POPESTIMATE2013'] - tmp_df['POPESTIMATE2012']
tmp_df['2014'] = tmp_df['POPESTIMATE2014'] - tmp_df['POPESTIMATE2013']
tmp_df['2015'] = tmp_df['POPESTIMATE2015'] - tmp_df['POPESTIMATE2014']
tmp_df['max'] = tmp_df[['2011', '2012', '2013', '2014', '2015']].max(axis=1)
return tmp_df.sort_values(by='max', ascending=False).iloc[0].CTYNAME
answer_seven()
Explanation: Question 7
Which county has had the largest change in population within the five year period (hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all five columns)?
This function should return a single string value.
End of explanation
def answer_eight():
result = census_df[(census_df["REGION"].isin([1,2])) & (census_df['CTYNAME'].str.startswith('Washington')) & (census_df['POPESTIMATE2015'] > census_df['POPESTIMATE2014'])]
return result[['STNAME', 'CTYNAME']]
answer_eight()
Explanation: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).
End of explanation |
11,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evolutionary game theory - solutions
Assume the frequency dependent selection model for a population with two types of individuals
Step1: B. $f_1(x)=x_1x_2 - x_2\qquad f_2(x)=x_2 - x_1 + 1/2$
$f_1(x)=f_2(x)\Rightarrow x_1x_2 - x_2 = x_2 - x_1 + 1/2$ setting $x=x_1$ so that $1 - x = x_2$ gives | Python Code:
import sympy as sym
x_1 = sym.symbols("x_1")
sym.solveset(3 * x_1 - 2 * (1 - x_1), x_1)
Explanation: Evolutionary game theory - solutions
Assume the frequency dependent selection model for a population with two types of individuals: $x=(x_1, x_2)$ such that $x_1 + x_2 = 1$. Obtain all the stable distributions for the system defined by the following fitness functions:
For all of the functions in question, $x=(0, 1)$ and $x=(1, 0)$ are equilibria. There is a 3rd potential equilibrium given by $f_1(x) = f_2(x)$. This is bookwork: https://vknight.org/gt/chapters/11/#Frequency-dependent-selection
$f_1(x)=x_1 - x_2\qquad f_2(x)=x_2 - 2 x_1$
$f_1(x)=f_2(x)\Rightarrow x_1 - x_2 = x_2 - 2x_1 \Rightarrow 3x_1 = 2x_2$, which gives (using the fact that $x_1 + x_2=1$) the single solution $(x_1, x_2)=(2/5, 3/5)$.
End of explanation
x = sym.symbols("x", positive=True)
res = sym.solveset(- x ** 2 + 4 * x - sym.S(5) / 2, x)
res
for sol in list(res):
print(sol, float(sol), float(1 - sol))
Explanation: B. $f_1(x)=x_1x_2 - x_2\qquad f_2(x)=x_2 - x_1 + 1/2$
$f_1(x)=f_2(x)\Rightarrow x_1x_2 - x_2 = x_2 - x_1 + 1/2$ setting $x=x_1$ so that $1 - x = x_2$ gives: $x - x ^ 2 - 1 + x = 1 - x - x + 1/2$ which corresponds to:
$$-x ^ 2 + 4 x - 5/2=0$$
This has solution $x=2 \pm \sqrt{6}/2$, thus $(x_1, x_2) = (2 - \sqrt{6}/2, -1 + \sqrt{6}/2)$ is the only set of solutions for which $1 \geq x_1 \geq 0$ and $1\geq x_2 \geq 0$.
End of explanation |
11,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My installation instructions
Step1: Import Policy, RL agent, ...
Step3: Define a Callback Function
Step5: Create and wrap the environment
Step6: Define and train the PPO agent
Step9: Plotting helpers
Stable Baselines has some built-in plotting helpers, which you can find in stable_baselines.results_plotter. However, to show how to do it yourself, we are going to use custom plotting functions.
Step10: Make a GIF of a Trained Agent
C.f. https
Step11: Evaluate the trained agent | Python Code:
import stable_baselines
stable_baselines.__version__
Explanation: My installation instructions: https://gitlab.com/-/snippets/2057703
Source: https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/master/monitor_training.ipynb
See also: https://stable-baselines.readthedocs.io/en/master/guide/examples.html#try-it-online-with-colab-notebooks
Stable Baselines, a Fork of OpenAI Baselines - Monitor Training and Plotting
Github Repo: https://github.com/hill-a/stable-baselines
Medium article: https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82
RL Baselines Zoo is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines.
It also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.
Documentation is available online: https://stable-baselines.readthedocs.io/
Install Dependencies and Stable Baselines Using Pip
List of full dependencies can be found in the README.
sudo apt-get update && sudo apt-get install cmake libopenmpi-dev zlib1g-dev
pip install stable-baselines[mpi]
C.f. https://stackoverflow.com/a/61318224
Google Colab
Uncomment the following cells if you use Google Colab
Import Baselines
End of explanation
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2
from stable_baselines.bench import Monitor
from stable_baselines.common.callbacks import BaseCallback
from stable_baselines.results_plotter import load_results, ts2xy
from stable_baselines import results_plotter
sns.set_context("talk")
Explanation: Import Policy, RL agent, ...
End of explanation
class SaveOnBestTrainingRewardCallback(BaseCallback):
Callback for saving a model (the check is done every ``check_freq`` steps)
based on the training reward (in practice, we recommend using ``EvalCallback``).
:param check_freq: (int)
:param log_dir: (str) Path to the folder where the model will be saved.
It must contain the file created by the ``Monitor`` wrapper.
:param verbose: (int)
def __init__(self, check_freq: int, log_dir: str, verbose=1):
super(SaveOnBestTrainingRewardCallback, self).__init__(verbose)
self.check_freq = check_freq
self.log_dir = log_dir
self.save_path = os.path.join(log_dir, 'best_model')
self.best_mean_reward = -np.inf
def _init_callback(self) -> None:
# Create folder if needed
if self.save_path is not None:
os.makedirs(self.save_path, exist_ok=True)
def _on_step(self) -> bool:
if self.n_calls % self.check_freq == 0:
# Retrieve training reward
x, y = ts2xy(load_results(self.log_dir), 'timesteps')
if len(x) > 0:
# Mean training reward over the last 100 episodes
mean_reward = np.mean(y[-100:])
if self.verbose > 0:
print("Num timesteps: {}".format(self.num_timesteps))
print("Best mean reward: {:.2f} - Last mean reward per episode: {:.2f}".format(self.best_mean_reward, mean_reward))
# New best model, you could save the agent here
if mean_reward > self.best_mean_reward:
self.best_mean_reward = mean_reward
# Example for saving best model
if self.verbose > 0:
print("Saving new best model to {}".format(self.save_path))
self.model.save(self.save_path)
return True
Explanation: Define a Callback Function
End of explanation
from gym import spaces
import random
# https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html
# https://towardsdatascience.com/creating-a-custom-openai-gym-environment-for-stock-trading-be532be3910e
POSITION_MIN = -1000.
POSITION_MAX = 1000.
VELOCITY_MIN = -100.
VELOCITY_MAX = 100.
ACTION_MIN = -2.
ACTION_MAX = 2.
DT = 0.1
MASS = 0.1
MAX_STEPS = 1000
class PointMassEnv(gym.Env):
Custom Environment that follows gym interface
metadata = {'render.modes': ['human']}
def __init__(self, mass=MASS, dt=DT):
super(PointMassEnv, self).__init__() # Define action and observation space
self.mass = mass
self.dt = dt
self.position = None
self.velocity = None
self.current_step = None
self.viewer = None
# Actions: force
self.action_space = spaces.Box(low=ACTION_MIN, high=ACTION_MAX, shape=(1,), dtype=np.float32)
# State: (position, velocity)
self.observation_space = spaces.Box(low=np.array([VELOCITY_MIN, POSITION_MIN]), high=np.array([VELOCITY_MAX, POSITION_MAX]), dtype=np.float32)
def _reward(self, position):
# https://en.wikipedia.org/wiki/Gaussian_function
a = 1. # The height of the curve's peak
b = 0. # The position of the center of the peak
c = 100. # The width of the "bell"
x = position
return a * np.exp(-(x - b)**2/(2.*c**2))
def step(self, action):
# Execute one time step within the environment
self.current_step += 1
# Point-mass dynamics: force -> acceleration -> velocity -> position
force = action[0]
acceleration = force / self.mass
self.velocity += acceleration * self.dt
self.position += self.velocity * self.dt
#print("force: {}, acceleration: {}, velocity: {}, position: {}".format(force, acceleration, self.velocity, self.position))
obs = np.array([self.velocity, self.position])
# Compute reward and done
reward = self._reward(self.position)
done = self.current_step > MAX_STEPS
return obs, reward, done, {}
def reset(self):
# Reset the state of the environment to an initial state
self.position = random.uniform(POSITION_MIN, POSITION_MAX)
self.velocity = 0.
self.current_step = 0
return np.array([self.velocity, self.position])
# def render(self, mode='human', close=False):
# # Render the environment to the screen
# print(self.velocity, self.position)
#
#
# def close(self):
# pass
def render(self, mode='human'):
screen_width = 600
screen_height = 400
world_width = POSITION_MAX - POSITION_MIN
scale = screen_width/world_width
carty = 100 # TOP OF CART
cartwidth = 50.0
cartheight = 30.0
if self.viewer is None:
from gym.envs.classic_control import rendering
self.viewer = rendering.Viewer(screen_width, screen_height)
l, r, t, b = -cartwidth / 2, cartwidth / 2, cartheight / 2, -cartheight / 2
axleoffset = cartheight / 4.0
cart = rendering.FilledPolygon([(l, b), (l, t), (r, t), (r, b)])
self.carttrans = rendering.Transform()
cart.add_attr(self.carttrans)
self.viewer.add_geom(cart)
self.track = rendering.Line((0, carty), (screen_width, carty))
self.track.set_color(0, 0, 0)
self.viewer.add_geom(self.track)
if self.position is None:
return None
x = self.position
cartx = x * scale + screen_width / 2.0 # MIDDLE OF CART
self.carttrans.set_translation(cartx, carty)
return self.viewer.render(return_rgb_array=mode == 'rgb_array')
def close(self):
if self.viewer:
self.viewer.close()
self.viewer = None
env = PointMassEnv()
env.reset()
#env = PointMassEnv()
#env.reset()
#a = env.action_space.sample()
#s = env.step(a)
#a, s
##env = gym.make('CartPole-v1')
#env = gym.make('MountainCarContinuous-v0')
#env.reset()
#a = env.action_space.sample()
#s = env.step(a)
#a, s
#env.action_space.sample()
y = []
T = range(1000)
for t in T:
action = np.array([2.])
next_velocity, next_pos = env.step(action)[0]
y.append(next_pos)
plt.plot(list(T), y);
#x = np.arange(-100, 100, 0.1)
#y = np.array([env._reward(_x) for _x in x])
#plt.plot(x, y);
# Create log dir
log_dir = "/tmp/gym/"
os.makedirs(log_dir, exist_ok=True)
# Create and wrap the environment
#env = gym.make('CartPole-v1')
env = PointMassEnv()
# Logs will be saved in log_dir/monitor.csv
env = Monitor(env, log_dir)
env = DummyVecEnv([lambda: env]) # PPO2 requires a vectorized environment to run
Explanation: Create and wrap the environment
End of explanation
# Create the callback: check every 1000 steps
callback = SaveOnBestTrainingRewardCallback(check_freq=1000, log_dir=log_dir)
%%time
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=100000, callback=callback)
Explanation: Define and train the PPO agent
End of explanation
# Helper from the library
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "PPO Point Mass")
def moving_average(values, window):
Smooth values by doing a moving average
:param values: (numpy array)
:param window: (int)
:return: (numpy array)
weights = np.repeat(1.0, window) / window
return np.convolve(values, weights, 'valid')
def plot_results(log_folder, title='Learning Curve'):
plot the results
:param log_folder: (str) the save location of the results to plot
:param title: (str) the title of the task to plot
x, y = ts2xy(load_results(log_folder), 'timesteps')
#y = moving_average(y, window=50)
## Truncate x
#x = x[len(x) - len(y):]
fig = plt.figure(title, figsize=(16,6))
plt.plot(x, y)
plt.xlabel('Number of Timesteps')
plt.ylabel('Rewards')
plt.title(title + " Smoothed")
plt.show()
plot_results(log_dir)
Explanation: Plotting helpers
Stable Baselines has some built-in plotting helpers, which you can find in stable_baselines.results_plotter. However, to show how to do it yourself, we are going to use custom plotting functions.
End of explanation
import imageio
images = []
obs = model.env.reset()
img = model.env.render(mode='rgb_array')
for i in range(350):
images.append(img)
action, _ = model.predict(obs)
obs, _, _ ,_ = model.env.step(action)
img = model.env.render(mode='rgb_array')
imageio.mimsave('ppo_point_mass_env.gif', [np.array(img) for i, img in enumerate(images) if i%2 == 0], fps=29)
import IPython
from IPython.display import Image
Image(open('ppo_point_mass_env.gif','rb').read()) # https://stackoverflow.com/questions/61110188/how-to-display-a-gif-in-jupyter-notebook-using-google-colab
Explanation: Make a GIF of a Trained Agent
C.f. https://stable-baselines.readthedocs.io/en/master/guide/examples.html#bonus-make-a-gif-of-a-trained-agent
End of explanation
env._max_episode_steps = 1000
reward_list = []
NUM_EPISODES = 100
for episode_index in range(NUM_EPISODES):
reward_sum = 0
obs = env.reset()
done = False
while not done:
action, _states = model.predict(obs)
obs, reward, done, info = env.step(action)
reward_sum += reward
#env.render() # Cannot render on Google Colab
reward_list.append(reward_sum)
print("Mean reward:", sum(reward_list) / NUM_EPISODES)
env.close()
Explanation: Evaluate the trained agent
End of explanation |
11,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Arrays and Vectorization
Frequently, matrices and vectors are needed for computation and are a convenient way to store and access data. Vectors are more commonly many rows with a single column. A significant amount of work has been done to make computers very fast at doing matrix math, and while the tradeoff is commonly framed as 'more memory for faster calculation', there is typically enough memory in contemporary computation devices to process chunks of matrices.
In Python's NumPy, vectors and matrices are referred to as arrays
Step1: Arrays versus lists
While both data types hold a series of discrete information, arrays are stored more efficiently in memory and have significantly higher performance than Python lists. They also bring with them a host of properties and syntax that makes them more efficient, especially for numeric operations.
Step2: If the %timeit output says "10 loops, best of 3
Step3: Index arrays using square brackets, starting from zero and specifying row, column
Step4: Arrays are duck typed just like Python variables, that is to say that Python will try to determine what kind of variable it should be based on how it's used.
Numpy arrays are all the same type of variable. To check the data type (dtype) enter
Step5: Different variable types use different amounts of memory and can have an effect on performance for very large arrays.
Changing the type of array is possible via
Step6: We can replace values in an array
Step7: Deleting values from an array is possible, but due to the way they're stored in memory, it makes sense to keep the array structure. Often, a 'nan' is used (not a number) or some nonsensical value is used, i.e.
Step8: Fancy ways of indexing
Slicing Arrays
Step9: Logical Indexing
We can specify only the elements we want by using an array of True/False values
Step10: Using the isnan function in numpy
Step11: Building arrays from other arrays (appending)
We can build arrays from other arrays by stacking them horizontally or vertically
Step12: We can bring these dimensions back down to one via flatten
Step13: Caution
Step14: However, if the dimensions don't match, it won't work
Step15: Arrays as Python objects
Python can be used as an object oriented language, and numpy arrays have lots of properties. There are many functions we can use as numpy.<function>(<array>) and array.<function>
For example, the transpose above
Step16: One of the most frequently used properties of arrays is the dimension
Step17: Sorting | Python Code:
# Python imports
import numpy as np
Explanation: Numpy Arrays and Vectorization
Frequently, matrices and vectors are needed for computation and are a convenient way to store and access data. Vectors are more commonly many rows with a single column. A significant amount of work has been done to make computers very fast at doing matrix math, and while the tradeoff is commonly framed as 'more memory for faster calculation', there is typically enough memory in contemporary computation devices to process chunks of matrices.
In Python's NumPy, vectors and matrices are referred to as arrays: a constant-sized collection of elements (of the same type - integer, floating point number, string of characters, etc.).
Underneath, Python arrays use C for greater efficiency.
Note that this is different from the python list - lists are a python datatype, whereas arrays are objects that are made available via the python package numpy.
Array restrictions:
- You can't append things to an array (i.e. you can't make it bigger without creating an entirely new array)
- You can only put things of the same type into an array
The array is the basis of all (fast) scientific computing in Python.
We need to have a solid foundation of what an array is, how to use it, and what it can do.
By the end of this file you should have seen simple examples of:
1. Arrays are faster than lists!
2. Create an array
3. Different types of arrays
4. Creating and accessing (indexing) arrays
5. Building arrays from other arrays (appending)
6. Operations on arrays of different sizes (broadcasting)
7. Arrays as Python objects
Further reading:
https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html
End of explanation
l = 20000
test_list = list(range(l))
test_array = np.arange(l)
print(type(test_list))
print(type(test_array))
print(test_list[:300]) # Print the first 300 elements
# (more on indexing in a bit):
print(test_array)
%timeit [np.sqrt(i) for i in test_list]
%timeit [np.sqrt(test_array)]
Explanation: Arrays versus lists
While both data types hold a series of discrete information, arrays are stored more efficiently in memory and have significantly higher performance than Python lists. They also bring with them a host of properties and syntax that makes them more efficient, especially for numeric operations.
End of explanation
test_array = np.array([[1,2,3,4], [6,7,8,9]])
print(test_array)
Explanation: If the %timeit output says "10 loops, best of 3: [time]", it means the statement was timed in batches of 10 loops, that timing was repeated 3 times, and the fastest per-loop result across the 3 repeats is reported.
Creating and accessing (indexing) arrays
We can create arrays from scratch:
End of explanation
test_array[0,3]
Explanation: Index arrays using square brackets, starting from zero and specifying row, column:
End of explanation
test_array.dtype
Explanation: Arrays are duck typed just like Python variables, that is to say that Python will try to determine what kind of variable it should be based on how it's used.
Numpy arrays are all the same type of variable. To check the data type (dtype) enter:
End of explanation
test_array = test_array.astype('float64')
print(test_array)
# We can create arrays of boolean values too:
bool_array = np.array([[True, True, False,True],[False,False,True,False]])
print(bool_array)
Explanation: Different variable types use different amounts of memory and can have an effect on performance for very large arrays.
Changing the type of array is possible via:
End of explanation
test_array[0,3]=99 # Assign value directly
print(test_array)
Explanation: We can replace values in an array:
End of explanation
test_array[0,3] = 'nan'
print(test_array)
Explanation: Deleting values from an array is possible, but due to the way they're stored in memory, it makes sense to keep the array structure. Often, a 'nan' is used (not a number) or some nonsensical value is used, i.e.: 0 or -1.
Keep in mind that 'nan' only works for some types of arrays:
End of explanation
test_array[:,1] # Use the ':' to index along one dimension fully
test_array[1,1:] # Adding a colon indexes the rest of the values
# (includes the numbered index)
test_array[1,1:-1] # We can index relative to the first and last elements
test_array[1,::2] # We can specify the indexing order
test_array[1,1::-1] # We can get pretty fancy about it
# Index second row, second from first to second from
# last in reverse order.
Explanation: Fancy ways of indexing
Slicing Arrays:
Slicing arrays refers to indexing >1 elements in a previous array. Slicing is often used when parallelizing computations using arrays. Indexing is array[row, column].
End of explanation
test_array[bool_array] # Use our bool_array from earlier
Explanation: Logical Indexing
We can specify only the elements we want by using an array of True/False values:
End of explanation
nans = np.isnan(test_array)
print(nans)
test_array[nans] = 4
print(test_array)
Explanation: Using the isnan function in numpy:
End of explanation
test_array_Vstacked = np.vstack((test_array, [1,2,3,4]))
print(test_array_Vstacked)
test_array_Hstacked = np.hstack((test_array, test_array))
print(test_array_Hstacked)
Explanation: Building arrays from other arrays (appending)
We can build arrays from other arrays by stacking them horizontally or vertically:
End of explanation
test_array_Hstacked.flatten()
Explanation: We can bring these dimensions back down to one via flatten:
End of explanation
test_array
print("The broadcasted array is: ", test_array[0,:])
test_array[0,:] * test_array
Explanation: Caution: appending to numpy arrays frequently is memory intensive. Every time this happens, an entirely new chunk of memory needs to be used, so the old array is moved in memory to a new location.
It's faster to 'preallocate' an array with empty values, and simply populate as the computation progresses.
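A minimal sketch of the preallocation pattern (the sizes and fill logic here are illustrative):
n = 10000
out = np.empty((n, 4))           # allocate once
for i in range(n):
    out[i] = np.arange(4) * i    # fill rows in place; no reallocation or copying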
Operations on arrays of different sizes (broadcasting)
Python automatically handles arithmetic operations with arrays of different dimensions. In other words, when arrays have different (but compatible) shapes, the smaller is 'broadcast' across the larger.
End of explanation
print("The broadcasted array is: ", test_array[:,0])
#test_array[:,0] * test_array # Uncomment the line to see that the
# dimensions don't match
# Make use of the matrix transpose (also can use array.T)
np.transpose( test_array[:,0]*np.transpose(test_array) )
Explanation: However, if the dimensions don't match, it won't work:
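An alternative to the double transpose above is to give the broadcast array an explicit trailing axis so the shapes line up directly (a sketch):
test_array * test_array[:, 0][:, np.newaxis]   # (2, 4) * (2, 1) broadcasts to (2, 4)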
End of explanation
print("The original array is: ", test_array)
print("The transposed array is: ", np.transpose(test_array) )
# Alternatively, using test_array as an object:
print("The transposed array is: ", test_array.transpose() )
Explanation: Arrays as Python objects
Python can be used as an object oriented language, and numpy arrays have lots of properties. There are many functions we can use as numpy.<function>(<array>) and array.<function>
For example, the transpose above:
End of explanation
print("The original array dimensions are: ", test_array.shape)
print("The array transpose dimensions are: ", test_array.transpose().shape)
Explanation: One of the most frequently used properties of arrays is the dimension:
End of explanation
test_array2 = np.array([1,5,4,0,1])
print("The original array is: ", test_array2)
test_array3 = test_array2.sort() # Run the sort - note that the new variable isn't assigned
print("The reassigned array should be sorted: ", test_array3)
print("test_array2 after sort: ", test_array2)
Explanation: Sorting:
Sorting arrays happens in-place, so once the function is called on an array, the sorting happens to the original array:
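Note the related distinction: the ndarray.sort() method sorts in place and returns None, while np.sort() leaves the original untouched and returns a sorted copy (a short sketch):
a = np.array([3, 1, 2])
b = np.sort(a)    # b is a sorted copy; a is unchanged
a.sort()          # sorts a in place and returns None
print(a, b)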
End of explanation |
11,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling Protein-Ligand Interactions with Atomic Convolutions
By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter
This DeepChem tutorial introduces the Atomic Convolutional Neural Network. We'll see the structure of the AtomicConvModel and write a simple program to run Atomic Convolutions.
ACNN Architecture
ACNNs directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.
The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture
Step1: Getting protein-ligand data
If you worked through Tutorial 13 on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the previous tutorial, this time we'll simply initialize an AtomicConvFeaturizer and load the PDBbind dataset directly using MolNet.
Step2: load_pdbbind allows us to specify if we want to use the entire protein or only the binding pocket (pocket=True) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures.
Step3: Unfortunately, if you try to use the "refined" dataset, there are some complexes that cannot be featurized. To resolve this issue, rather than increasing complex_num_atoms, simply omit the lines of the dataset that have an x value of None
Step4: Training the model
Now that we've got our dataset, let's go ahead and initialize an AtomicConvModel to train. Keep the input parameters the same as those used in AtomicConvFeaturizer, or else we'll get errors. layer_sizes controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the original paper.
Step5: The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources.
Step6: The ACNN paper showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an AtomicConvModel with only a few lines of code and start predicting binding affinities!
From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model. | Python Code:
!pip install -q condacolab
import condacolab
condacolab.install()
!/usr/local/bin/conda info -e
!/usr/local/bin/conda install -c conda-forge pycosat mdtraj pdbfixer openmm -y -q # needed for AtomicConvs
!pip install --pre deepchem
import deepchem
deepchem.__version__
import deepchem as dc
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from rdkit import Chem
from deepchem.molnet import load_pdbbind
from deepchem.models import AtomicConvModel
from deepchem.feat import AtomicConvFeaturizer
Explanation: Modeling Protein-Ligand Interactions with Atomic Convolutions
By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter
This DeepChem tutorial introduces the Atomic Convolutional Neural Network. We'll see the structure of the AtomicConvModel and write a simple program to run Atomic Convolutions.
ACNN Architecture
ACNNs directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.
The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture:
Distance Matrix
The distance matrix $R$ is constructed from the Cartesian atomic coordinates $X$. It calculates distances from the distance tensor $D$. The distance matrix construction accepts as input a $(N, 3)$ coordinate matrix $C$. This matrix is “neighbor listed” into a $(N, M)$ matrix $R$.
python
R = tf.reduce_sum(tf.multiply(D, D), 3) # D: Distance Tensor
R = tf.sqrt(R) # R: Distance Matrix
return R
Atom type convolution
The output of the atom type convolution is constructed from the distance matrix $R$ and atomic number matrix $Z$. The matrix $R$ is fed into a (1x1) filter with stride 1 and depth of $N_{at}$ , where $N_{at}$ is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on the neighbor distance matrix $R$.
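Concretely, the kernel can be pictured as selecting, per atom type, the neighbor distances whose atomic number matches that type. The snippet below is a rough NumPy illustration only; the shapes and names are chosen for clarity and are not taken from the DeepChem implementation.
# R: (N, M) neighbor distances, Z: (N, M) atomic numbers of those neighbors,
# atom_types: the N_at unique atomic numbers present in the system.
E = np.stack([np.where(Z == a, R, 0.0) for a in atom_types], axis=-1)   # shape (N, M, N_at)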
Radial Pooling layer
Radial Pooling is basically a dimensionality reduction process that down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.
Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1x$M$x1) with stride 1 and a depth of $N_r$, where $N_r$ is the number of desired radial filters and $M$ is the maximum number of neighbors.
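Each radial filter applied during pooling is typically of the form below, written here in the general Behler-Parrinello-style parameterization this construction builds on (the exact symbols used in the ACNN paper may differ slightly):
$$f_s(r_{ij}) = e^{-\eta_s (r_{ij}-r_s)^2}\, f_c(r_{ij}), \qquad f_c(r_{ij}) = \tfrac{1}{2}\left(\cos\tfrac{\pi r_{ij}}{R_c}+1\right) \text{ for } r_{ij}\le R_c, \;\; 0 \text{ otherwise},$$
where $r_s$ and $\eta_s$ are the filter's center and width parameters and $R_c$ is the radial cutoff.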
Atomistic fully connected network
Atomic Convolution layers are stacked by feeding the flattened ($N$, $N_{at} \cdot N_r$) output of the radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. The same fully connected weights and biases are used for each atom in a given molecule.
Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what we expect as the output.
For the training, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex and the target is the binding affinity ($K_i$) of the ligand to the protein in the complex.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
f1_num_atoms = 100 # maximum number of atoms to consider in the ligand
f2_num_atoms = 1000 # maximum number of atoms to consider in the protein
max_num_neighbors = 12 # maximum number of spatial neighbors for an atom
acf = AtomicConvFeaturizer(frag1_num_atoms=f1_num_atoms,
frag2_num_atoms=f2_num_atoms,
complex_num_atoms=f1_num_atoms+f2_num_atoms,
max_num_neighbors=max_num_neighbors,
neighbor_cutoff=4)
Explanation: Getting protein-ligand data
If you worked through Tutorial 13 on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the previous tutorial, this time we'll simply initialize an AtomicConvFeaturizer and load the PDBbind dataset directly using MolNet.
End of explanation
%%time
tasks, datasets, transformers = load_pdbbind(featurizer=acf,
save_dir='.',
data_dir='.',
pocket=True,
reload=False,
set_name='core')
Explanation: load_pdbbind allows us to specify if we want to use the entire protein or only the binding pocket (pocket=True) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures.
End of explanation
class MyTransformer(dc.trans.Transformer):
def transform_array(x, y, w, ids):
kept_rows = x != None
return x[kept_rows], y[kept_rows], w[kept_rows], ids[kept_rows],
datasets = [d.transform(MyTransformer) for d in datasets]
datasets
train, val, test = datasets
Explanation: Unfortunately, if you try to use the "refined" dataset, there are some complexes that cannot be featurized. To resolve this issue, rather than increasing complex_num_atoms, simply omit the lines of the dataset that have an x value of None
End of explanation
acm = AtomicConvModel(n_tasks=1,
frag1_num_atoms=f1_num_atoms,
frag2_num_atoms=f2_num_atoms,
complex_num_atoms=f1_num_atoms+f2_num_atoms,
max_num_neighbors=max_num_neighbors,
batch_size=12,
layer_sizes=[32, 32, 16],
learning_rate=0.003,
)
losses, val_losses = [], []
%%time
max_epochs = 50
metric = dc.metrics.Metric(dc.metrics.score_function.rms_score)
step_cutoff = len(train)//12
def val_cb(model, step):
if step%step_cutoff!=0:
return
val_losses.append(model.evaluate(val, metrics=[metric])['rms_score']**2) # L2 Loss
losses.append(model.evaluate(train, metrics=[metric])['rms_score']**2) # L2 Loss
acm.fit(train, nb_epoch=max_epochs, max_checkpoints_to_keep=1,
callbacks=[val_cb])
Explanation: Training the model
Now that we've got our dataset, let's go ahead and initialize an AtomicConvModel to train. Keep the input parameters the same as those used in AtomicConvFeaturizer, or else we'll get errors. layer_sizes controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the original paper.
End of explanation
f, ax = plt.subplots()
ax.scatter(range(len(losses)), losses, label='train loss')
ax.scatter(range(len(val_losses)), val_losses, label='val loss')
plt.legend(loc='upper right');
Explanation: The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources.
End of explanation
score = dc.metrics.Metric(dc.metrics.score_function.pearson_r2_score)
for tvt, ds in zip(['train', 'val', 'test'], datasets):
print(tvt, acm.evaluate(ds, metrics=[score]))
Explanation: The ACNN paper showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an AtomicConvModel with only a few lines of code and start predicting binding affinities!
From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model.
End of explanation |
11,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutions and sliding windows
Plots inline
Step1: Imports
Step2: Some utility functions for making an image montage for display and padding images
Step3: Load a photo of some fruit
Step4: Scale down by a factor of 8
Step5: Construct a single Gabor filter kernel with a wavelength of 8 and an angle of 60 degrees and select the imaginary component.
Step6: Extract 128x128 windows from the image, with a spacing of 32 pixels, convolve with the Gabor kernel constructed above and make a montage of the result | Python Code:
%matplotlib inline
Explanation: Convolutions and sliding windows
Plots inline:
End of explanation
import os
import numpy as np
from matplotlib import pyplot as plt
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from skimage.color import rgb2grey
from skimage.util.montage import montage2d
from skimage.util import view_as_windows
from skimage.transform import downscale_local_mean
Explanation: Imports:
End of explanation
def image_montage(im_3d, padding=1, cval=None, grid_shape=None):
if cval is None:
return montage2d(np.pad(im_3d, [(0,0), (padding, padding), (padding, padding)], mode='constant'),
grid_shape=grid_shape)
else:
return montage2d(np.pad(im_3d, [(0,0), (padding, padding), (padding, padding)], mode='constant',
constant_values=[(0,0), (cval,cval), (cval,cval)]), grid_shape=grid_shape)
def pad_image(img, shape):
d0 = shape[0]-img.shape[0]
d1 = shape[1]-img.shape[1]
p0a = d0/2
p0b = d0-p0a
p1a = d1/2
p1b = d1-p1a
return np.pad(img, [(p0a, p0b), (p1a, p1b)], mode='constant')
Explanation: Some utility functions for making an image montage for display and padding images:
End of explanation
IMAGE_PATH = os.path.join('images', 'fruit.JPG')
# Extract a square block
img = rgb2grey(plt.imread(IMAGE_PATH)[:1536,:1536])
print img.shape
plt.imshow(img, cmap='gray')
plt.show()
Explanation: Load a photo of some fruit:
End of explanation
img_small = downscale_local_mean(img, (8,8))
plt.imshow(img_small, cmap='gray')
plt.show()
Explanation: Scale down by a factor of 8:
End of explanation
WAVELENGTH = 8.0
THETA = np.pi / 3.0
k_complex = gabor_kernel(1.0/WAVELENGTH, THETA, 1.2)
k_imag = np.imag(k_complex)
plt.imshow(k_imag, cmap='gray', interpolation='none')
plt.imsave('images/single_kernel.png', k_imag, cmap='gray')
Explanation: Construct a single Gabor filter kernel with a wavelength of 8 and an angle of 60 degrees and select the imaginary component.
End of explanation
windows = view_as_windows(img_small, (128,128), (32,32))
grid_shape = windows.shape[:2]
windows = windows.reshape((-1, 128,128))
window_feats = [-convolve(1-windows[i], k_imag) for i in range(windows.shape[0])]
feats_3d = np.concatenate([c[None,:,:] for c in window_feats], axis=0)
feats_montage = image_montage(feats_3d, padding=10, grid_shape=grid_shape)
plt.imshow(feats_montage, cmap='gray', interpolation='none')
plt.imsave('images/fruit_window_montage.png', feats_montage, cmap='gray')
Explanation: Extract 128x128 windows from the image, with a spacing of 32 pixels, convolve with the Gabor kernel constructed above and make a montage of the result:
End of explanation |
11,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a chord progression with a genetic algorithm
This work is the result of an experiment done some months ago. I used a simple genetic algorithm to find a solution to a classic exercise of harmony
Step1: Will be rempresented as
Step2: The file lily_template has the template to create the lilypond file and to give it format I used the string.Template class.
Then I set some global variables that I'm going to use
Step3: Initialization
In the initialization part, you have to create the initial population with all the individuals. In this example an individual will be a chord progression, with all the chords beeing of four notes each. I represented each individual as a list of chords, and each chord a list of notes, for example
Step5: This individual would be represented as
[[48, 64, 67, 79], [53, 59, 65, 74], [55, 59, 62, 74], [48, 55, 67, 76]]
To do anything I need the tonality of the exercise, so I defined a function to find it given the bass voice and the key signature, it just looks at the key signature and it looks if the voice has any sight of being a minor key
Step7: The function returns a tuple with the tonality represented as a number from 0 to 11 (C to B) and a 'M' if the tonality is major or 'm' if it's minor.
Then I wrote a function to create a single chord given the name of the chord and the fundamental note. The interesting part of this is that a chord of four notes can have multiple permutations, different set of notes and it can have a close or open harmony.
For example all these chords are D major and have the same bass
Step9: After this function, I only need to select a chord for each of the notes in the bass. This process use the midi representation of the notes and making arithmetic operation with them and with a random choice between the possible options. At the end we have a complete chord progression.
Step11: Now, DEAP requires a generator to create the individuals, so I just create a simple generator that yields each chord of the progression.
Step13: Selection
From Wikipedia
Step15: Also between two chords, you have to avoid the consecutive fifths and octaves, and a normal person tends to make the intervals in a voice more "natural" making jumps not bigger than a fifth that often.
Step17: And for the evaluation of an individual I used this generator to access an element and the two neighbors
Step19: The actual evaluation function
Step21: Crossover
In the crossover section I used a simple One point crossover provided by DEAP in its set of tools.
Mutation
In the mutation section I just create a new chord for each one of the lasts chords that randomly pass a threshold.
Step23: The next function is just to create a lilypond file using the chords in the individual to see a nice sheet.
Step24: In the main function is where the actual algorithm's running, it's a simple evolutionary algorithm with a hall of fame where the best individuals will be saved. The little while loop is just if you want to run multiple times the algorithm until an individual gets an evaluation lower than, in this case, 15.
Step25: And at the end set up all the functions and the form of the individual in the toolbox that way DEAP can use them in the algorithm.
The program verbose each of the generations showing the number of individuals evaluated, the average evaluation value, the standard deviation, the minimum and the maximum. At the end it shows the three best individuals in all the evolution process, and it creates a lilypond file with the best of all.
Step26: And just to show the result I made a little script to trim the pdf that lilypond generate. | Python Code:
from IPython import display
display.Image('img/simple.jpg', width=400)
Explanation: Creating a chord progression with a genetic algorithm
This work is the result of an experiment done some months ago. I used a simple genetic algorithm to find a solution to a classic exercise of harmony: given a certain voice (normally the bass) create the other three voices to make a chord progression. I know that the aproach to solve a progression with a genetic algorithm may not be the best I just wanted to play with this algorithms making somthing fun, the code isn't perfect and the algorithm can be improved adding more options in the chord selection, but for simplicity I didn't use seventh chords.
Working with music
The first part of this challenge is to find a way to easily represent the notes in the melody, luckily for us many years ago MIDI was created, so I used the numbers in MIDI to match every single note being, for example, 60 the central C. And at the beginning of the sequence of notes I set the key signature with the number of sharps or flats.
So an example like this:
End of explanation
import random
import math
import numpy
from string import Template
from deap import base
from deap import creator
from deap import tools
from deap import algorithms
from lily_template import TEMPLATE
Explanation: Will be rempresented as: 0# 57 59 60 62
Lilypond
Going from music notation to this MIDI number representation with just one voice is quite easy (although I might work a hack for that also), but once I generate all the notes that form the chords the process ends up being a pain in the ass. Therefore, at the end I take the output and pass it through a little script that transform the numbers to Lilypond, generate a .ly file and an .jpg of the sheet.
The genetic algorithm
A genetic algorithm works pretty much like a species evolution in real life. You take a group of individuals, called population, in this population there are a few individuals that have the best attributes, those are the ones that will survive and carry on the genes to the next generation. This process continues generation over generation until there is an individual with the perfect attributes, or at least with the best so far. You can read more in Wikipedia.
The process can be structured in this steps:
- Initialization
- Selection
- Crossover
- Mutation
- Termination
Let's tackle them one by one. But first let me explain the framework that I used.
Working enviroment
I used a library called DEAP, Distributed Evolutionary Algorithms in Python, it is a novel evolutionary computation framework for rapid prototyping and testing of ideas. It has all the basic tools to work with genetic algorithms you only have to create the functions to create, select, mate and mutate the individuals.
End of explanation
# Global Variables
OPTIONS_M = ((0,-3,5), (0,-3,5), (0,-4,5), (0,-3,6), (0,-3,5), (0,-4,5), (0,-4,5))
OPTIONS_m = ((0,-4,5), (0,-4,5), (0,-3,5), (0,-3,5), (0,-4,5), (0,-3,6), (0,5))
MOD_M = ('M','m','m','M','M','m','d')
MOD_m = ('m','d','M','m','M','M','M')
Explanation: The file lily_template has the template to create the lilypond file and to give it format I used the string.Template class.
Then I set some global variables that I'm going to use:
OPTIONS_* are the different roles that a note can have in a chord: The first note in the scale of the tonality can be the first note in the tonic chord, the third degree in the 6° chord or the fifth degree in the subdominant. So I represented this options as differences between the fundamental note of the possible chords and the note degree in the scale.
This diferences change a little bit in a minor tonality.
MOD_* are just the grades of the chords in a major and a minor tonality.
End of explanation
display.Image('img/ex_prog.jpg', width=400)
Explanation: Initialization
In the initialization part, you have to create the initial population with all the individuals. In this example an individual will be a chord progression, with all the chords beeing of four notes each. I represented each individual as a list of chords, and each chord a list of notes, for example:
End of explanation
def setTon(line):
Return the tonality of the exercise and the bass notes of it
ton = line[:2]
notes = list(map(int, line[3:].split(' ')))
if ton[1] == '#':
ton = (int(ton[0])*7)%12
else:
ton = (int(ton[0])*5)%12
for note in notes:
if (ton+6)%12 == note%12:
ton = str((ton-3)%12)+'m'
break
else:
if ton-3 == notes[-1]%12:
ton = str((ton-3)%12)+'m'
else:
ton = str(ton)+'M'
return ton, notes
Explanation: This individual would be represented as
[[48, 64, 67, 79], [53, 59, 65, 74], [55, 59, 62, 74], [48, 55, 67, 76]]
To do anything I need the tonality of the exercise, so I defined a function to find it given the bass voice and the key signature, it just looks at the key signature and it looks if the voice has any sight of being a minor key:
End of explanation
display.Image('img/all_same.jpg', width=400)
def creatChord(nameC, noteF):
Create one chord given the name of the chord and the fundamental note
num_funda = int(nameC[:-1])
if nameC[-1] == 'M':
val_notes = [num_funda, (num_funda+4)%12, (num_funda+7)%12]
elif nameC[-1] == 'm':
val_notes = [num_funda, (num_funda+3)%12, (num_funda+7)%12]
elif nameC[-1] == 'd':
val_notes = [num_funda, (num_funda+3)%12, (num_funda+6)%12]
# Tessitura of each voice
tenorR = list(range(48, 69))
contR = list(range(52, 77))
sopR = list(range(60, 86))
# Depending in the bass note this are the options for the others voices
if noteF%12 == val_notes[0]:
opc = [[1,1,1], [2,1,0], [0,1,2]]
elif noteF%12 == val_notes[1]:
opc = [[1,0,2], [3,0,0], [2,0,1]]
elif noteF%12 == val_notes[2]:
opc = [[1,1,1], [2,1,0]]
opc = random.choice(opc)
chordN = list()
for num, val in zip(opc, val_notes):
chordN += [val]*num
random.shuffle(chordN)
chord = [noteF,]
for nte, voce in zip(chordN, [tenorR, contR, sopR]):
posible_n = [x for x in voce if x%12 == nte]
chord.append(random.choice(posible_n))
return chord
Explanation: The function returns a tuple with the tonality represented as a number from 0 to 11 (C to B) and a 'M' if the tonality is major or 'm' if it's minor.
Then I wrote a function to create a single chord given the name of the chord and the fundamental note. The interesting part of this is that a chord of four notes can have multiple permutations, different set of notes and it can have a close or open harmony.
For example all these chords are D major and have the same bass:
End of explanation
def selChord(ton, notesBass):
Select the chords from all the posibilities
listaOp = OPTIONS_M if ton[-1] == 'M' else OPTIONS_m
listaMod = MOD_M if ton[-1] == 'M' else MOD_m
prog = list()
for note in notesBass:
name = note%12
grad = name-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
num = (random.choice(listaOp[grad]) + name +12) % 12
grad = num-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
name = '{}{}'.format(num, listaMod[grad])
prog.append([creatChord(name, note), grad])
return prog
Explanation: After this function, I only need to select a chord for each of the notes in the bass. This process use the midi representation of the notes and making arithmetic operation with them and with a random choice between the possible options. At the end we have a complete chord progression.
End of explanation
def newChordProg(ton, notes):
Create a new individual given the tonality and the base notes
chords = selChord(ton, notes)
for c in chords:
yield c
Explanation: Now, DEAP requires a generator to create the individuals, so I just create a simple generator that yields each chord of the progression.
End of explanation
def check_interval(chord):
Return the number of mistakes in the distance between the notes.
res = 0
if chord[2] - chord[1] > 12 or chord[2]-chord[1] < 0:
res += 15
if chord[3] - chord[2] > 12 or chord[3]-chord[2] < 0:
res += 15
if chord[1] == chord[2] or chord[2] == chord[3]:
res += 1.4
return res
Explanation: Selection
From Wikipedia: "During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected."
So, I created a fitness function based on the classical harmony, it evaluate the progression to find "errors" like a distance between notes greater than an octave, or try to avoid two voices singing the same note:
End of explanation
def check_2_chords(ch1, ch2):
Return the number of mistakes in the intervals between 2 chords.
res = 0
# Check for 5° and 8°
ite1 = map(lambda x,y: y-x, ch1[:-1], ch1[1:])
ite2 = map(lambda x,y: y-x, ch2[:-1], ch2[1:])
for inter1, inter2 in zip(ite1, ite2):
if inter1 == 7 and inter2 == 7:
res += 15
elif inter1 == 0 and inter2 == 0:
res += 15
elif inter1 == 12 and inter2 == 12:
res += 15
# Check for big intervals, just to make it more "human"
for note1, note2 in zip(ch1[1:], ch2[1:]):
if abs(note1-note2) >= 7: # 7 equals 5° interval
res += .7
return res
Explanation: Also between two chords, you have to avoid the consecutive fifths and octaves, and a normal person tends to make the intervals in a voice more "natural" making jumps not bigger than a fifth that often.
End of explanation
def neighborhood(iterable):
Generator gives the prev actual and next.
iterator = iter(iterable)
prev = None
item = next(iterator) # throws StopIteration if empty.
for nex in iterator:
yield (prev,item,nex)
prev = item
item = nex
yield (prev,item,None)
Explanation: And for the evaluation of an individual I used this generator to access an element and the two neighbors:
End of explanation
def evalNumErr(ton, individual):
Evaluation function.
res = 0
for prev, item, nex in neighborhood(individual):
res += check_interval(item[0])
if prev == None:
if item[1] != 0:
res += 6
continue
else:
if prev[1] in [4, 6] and item[1] in [3, 1]:
res += 20
res += check_2_chords(prev[0], item[0])
if nex == None:
if item[1] in [1, 2, 3, 4, 5, 6]:
res += 6
return (res,)
Explanation: The actual evaluation function:
End of explanation
def mutChangeNotes(ton, individual, indpb):
Mutant function.
new_ind = toolbox.clone(individual)
for x in range(len(individual[0])):
if random.random() < indpb:
listaOp = OPTIONS_M if ton[-1] == 'M' else OPTIONS_m
listaMod = MOD_M if ton[-1] == 'M' else MOD_m
note = individual[x][0][0]
name = note%12
grad = name-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
num = (random.choice(listaOp[grad]) + name +12) % 12
grad = num-int(ton[:-1])
grad = math.ceil(((grad+12)%12) / 2)
name = '{}{}'.format(num, listaMod[grad])
new_ind[x] = [creatChord(name, note), grad]
del new_ind.fitness.values
return new_ind,
Explanation: Crossover
In the crossover section I used a simple One point crossover provided by DEAP in its set of tools.
Mutation
In the mutation section I just create a new chord for each one of the lasts chords that randomly pass a threshold.
End of explanation
def transform_lilypond(ton, indiv, make_file=False):
Take one list of chords and print the it in lilypond notation.
note_map = dict()
if ton[-1] == 'M':
note_map = {0: 'c',
1: 'cis',
2: 'd',
3: 'dis',
4: 'e',
5: 'f',
6: 'fis',
7: 'g',
8: 'gis',
9: 'a',
10:'ais',
11:'b'
}
gra = 'major'
else:
note_map = {0: 'c',
1: 'des',
2: 'd',
3: 'ees',
4: 'e',
5: 'f',
6: 'ges',
7: 'g',
8: 'aes',
9: 'a',
10:'bes',
11:'b'
}
gra = 'minor'
voces = [[], [], [], []]
for chord in indiv:
for note, voce in zip(chord, voces):
octave = (note // 12)-4
name_lily = note_map[note % 12]
if octave < 0:
name_lily += ',' * (octave * -1)
elif octave > 0:
name_lily += "'" * octave
voce.append(name_lily)
if make_file:
with open('lily/'+ton+'.ly', 'w') as f:
key_map = {'0': 'c',
'1': 'des',
'2': 'd',
'3': 'ees',
'4': 'e',
'5': 'f',
'6': 'ges',
'7': 'g',
'8': 'aes',
'9': 'a',
'10':'bes',
'11':'b'
}
print(ton)
f.write(Template(TEMPLATE).substitute(key=key_map[ton[:-1]], grade=gra, notes='{}|\n{}|\n{}|\n{}|\n'.format(*(' '.join(voce) for voce in reversed(voces)))))
print('{}|\n{}|\n{}|\n{}|\n'.format(*(' '.join(voce) for voce in reversed(voces))))
Explanation: The next function is just to create a lilypond file using the chords in the individual to see a nice sheet.
End of explanation
def main(ton):
pop = toolbox.population(n=400)
hof = tools.HallOfFame(3)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register('avg', numpy.mean)
stats.register('std', numpy.std)
stats.register('min', numpy.min)
stats.register('max', numpy.max)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.3, ngen=70, stats=stats, halloffame=hof, verbose=True)
while min(log.select('min')) > 15:
pop = toolbox.population(n=400)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.3, ngen=70, stats=stats, halloffame=hof, verbose=True)
for best in hof:
print([x[0] for x in best], end='\n============\n')
transform_lilypond(ton, [x[0] for x in hof[0]], make_file=True)
Explanation: In the main function is where the actual algorithm's running, it's a simple evolutionary algorithm with a hall of fame where the best individuals will be saved. The little while loop is just if you want to run multiple times the algorithm until an individual gets an evaluation lower than, in this case, 15.
End of explanation
if __name__ == '__main__':
line = input('n[#b] notas ')
ton, notes = setTon(line)
print(ton, notes)
# ========================= GA setup =========================
creator.create('FitnessMin', base.Fitness, weights=(-1.0,))
creator.create('Individual', list, fitness=creator.FitnessMin)
toolbox = base.Toolbox()
toolbox.register('creat_notes', newChordProg, ton, notes)
toolbox.register('individual', tools.initIterate, creator.Individual,
toolbox.creat_notes)
toolbox.register('population', tools.initRepeat, list, toolbox.individual)
toolbox.register('evaluate', evalNumErr, ton)
toolbox.register('mate', tools.cxOnePoint)
toolbox.register('mutate', mutChangeNotes, ton, indpb=0.4)
toolbox.register('select', tools.selTournament, tournsize=3)
# =============================================================
main(ton)
Explanation: And at the end set up all the functions and the form of the individual in the toolbox that way DEAP can use them in the algorithm.
The program verbose each of the generations showing the number of individuals evaluated, the average evaluation value, the standard deviation, the minimum and the maximum. At the end it shows the three best individuals in all the evolution process, and it creates a lilypond file with the best of all.
End of explanation
import os
os.system('python auto_trim.py {} {}'.format('lily/'+ton+'.ly', 'temp.jpg'))
display.Image('img/temp.jpg', width=600)
Explanation: And just to show the result I made a little script to trim the pdf that lilypond generate.
End of explanation |
11,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
Step1: What was the most popular type of complaint, and how many times was it filed?
Step2: Make a horizontal bar graph of the top 5 most frequent complaint types.
Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
Step4: According to your selection of data, how many cases were filed in March? How about May?
Step5: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
Step6: What was the most popular type of complaint on April 1st?
What were the most popular three types of complaint on April 1st
Step7: What month has the most reports filed? How many? Graph it.
Step8: What week of the year has the most reports filed? How many? Graph the weekly complaints.
Step9: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
Step10: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
Step11: What hour of the day are the most complaints? Graph a day of complaints.
Step12: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
Step13: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
Step14: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
Step16: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
Step17: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer. | Python Code:
#df = pd.read_csv("small-311-2015.csv")
df = pd.read_csv("311-2014.csv", nrows=200000)
df.head(2)
df.info()
def parse_date (str_date):
return dateutil.parser.parse(str_date)
df['created_dt']= df['Created Date'].apply(parse_date)
df.head(3)
df.info()
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
df["Complaint Type"].value_counts().head(1)
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
df["Complaint Type"].value_counts().head(5).sort_values().plot(kind='barh')
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
df["Borough"].value_counts()
people_bronx= 1438159
people_queens= 2321580
people_manhattan=1636268
people_brooklyn= 2621793
people_staten_island= 473279
complaints_per_capita_bronx= 29610/people_bronx
complaints_per_capita_bronx
complaints_per_capita_queens=46824/people_queens
complaints_per_capita_queens
complaints_per_capita_manhattan=42050/people_manhattan
complaints_per_capita_manhattan
complaints_per_capita_staten_island=473279/people_staten_island
complaints_per_capita_staten_island
complaints_per_capita_brooklyn=2621793/people_brooklyn
complaints_per_capita_brooklyn
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
df.index = df['created_dt']
#del df['Created Date']
df.head()
print("There were", len(df['2015-03']), "cases filed in March")
print("There were", len(df['2015-05']), "cases filed in May")
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
df['2015-04-01']
Explanation: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
End of explanation
df['2015-04-01']['Complaint Type'].value_counts().head(3)
df.info()
Explanation: What was the most popular type of complaint on April 1st?
What were the most popular three types of complaint on April 1st
End of explanation
df.resample('M').count()
df.resample('M').index[0]
import numpy as np
np.__version__
df.resample('M').count().plot(y="Unique Key")
ax= df.groupby(df.index.month).count().plot(y='Unique Key', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Number of Complaints")
ax.set_title("311 complains in 2015")
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
#df.resample('W').count().head(5)
df.resample('W').count().plot(y="Unique Key", color= "purple")
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
df[df['Complaint Type'].str.contains("Noise")].head()
noise_df= df[df['Complaint Type'].str.contains("Noise")]
noise_graph= noise_df.groupby(noise_df.index.month).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
noise_graph.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
noise_df.groupby(by=noise_df.index.hour)['Unique Key'].count().plot()
noise_graph= noise_df.groupby(noise_df.index.dayofweek).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([1,2,3,4,5,6,7])
noise_graph.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thur', 'Fri', 'Sat', 'Sun'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
daily_count= df['Unique Key'].resample('D').count().sort_values(ascending=False)
top_5_days= daily_count.head(5)
top_5_days
ax = top_5_days.plot(kind='bar') # I dont know how to put names to the labels
ax.set_title("Top 5 days")
ax.set_xlabel("Day")
ax.set_ylabel("Complaints")
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
hour_graph= df.groupby(df.index.hour).count().plot(y='Unique Key', legend=False)
hour_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
hour_graph.set_title("A day of complaints")
hour_graph.set_xlabel("Hours")
hour_graph.set_ylabel("Complaints")
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
twelve_am_complaints= df[df.index.hour <1]
twelve_am_complaints.head()
twelve_am_complaints['Complaint Type'].value_counts().head(5)
one_am_complaints= df[df.index.hour == 1]
one_am_complaints['Complaint Type'].value_counts().head(5)
eleven_pm_complaints= df[df.index.hour == 23]
eleven_pm_complaints['Complaint Type'].value_counts().head(5)
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
twelve_am_complaints.groupby(twelve_am_complaints.index.minute).count()
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
df['Agency'].value_counts().head(5)
df_NYPD = df[df['Agency'] == 'NYPD']
df_HPD = df[df['Agency'] == 'HPD']
df_DOT = df[df['Agency'] == 'DOT']
df_DPR= df[df['Agency'] == 'DPR']
df_DOHMH= df[df['Agency'] == 'DOHMH']
all_graph = df_NYPD.groupby(by= df_NYPD.index.hour).count().plot(y='Unique Key', label='NYPD complaints')
all_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
all_graph.set_title("A day of complaints by the top 5 agencies")
all_graph.set_xlabel("Hours")
all_graph.set_ylabel("Complaints")
df_HPD.groupby(by= df_HPD.index.hour).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
End of explanation
all_graph = df_NYPD.groupby(by= df_NYPD.index.weekofyear).count().plot(y='Unique Key', label='NYPD complaints')
#all_graph.set_xticks([1,50])
all_graph.set_title("A year of complaints by the top 5 agencies")
all_graph.set_xlabel("Weeks")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
df_HPD.groupby(by= df_HPD.index.week).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
plt.legend(bbox_to_anchor=(0, 1), loc='best', ncol=1)
print(May and June are the months with more complaints, followed by October, November and December.
In May the NYPD and HPD have an odd number of complaints)
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
August_July = df["2015-07":"2015-08"]
August_July_complaints = August_July['Complaint Type'].value_counts().head(5)
August_July_complaints
May = df['2015-05']
May_complaints= May['Complaint Type'].value_counts().head(5)
May_complaints
# August_July_vs_May= August_July_complaints.plot(y='Unique Key', label='August - July complaints')
# August_July_vs_May.set_ylabel("Number of Complaints")
# August_July_vs_May.set_title("August-July vs May Complaints")
# May['Complaint Type'].value_counts().head(5).plot(y='Unique Key', ax=August_July_vs_May, label='May complaints')
# August_July_vs_May.set_xticks([1,2,3,4,5])
# August_July_vs_May.set_xticklabels(['Illegal Parking', 'Blocked Driveway', 'Noise - Street/Sidewalk', 'Street Condition', 'Noise - Commercial'])
#Most popular complaints of the HPD
df_HPD['Complaint Type'].value_counts().head(5)
summer_complaints= df_HPD["2015-06":"2015-08"]['Complaint Type'].value_counts().head(5)
summer_complaints
winter_complaints= df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts().head(5)
winter_complaints
winter_complaints_dec= df_HPD["2015-12"]['Complaint Type'].value_counts().head(5)
winter_complaints_dec
winter_results= df_HPD["2015-12"]['Complaint Type'].value_counts() + df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts()
winter_results
Explanation: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
End of explanation |
11,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Configuring MNE python
This tutorial gives a short introduction to MNE configurations.
Step1: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found out by calling
Step2: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
Step3: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling
Step4: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling
Step5: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for
Step6: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of
Step7: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondence to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+ | Python Code:
import os.path as op
import mne
from mne.datasets.sample import data_path
fname = op.join(data_path(), 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname).crop(0, 10)
original_level = mne.get_config('MNE_LOGGING_LEVEL', 'INFO')
Explanation: Configuring MNE python
This tutorial gives a short introduction to MNE configurations.
End of explanation
print(mne.get_config_path())
Explanation: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found out by calling :func:mne.get_config_path.
End of explanation
print(mne.get_config())
Explanation: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', 'INFO')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling :func:mne.set_config.
Logging
Configurations also include the default logging level for the functions. This
field is called "MNE_LOGGING_LEVEL".
End of explanation
mne.set_log_level('WARNING')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling :func:mne.set_log_level function.
End of explanation
cov = mne.compute_raw_covariance(raw)
Explanation: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for :func:mne.compute_raw_covariance.
End of explanation
cov = mne.compute_raw_covariance(raw, verbose=True)
Explanation: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of :func:mne.compute_raw_covariance, you
notice the verbose keyword. Setting this parameter does not touch the
configurations, but sets the logging level for just this one function call.
Let's see what happens with logging level of INFO.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', original_level)
print('Config value restored to: %s' % mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondence to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+
End of explanation |
11,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Probabilitic Graphical Models
Step1: Contents
What is machine learning
Different ways of learning from data
Why probabilistic graphical models
Major types of PGMs
1. What is machine learning
Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that to make predictions or decisions, rather than following strictly static program instructions.
We can take an example of predicting the type of flower based on the sepal length and width of the flower. Let's say we have some data (discretized iris data set on sepal length and width). The dataset looks something like this
Step2: 2. Different ways of learning from data
Now let's say we want to predict the type of flower for a new given data point. There are multiple ways to solve this problem. We will consider these two ways in some detail
Step3: In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right.
Now let's try to train a Decision Tree on this data.
Step4: So, in this case we got a classification accuracy of 60 %.
Now moving on to our second approach using a probabilistic model.
The most obvious way to do this classification task would be to compute a Joint Probability Distribution over all these variables and then marginalize and reduce over these according to our new data point to get the probabilities of classes.
Step5: Why Probabilistic Graphical Models
In the previous example we saw how Bayesian Inference works. We construct a Joint Distribution over the data and then condition on the observed variable to compute the posterior distribution. And then we query on this posterior distribution to predict the values of new data points.
But the problem with this method is that the Joint Probability Distribution is exponential to the number of states (cardinality) of each variable. So, for problems having a lot of features or having high cardinality of features, inference becomes a difficult task because of computational limitations. For example, for 10 random variables each having 10 states, the size of the Joint Distribution would be 10^10.
Proababilistic Graphical Models (PGM)
Step6: In this case the parameters of the network would be $P(L)$, $P(W)$ and $P(T | L, W)$. So, we will need to store 5 values for $L$, 3 values for $W$ and 45 values for $P(T | L, W)$. So, a total of 45 + 5 + 3 = 53 values to completely parameterize the network which is actually more than 45 values which we need for $P (T, L, W)$. But in the cases of bigger networks graphical models help in saving space. We can take the example of the student network shown below | Python Code:
from IPython.display import Image
Explanation: Introduction to Probabilitic Graphical Models
End of explanation
%run ../scripts/1/discretize.py
data
Explanation: Contents
What is machine learning
Different ways of learning from data
Why probabilistic graphical models
Major types of PGMs
1. What is machine learning
Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that to make predictions or decisions, rather than following strictly static program instructions.
We can take an example of predicting the type of flower based on the sepal length and width of the flower. Let's say we have some data (discretized iris data set on sepal length and width). The dataset looks something like this:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Adding a little bit of noise so that it's easier to visualize
data_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2))
plt.scatter(data_with_noise.length, data_with_noise.width, c=[ "bgr"[k] for k in data.iloc[:,2] ], s=200, alpha=0.3)
Explanation: 2. Different ways of learning from data
Now let's say we want to predict the type of flower for a new given data point. There are multiple ways to solve this problem. We will consider these two ways in some detail:
We could find a function which can directly map an input value to it's class label.
We can find the probability distributions over the variables and then use this distribution to answer queries about the new data point.
There are a lot of algorithms for finding a mapping function. For example linear regression tries to find a linear equation which explains the data. Support vector machine tries to find a plane which separates the data points. Decision Tree tries to find a set of simple greater than and less than equations to classify the data. Let's try to apply Decision Tree on this data set.
We can plot the data and it looks something like this:
End of explanation
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data[['length', 'width']].values, data.type.values, test_size=0.2)
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, y_train)
classifier.predict(X_test)
classifier.score(X_test, y_test)
Explanation: In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right.
Now let's try to train a Decision Tree on this data.
End of explanation
X_train, X_test = data[:120], data[120:]
X_train
# Computing the joint probability distribution over the training data
joint_prob = X_train.groupby(['length', 'width', 'type']).size() / 120
joint_prob
# Predicting values
# Selecting just the feature variables.
X_test_features = X_test.iloc[:, :2].values
X_test_actual_results = X_test.iloc[:, 2].values
predicted_values = []
for i in X_test_features:
predicted_values.append(joint_prob[i[0], i[1]].idxmax())
predicted_values = np.array(predicted_values)
predicted_values
# Comparing results with the actual data.
predicted_values == X_test_actual_results
score = (predicted_values == X_test_actual_results).sum() / 30
print(score)
Explanation: So, in this case we got a classification accuracy of 60 %.
Now moving on to our second approach using a probabilistic model.
The most obvious way to do this classification task would be to compute a Joint Probability Distribution over all these variables and then marginalize and reduce over these according to our new data point to get the probabilities of classes.
End of explanation
Image(filename='../images/1/Iris_BN.png')
Explanation: Why Probabilistic Graphical Models
In the previous example we saw how Bayesian Inference works. We construct a Joint Distribution over the data and then condition on the observed variable to compute the posterior distribution. And then we query on this posterior distribution to predict the values of new data points.
But the problem with this method is that the Joint Probability Distribution is exponential to the number of states (cardinality) of each variable. So, for problems having a lot of features or having high cardinality of features, inference becomes a difficult task because of computational limitations. For example, for 10 random variables each having 10 states, the size of the Joint Distribution would be 10^10.
Proababilistic Graphical Models (PGM): PGM is a technique of compactly representing Joint Probability Distribution over random variables by exploiting the (conditional) independencies between the variables. PGM also provides us methods for efficiently doing inference over these joint distributions.
Each graphical model is characterized by a graph structure (can be directed, undirected or both) and a set of parameters associated with each graph.
The problem in the above example can be represented using a Bayesian Model (a type of graphical model) as:
End of explanation
Image(filename='../images/1/student.png')
Explanation: In this case the parameters of the network would be $P(L)$, $P(W)$ and $P(T | L, W)$. So, we will need to store 5 values for $L$, 3 values for $W$ and 45 values for $P(T | L, W)$. So, a total of 45 + 5 + 3 = 53 values to completely parameterize the network which is actually more than 45 values which we need for $P (T, L, W)$. But in the cases of bigger networks graphical models help in saving space. We can take the example of the student network shown below:
End of explanation |
11,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Training
Code for finding the best predictive model
Author
Step1: The default directory is the code subdirectory. Changing to the main repo directory above.
Upload Data
Step2: Random States
Step3: Training and Testing Split
Step4: Class Imbalance Corrections
Step5: Feature Selection
Step6: Note, since the formula for the variance of binary variables is p*(1-p), where p is the proportion of times that the binary variable is 1, I use the proportion to define the variance thresholds. The max variance is 0.25 at p=0.5.
Classification Models
Step7: Although tuning is not necessary for Naive Bayes, I pass the default parameters of those models to GridSearchCV anyway so that I can do a direct pair-wise comparison with the other models across the different steps of cross-validation.
In the interest of time, I didn't use the SVM classifier.
I used a training set to tune the model's hyperparameters and a test set to evaluate them. With more time and data, I would use repeated nested cross-validation to create a more robust model tuning, selection, and performance assessment workflow.
Step8: Creating Pipelines
Step9: Naive Bayes Estimators
Step10: QDA Estimators
Step11: Logistic Estimators
Step12: KNN Estimators
Step13: Random Forest Estimators
Step14: Fitting Estimators
Step15: Basic Estimators
Naive Bayes Estimators
Step16: QDA Estimators
Step17: Logistic Estimators
Step18: KNN Estimators
Step19: Random Forest Estimators
Step20: Testing Estimators
Below I show the ROC curves for the models over the test data.
Step21: Basic Estimators
Step22: Estimators with Random Oversampling of Minority Class
Step23: Interestingly, only the basic classifiers improve in predictive performance.
Estimators with SMOTE
Step24: Again, class imbalance corrections only benefit Naive Bayes and QDA.
Estimators with Random Undersampling of Majority Class
Step25: Training QDA with undersampling resulted in errors.
Estimators with Tomek Link Removal
Step26: Plotting the Best of Each Classifier
Step27: Inspecting the Best Classifier
While best model performs extremely well. It is important to be aware of model characteristics like the variability of prediction quality, a key model reliability metric, and the important features, which should inform data maintenance and engineering practices as well as model interpretation.
Variability of Prediction Quality
Normally I would use bootstrapped samples of the test data, with the model fitted on the whole training data, to obtain an empirical distribution of the model's performance (AUC ROC in this case). However, with limited data and time, I will use the AUC ROC on the validation folds of the CV grid search to get a sense of the variability. Normally the validation set AUC ROC values will be biased towards optimism compared to the true out of sample performance (on the test set), however, this isn't the case in the example below.
Step28: Isolate the best parameters
Step29: Fortunately, the performance is stable. With more data and time, I would do repeat cross-validation or repeated nested cross-validation to get more robust estimates of the out of sample error and its variability.
Important features
Unfortunately, even on a standardized scale, coefficient magnitude is not necessarily the right way to determine variable importance in a logistic regression. Fortunately, the random forest classifier has similar performance to the logistic regression, so I will use it to identify important features.
Step30: It is not surprising that the most important two features are ad attributes. This also adds confidence to the model by showing that the most important features make intuitive sense; although robust models can have seemingly non-intuitive important features.
Many of the other top ten features, like urlstatic.wired.com and ancurlwww.amazon.com, also make sense because they are likely links to the urls of the company that owns the ad.
I would not be surprised if adding random forest as a feature selection step in the pipeline wouldn't bring the AUC ROC of nearly all of the classifiers to 0.99.
Final Thoughts
The best model is clearly the logistic classifier without the sampling-based class imbalance corrections. While the random forest mirrors its performance, the random forest is a much more complex and computationally expensive model. Therefore, in practice, the logistic classifier would be best.
It is surprising that the class imbalance corrections had a limited impact on the more complex classifiers. In other classification tasks, the class imbalance corrections, especially the Tomek Link removals, significantly improved the AUC ROC of the more complex classifiers. However every classification task is different.
If the AUC ROC was not already 0.99, I would try feature selection via random forest, ensemble methods other than random forest like bagging classifiers, adaptive boosting, or even extreme gradient boosting or do more aggressive feature engineering and hyperparameter tuning. However at an AUC ROC of 0.99, this is definitely not a good investment of time.
Saving parameters of final model | Python Code:
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import json
from IPython.display import Image
from IPython.core.display import HTML
Explanation: Model Training
Code for finding the best predictive model
Author: Jimmy Charité
Email: [email protected]
Date: January 8, 2017
Directory & Initial Packages
End of explanation
os.chdir("..")  # move from the code subdirectory up to the main repo directory
clean_data=pd.read_csv("./clean_data/modeling_data.csv")
clean_data.head()
Explanation: The default directory is the code subdirectory. Changing to the main repo directory above.
Upload Data
End of explanation
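Since the class imbalance corrections later in the notebook assume an imbalanced label, it is worth confirming the split of the target before modeling. The last column of clean_data is the label (exactly as in the train/test split below), so a quick check — a small illustrative sketch, not a cell from the original notebook — is:
# Illustrative check of the class balance of the target (the last column).
# A heavily skewed split is what motivates the resampling corrections below.
print(clean_data.iloc[:, -1].value_counts(normalize=True))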
my_rand_state=0
Explanation: Random States
End of explanation
from sklearn.model_selection import train_test_split
X = (clean_data.iloc[:,:-1]).as_matrix()
y = (clean_data.iloc[:,-1]).tolist()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
random_state=my_rand_state)
Explanation: Training and Testing Split
End of explanation
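Because the label is imbalanced, one optional refinement of this split is to stratify it so that both the training and test sets keep the same class proportions. This is only a sketch of that alternative, not what was run above:
# Sketch of a stratified alternative to the split above (not used in the
# original notebook); stratify=y preserves the class ratio in both sets.
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=my_rand_state)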
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler, TomekLinks
ros = RandomOverSampler(random_state=my_rand_state)
smote = SMOTE(random_state=my_rand_state)
rus = RandomUnderSampler(random_state=my_rand_state)
tl = TomekLinks(random_state=my_rand_state)
Explanation: Class Imbalance Corrections
End of explanation
from sklearn.feature_selection import VarianceThreshold
vt = VarianceThreshold()
threshold=[p*(1-p) for p in [0, 0.05, 0.1, 0.15]]
Explanation: Feature Selection
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
Explanation: Note, since the formula for the variance of binary variables is p*(1-p), where p is the proportion of times that the binary variable is 1, I use the proportion to define the variance thresholds. The max variance is 0.25 at p=0.5.
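As a quick illustration of that reasoning (a side calculation, not part of the original notebook), the chosen proportions map to these variance thresholds:
for p in [0, 0.05, 0.1, 0.15]:
    print(p, '->', p*(1-p))   # 0 -> 0.0, 0.05 -> 0.0475, 0.1 -> 0.09, 0.15 -> 0.1275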
Classification Models
End of explanation
nb_clf=GaussianNB()
priors=[None]
qda_clf=QuadraticDiscriminantAnalysis()
reg_param=[0.0, 0.25, 0.5, 0.75]
log_clf=LogisticRegression()
C=[0.001 , 0.01, 10, 100,1000]
knn_clf=KNeighborsClassifier(n_jobs=4)
n_neighbors=list(range(1,17,2))
weights=['uniform','distance']
rf_clf=RandomForestClassifier()
n_estimators=[100]
max_features=[.1,.3,.5]
class_weight=['balanced']
class_weight.extend([{1: w} for w in [1, 2, 10]])
Explanation: Although tuning is not necessary for Naive Bayes, I pass the default parameters of those models to GridSearchCV anyway so that I can do a direct pair-wise comparison with the other models across the different steps of cross-validation.
In the interest of time, I didn't use the SVM classifier.
I used a training set to tune the model's hyperparameters and a test set to evaluate them. With more time and data, I would use repeated nested cross-validation to create a more robust model tuning, selection, and performance assessment workflow.
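As a rough sketch of what that could look like (illustrative only, not run here), one of the GridSearchCV estimators defined below can be wrapped in an outer cross-validation loop, with RepeatedStratifiedKFold supplying the repetition:
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
# inner loop = the grid search itself; outer loop = an honest estimate of the tuned model's performance
outer_cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=my_rand_state)
nested_scores = cross_val_score(log_clf_est_b, X_train, y_train,
                                scoring='roc_auc', cv=outer_cv, n_jobs=n_jobs)
print(nested_scores.mean(), nested_scores.std())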
End of explanation
from imblearn import pipeline #needed if mixing imblearn with sklearn classes
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
n_jobs=4
n_folds=10
skfold = StratifiedKFold(n_splits=n_folds, shuffle=False) # random_state is ignored (and rejected by newer sklearn) when shuffle=False
Explanation: Creating Pipelines
End of explanation
nb_clf_b = pipeline.Pipeline(steps=[('vt',vt),('clf',nb_clf)])
nb_clf_est_b = GridSearchCV(estimator=nb_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__priors=priors))
nb_clf_ros = pipeline.Pipeline(steps=[('ros',ros),('vt',vt),
('clf',nb_clf)])
nb_clf_est_ros = GridSearchCV(estimator=nb_clf_ros,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__priors=priors))
nb_clf_smote = pipeline.Pipeline(steps=[('smote',smote),('vt',vt),
('clf',nb_clf)])
nb_clf_est_smote = GridSearchCV(estimator=nb_clf_smote,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__priors=priors))
nb_clf_rus = pipeline.Pipeline(steps=[('rus',rus),('vt',vt),
('clf',nb_clf)])
nb_clf_est_rus = GridSearchCV(estimator=nb_clf_rus,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__priors=priors))
nb_clf_tl = pipeline.Pipeline(steps=[('tl',tl),('vt',vt),
('clf',nb_clf)])
nb_clf_est_tl = GridSearchCV(estimator=nb_clf_tl,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__priors=priors))
Explanation: Naive Bayes Estimators
End of explanation
qda_clf_b = pipeline.Pipeline(steps=[('vt',vt),('clf',qda_clf)])
qda_clf_est_b = GridSearchCV(estimator=qda_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__reg_param=reg_param))
qda_clf_ros = pipeline.Pipeline(steps=[('ros',ros),('vt',vt),
('clf',qda_clf)])
qda_clf_est_ros = GridSearchCV(estimator=qda_clf_ros,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__reg_param=reg_param))
qda_clf_smote = pipeline.Pipeline(steps=[('smote',smote),('vt',vt),
('clf',qda_clf)])
qda_clf_est_smote = GridSearchCV(estimator=qda_clf_smote,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__reg_param=reg_param))
qda_clf_rus = pipeline.Pipeline(steps=[('rus',rus),('vt',vt),
('clf',qda_clf)])
qda_clf_est_rus = GridSearchCV(estimator=qda_clf_rus,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__reg_param=reg_param))
qda_clf_tl = pipeline.Pipeline(steps=[('tl',tl),('vt',vt),
('clf',qda_clf)])
qda_clf_est_tl = GridSearchCV(estimator=qda_clf_tl,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__reg_param=reg_param))
Explanation: QDA Estimators
End of explanation
log_clf_b = pipeline.Pipeline(steps=[('vt',vt),('clf',log_clf)])
log_clf_est_b = GridSearchCV(estimator=log_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__C=C,
clf__class_weight=class_weight))
log_clf_ros = pipeline.Pipeline(steps=[('ros',ros),('vt',vt),
('clf',log_clf)])
log_clf_est_ros = GridSearchCV(estimator=log_clf_ros,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__C=C,
clf__class_weight=class_weight))
log_clf_smote = pipeline.Pipeline(steps=[('smote',smote),('vt',vt),
('clf',log_clf)])
log_clf_est_smote = GridSearchCV(estimator=log_clf_smote,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__C=C,
clf__class_weight=class_weight))
log_clf_rus = pipeline.Pipeline(steps=[('rus',rus),('vt',vt),
('clf',log_clf)])
log_clf_est_rus = GridSearchCV(estimator=log_clf_rus,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__C=C,
clf__class_weight=class_weight))
log_clf_tl = pipeline.Pipeline(steps=[('tl',tl),('vt',vt),
('clf',log_clf)])
log_clf_est_tl = GridSearchCV(estimator=log_clf_tl,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,clf__C=C,
clf__class_weight=class_weight))
Explanation: Logistic Estimators
End of explanation
knn_clf_b = pipeline.Pipeline(steps=[('vt',vt),('clf',knn_clf)])
knn_clf_est_b = GridSearchCV(estimator=knn_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_neighbors=n_neighbors,
clf__weights=weights))
knn_clf_ros = pipeline.Pipeline(steps=[('ros',ros),('vt',vt),
('clf',knn_clf)])
knn_clf_est_ros = GridSearchCV(estimator=knn_clf_ros,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_neighbors=n_neighbors,
clf__weights=weights))
knn_clf_smote = pipeline.Pipeline(steps=[('smote',smote),('vt',vt),
('clf',knn_clf)])
knn_clf_est_smote = GridSearchCV(estimator=knn_clf_smote,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_neighbors=n_neighbors,
clf__weights=weights))
knn_clf_rus = pipeline.Pipeline(steps=[('rus',rus),('vt',vt),
('clf',knn_clf)])
knn_clf_est_rus = GridSearchCV(estimator=knn_clf_rus,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_neighbors=n_neighbors,
clf__weights=weights))
knn_clf_tl = pipeline.Pipeline(steps=[('tl',tl),('vt',vt),
('clf',knn_clf)])
knn_clf_est_tl = GridSearchCV(estimator=knn_clf_tl,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_neighbors=n_neighbors,
clf__weights=weights))
Explanation: KNN Estimators
End of explanation
rf_clf_b = pipeline.Pipeline(steps=[('vt',vt),('clf',rf_clf)])
rf_clf_est_b = GridSearchCV(estimator=rf_clf_b,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
rf_clf_ros = pipeline.Pipeline(steps=[('ros',ros),('vt',vt),
('clf',rf_clf)])
rf_clf_est_ros = GridSearchCV(estimator=rf_clf_ros,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
rf_clf_smote = pipeline.Pipeline(steps=[('smote',smote),('vt',vt),
('clf',rf_clf)])
rf_clf_est_smote = GridSearchCV(estimator=rf_clf_smote,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
rf_clf_rus = pipeline.Pipeline(steps=[('rus',rus),('vt',vt),
('clf',rf_clf)])
rf_clf_est_rus = GridSearchCV(estimator=rf_clf_rus,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
rf_clf_tl = pipeline.Pipeline(steps=[('tl',tl),('vt',vt),
('clf',rf_clf)])
rf_clf_est_tl = GridSearchCV(estimator=rf_clf_tl,cv=skfold,
scoring='roc_auc',n_jobs=n_jobs,
param_grid=dict(vt__threshold=threshold,
clf__n_estimators=n_estimators,
clf__max_features=max_features,
clf__class_weight=class_weight))
Explanation: Random Forest Estimators
End of explanation
from sklearn.externals import joblib
Explanation: Fitting Estimators
End of explanation
nb_clf_est_b.fit(X_train,y_train)
joblib.dump(nb_clf_est_b, './other_output/nb_clf_est_b.pkl')
nb_clf_est_ros.fit(X_train,y_train)
joblib.dump(nb_clf_est_ros, './other_output/nb_clf_est_ros.pkl')
nb_clf_est_smote.fit(X_train,y_train)
joblib.dump(nb_clf_est_smote, './other_output/nb_clf_est_smote.pkl')
nb_clf_est_rus.fit(X_train,y_train)
joblib.dump(nb_clf_est_rus, './other_output/nb_clf_est_rus.pkl')
nb_clf_est_tl.fit(X_train,y_train)
joblib.dump(nb_clf_est_tl, './other_output/nb_clf_est_tl.pkl')
Explanation: Basic Estimators
Naive Bayes Estimators
End of explanation
qda_clf_est_b.fit(X_train,y_train)
joblib.dump(qda_clf_est_b, './other_output/qda_clf_est_b.pkl')
qda_clf_est_ros.fit(X_train,y_train)
joblib.dump(qda_clf_est_ros, './other_output/qda_clf_est_ros.pkl')
qda_clf_est_smote.fit(X_train,y_train)
joblib.dump(qda_clf_est_smote, './other_output/qda_clf_est_smote.pkl')
qda_clf_est_rus.fit(X_train,y_train)
joblib.dump(qda_clf_est_rus, './other_output/qda_clf_est_rus.pkl')
qda_clf_est_tl.fit(X_train,y_train)
joblib.dump(qda_clf_est_tl, './other_output/qda_clf_est_tl.pkl')
Explanation: QDA Estimators
End of explanation
log_clf_est_b.fit(X_train,y_train)
joblib.dump(log_clf_est_b, './other_output/log_clf_est_b.pkl')
log_clf_est_ros.fit(X_train,y_train)
joblib.dump(log_clf_est_ros, './other_output/log_clf_est_ros.pkl')
log_clf_est_smote.fit(X_train,y_train)
joblib.dump(log_clf_est_smote, './other_output/log_clf_est_smote.pkl')
log_clf_est_rus.fit(X_train,y_train)
joblib.dump(log_clf_est_rus, './other_output/log_clf_est_rus.pkl')
log_clf_est_tl.fit(X_train,y_train)
joblib.dump(log_clf_est_tl, './other_output/log_clf_est_tl.pkl')
Explanation: Logistic Estimators
End of explanation
knn_clf_est_b.fit(X_train,y_train)
joblib.dump(knn_clf_est_b, './other_output/knn_clf_est_b.pkl')
knn_clf_est_ros.fit(X_train,y_train)
joblib.dump(knn_clf_est_ros, './other_output/knn_clf_est_ros.pkl')
knn_clf_est_smote.fit(X_train,y_train)
joblib.dump(knn_clf_est_smote, './other_output/knn_clf_est_smote.pkl')
knn_clf_est_rus.fit(X_train,y_train)
joblib.dump(knn_clf_est_rus, './other_output/knn_clf_est_rus.pkl')
knn_clf_est_tl.fit(X_train,y_train)
joblib.dump(knn_clf_est_tl, './other_output/knn_clf_est_tl.pkl')
Explanation: KNN Estimators
End of explanation
rf_clf_est_b.fit(X_train,y_train)
joblib.dump(rf_clf_est_b, './other_output/rf_clf_est_b.pkl')
rf_clf_est_ros.fit(X_train,y_train)
joblib.dump(rf_clf_est_ros, './other_output/rf_clf_est_ros.pkl')
rf_clf_est_smote.fit(X_train,y_train)
joblib.dump(rf_clf_est_smote, './other_output/rf_clf_est_smote.pkl')
rf_clf_est_rus.fit(X_train,y_train)
joblib.dump(rf_clf_est_rus, './other_output/rf_clf_est_rus.pkl')
rf_clf_est_tl.fit(X_train,y_train)
joblib.dump(rf_clf_est_tl, './other_output/rf_clf_est_tl.pkl')
Explanation: Random Forest Estimators
End of explanation
from sklearn.metrics import roc_curve, auc
Explanation: Testing Estimators
Below I show the ROC curves for the models over the test data.
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_b.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_b.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_b.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_b.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_b.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Basic Models')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_Basic.png', bbox_inches='tight')
plt.show()
Explanation: Basic Estimators
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_ros.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_ros.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_ros.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_ros.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_ros.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Models with Oversampling')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_ROS.png', bbox_inches='tight')
plt.show()
Explanation: Estimators with Random Oversampling of Minority Class
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_smote.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_smote.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_smote.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_smote.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_smote.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Models with SMOTE')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_SMOTE.png', bbox_inches='tight')
plt.show()
Explanation: Interestingly, only the basic classifiers improve in predictive performance.
Estimators with SMOTE
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_rus.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_rus.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_rus.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_rus.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Models with Undersampling')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_RUS.png', bbox_inches='tight')
plt.show()
Explanation: Again, class imbalance corrections only benefit Naive Bayes and QDA.
Estimators with Random Undersampling of Majority Class
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_tl.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_tl.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_tl.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_tl.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_tl.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Models with Tomek Link Removal')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_tl.png', bbox_inches='tight')
plt.show()
Explanation: Training QDA with undersampling resulted in errors.
Estimators with Tomek Link Removal
End of explanation
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_ros.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_ros.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_b.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
knn_fpr, knn_tpr, _ = roc_curve(y_test,
knn_clf_est_rus.predict_proba(X_test)[:,1])
knn_roc_auc = auc(knn_fpr, knn_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_b.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(knn_fpr, knn_tpr, color='yellow', linestyle='--',
label='KNN (area = %0.2f)' % knn_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Best Iteration of Each Model')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_Best.png', bbox_inches='tight')
plt.show()
Explanation: Plotting the Best of Each Classifier
End of explanation
log_cv_results=pd.DataFrame(log_clf_est_b.cv_results_)
log_cv_results.head()
Explanation: Inspecting the Best Classifier
While the best model performs extremely well, it is important to be aware of model characteristics like the variability of prediction quality, a key model reliability metric, and the important features, which should inform data maintenance and engineering practices as well as model interpretation.
Variability of Prediction Quality
Normally I would use bootstrapped samples of the test data, with the model fitted on the whole training data, to obtain an empirical distribution of the model's performance (AUC ROC in this case). However, with limited data and time, I will use the AUC ROC on the validation folds of the CV grid search to get a sense of the variability. Normally the validation set AUC ROC values will be biased towards optimism compared to the true out of sample performance (on the test set); however, this isn't the case in the example below.
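For reference, a minimal sketch of that bootstrap approach (illustrative only, not run here):
from sklearn.utils import resample
from sklearn.metrics import roc_auc_score
boot_aucs = []
for i in range(1000):
    X_bs, y_bs = resample(X_test, y_test, random_state=i)  # resample test rows with replacement
    boot_aucs.append(roc_auc_score(y_bs, log_clf_est_b.predict_proba(X_bs)[:, 1]))
print(np.percentile(boot_aucs, [2.5, 50, 97.5]))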
End of explanation
log_cv_results=log_cv_results[(log_cv_results.rank_test_score==1)]
log_cv_results.head()
keep_cols=["split"+str(i)+"_test_score" for i in range(0,10)]
log_cv_results=log_cv_results[keep_cols]
log_cv_results.head()
temp=log_cv_results.T.reset_index()
temp.columns=['Fold','Validation AUC ROC']
temp['Fold']=range(1,11)
temp
temp['Validation AUC ROC'].describe()
Explanation: Isolate the best parameters
End of explanation
rf_clf_b.set_params(**rf_clf_est_b.best_params_).fit(X_train,y_train)
importance=rf_clf_b.named_steps['clf'].feature_importances_
indices = np.argsort(importance)[::-1]
feature_importance=pd.DataFrame({'feature':clean_data.columns[:-1][indices],
                                 'importance':importance[indices]}) # index importances the same way as the feature names
feature_importance.sort_values(by='importance',inplace=True,ascending=False)
feature_importance[:10]
Explanation: Fortunately, the performance is stable. With more data and time, I would do repeated cross-validation or repeated nested cross-validation to get more robust estimates of the out of sample error and its variability.
Important features
Unfortunately, even on a standardized scale, coefficient magnitude is not necessarily the right way to determine variable importance in a logistic regression. Fortunately, the random forest classifier has similar performance to the logistic regression, so I will use it to identify important features.
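A model-agnostic alternative worth noting is permutation importance, which applies directly to the logistic pipeline; a rough sketch, assuming a scikit-learn version that ships sklearn.inspection (0.22+):
from sklearn.inspection import permutation_importance
perm = permutation_importance(log_clf_est_b, X_test, y_test, scoring='roc_auc',
                              n_repeats=10, random_state=my_rand_state)
pd.Series(perm.importances_mean, index=clean_data.columns[:-1]).sort_values(ascending=False)[:10]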
End of explanation
best_paras = log_clf_est_b.best_params_
best_paras
with open('./model_para/logistic_best_paras.json', 'w') as outfile:
json.dump(best_paras, outfile)
Explanation: It is not surprising that the most important two features are ad attributes. This also adds confidence to the model by showing that the most important features make intuitive sense; although robust models can have seemingly non-intuitive important features.
Many of the other top ten features, like urlstatic.wired.com and ancurlwww.amazon.com, also make sense because they are likely links to the urls of the company that owns the ad.
I would not be surprised if adding random forest as a feature selection step in the pipeline brought the AUC ROC of nearly all of the classifiers up to 0.99.
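A hypothetical version of that pipeline (untested here) could insert a SelectFromModel step before the classifier:
from sklearn.feature_selection import SelectFromModel
rf_selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=my_rand_state))
log_clf_fs = pipeline.Pipeline(steps=[('vt', vt), ('fs', rf_selector), ('clf', log_clf)])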
Final Thoughts
The best model is clearly the logistic classifier without the sampling-based class imbalance corrections. While the random forest mirrors its performance, the random forest is a much more complex and computationally expensive model. Therefore, in practice, the logistic classifier would be best.
It is surprising that the class imbalance corrections had a limited impact on the more complex classifiers. In other classification tasks, the class imbalance corrections, especially the Tomek Link removals, significantly improved the AUC ROC of the more complex classifiers. However, every classification task is different.
If the AUC ROC were not already 0.99, I would try feature selection via random forest; ensemble methods other than random forest, such as bagging classifiers, adaptive boosting, or even extreme gradient boosting; or more aggressive feature engineering and hyperparameter tuning. However, at an AUC ROC of 0.99, this is definitely not a good investment of time.
Saving parameters of final model
End of explanation |
11,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Let's download and import our primary Canadian Immigration dataset using pandas read_excel() method. Normally, before we can do that, we would need to download a module which pandas requires to read in excel files. This module is xlrd. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the xlrd module
Step2: Let's take a look at the first five items in our dataset
Step3: Let's find out how many entries there are in our dataset
Step4: Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to Introduction to Matplotlib and Line Plots and Area Plots, Histograms, and Bar Plots for a detailed description of this preprocessing.
Step5: Visualizing Data using Matplotlib<a id="4"></a>
Import matplotlib
Step6: Waffle Charts <a id="6"></a>
A waffle chart is an interesting visualization that is normally created to display progress toward goals. It is commonly an effective option when you are trying to add interesting visualization features to a visual that consists mainly of cells, such as an Excel dashboard.
Let's revisit the previous case study about Denmark, Norway, and Sweden.
Step7: Unfortunately, unlike R, waffle charts are not built into any of the Python visualization libraries. Therefore, we will learn how to create them from scratch.
Step 1. The first step in creating a waffle chart is determining the proportion of each category with respect to the total.
Step8: Step 2. The second step is defining the overall size of the waffle chart.
Step9: Step 3. The third step is using the proportion of each category to determe it respective number of tiles
Step10: Based on the calculated proportions, Denmark will occupy 129 tiles of the waffle chart, Norway will occupy 77 tiles, and Sweden will occupy 194 tiles.
Step 4. The fourth step is creating a matrix that resembles the waffle chart and populating it.
Step11: Let's take a peek at how the matrix looks like.
Step12: As expected, the matrix consists of three categories and the total number of each category's instances matches the total number of tiles allocated to each category.
Step 5. Map the waffle chart matrix into a visual.
Step13: Step 6. Prettify the chart.
Step14: Step 7. Create a legend and add it to chart.
Step15: And there you go! What a good looking delicious waffle chart, don't you think?
Now it would very inefficient to repeat these seven steps every time we wish to create a waffle chart. So let's combine all seven steps into one function called create_waffle_chart. This function would take the following parameters as input
Step16: Now to create a waffle chart, all we have to do is call the function create_waffle_chart. Let's define the input parameters
Step17: And now let's call our function to create a waffle chart.
Step18: There seems to be a new Python package for generating waffle charts called PyWaffle, but it looks like the repository is still being built. But feel free to check it out and play with it.
Word Clouds <a id="8"></a>
Word clouds (also known as text clouds or tag clouds) work in a simple way
Step19: Word clouds are commonly used to perform high-level analysis and visualization of text data. Accordinly, let's digress from the immigration dataset and work with an example that involves analyzing text data. Let's try to analyze a short novel written by Lewis Carroll titled Alice's Adventures in Wonderland. Let's go ahead and download a .txt file of the novel.
Step20: Next, let's use the stopwords that we imported from word_cloud. We use the function set to remove any redundant stopwords.
Step21: Create a word cloud object and generate a word cloud. For simplicity, let's generate a word cloud using only the first 2000 words in the novel.
Step22: Awesome! Now that the word cloud is created, let's visualize it.
Step23: Interesting! So in the first 2000 words in the novel, the most common words are Alice, said, little, Queen, and so on. Let's resize the cloud so that we can see the less frequent words a little better.
Step24: Much better! However, said isn't really an informative word. So let's add it to our stopwords and re-generate the cloud.
Step25: Excellent! This looks really interesting! Another cool thing you can implement with the word_cloud package is superimposing the words onto a mask of any shape. Let's use a mask of Alice and her rabbit. We already created the mask for you, so let's go ahead and download it and call it alice_mask.png.
Step26: Let's take a look at how the mask looks like.
Step27: Shaping the word cloud according to the mask is straightforward using word_cloud package. For simplicity, we will continue using the first 2000 words in the novel.
Step28: Really impressive!
Unfortunately, our immmigration data does not have any text data, but where there is a will there is a way. Let's generate sample text data from our immigration dataset, say text data of 90 words.
Let's recall how our data looks like.
Step29: And what was the total immigration from 1980 to 2013?
Step30: Using countries with single-word names, let's duplicate each country's name based on how much they contribute to the total immigration.
Step31: We are not dealing with any stopwords here, so there is no need to pass them when creating the word cloud.
Step32: According to the above word cloud, it looks like the majority of the people who immigrated came from one of 15 countries that are displayed by the word cloud. One cool visual that you could build, is perhaps using the map of Canada and a mask and superimposing the word cloud on top of the map of Canada. That would be an interesting visual to build!
Regression Plots <a id="10"></a>
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. You can learn more about seaborn by following this link and more about seaborn regression plots by following this link.
In lab Pie Charts, Box Plots, Scatter Plots, and Bubble Plots, we learned how to create a scatter plot and then fit a regression line. It took ~20 lines of code to create the scatter plot along with the regression fit. In this final section, we will explore seaborn and see how efficient it is to create regression lines and fits using this library!
Let's first install seaborn
Step33: Create a new dataframe that stores that total number of landed immigrants to Canada per year from 1980 to 2013.
Step34: With seaborn, generating a regression plot is as simple as calling the regplot function.
Step35: This is not magic; it is seaborn! You can also customize the color of the scatter plot and regression line. Let's change the color to green.
Step36: You can always customize the marker shape, so instead of circular markers, let's use '+'.
Step37: Let's blow up the plot a little bit so that it is more appealing to the sight.
Step38: And let's increase the size of markers so they match the new size of the figure, and add a title and x- and y-labels.
Step39: And finally increase the font size of the tickmark labels, the title, and the x- and y-labels so they don't feel left out!
Step40: Amazing! A complete scatter plot with a regression fit with 5 lines of code only. Isn't this really amazing?
If you are not a big fan of the purple background, you can easily change the style to a white plain background.
Step41: Or to a white background with gridlines.
Step42: Question | Python Code:
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
from PIL import Image # converting images into arrays
Explanation: <a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a>
<h1 align=center><font size = 5>Waffle Charts, Word Clouds, and Regression Plots</font></h1>
Introduction
In this lab, we will learn how to create word clouds and waffle charts. Furthermore, we will start learning about additional visualization libraries that are based on Matplotlib, namely the library seaborn, and we will learn how to create regression plots using the seaborn library.
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
1. [Exploring Datasets with *p*andas](#0)<br>
2. [Downloading and Prepping Data](#2)<br>
3. [Visualizing Data using Matplotlib](#4) <br>
4. [Waffle Charts](#6) <br>
5. [Word Clouds](#8) <br>
7. [Regression Plots](#10) <br>
</div>
<hr>
Exploring Datasets with pandas and Matplotlib<a id="0"></a>
Toolkits: The course heavily relies on pandas and Numpy for data wrangling, analysis, and visualization. The primary plotting library we will explore in the course is Matplotlib.
Dataset: Immigration to Canada from 1980 to 2013 - International migration flows to and from selected countries - The 2015 revision from United Nation's website
The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. In this lab, we will focus on the Canadian Immigration data.
Downloading and Prepping Data <a id="2"></a>
Import Primary Modules:
End of explanation
df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2)
print('Data downloaded and read into a dataframe!')
Explanation: Let's download and import our primary Canadian Immigration dataset using pandas read_excel() method. Normally, before we can do that, we would need to download a module which pandas requires to read in excel files. This module is xlrd. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the xlrd module:
!conda install -c anaconda xlrd --yes
Download the dataset and read it into a pandas dataframe:
End of explanation
df_can.head()
Explanation: Let's take a look at the first five items in our dataset
End of explanation
# print the dimensions of the dataframe
print(df_can.shape)
Explanation: Let's find out how many entries there are in our dataset
End of explanation
# clean up the dataset to remove unnecessary columns (eg. REG)
df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis = 1, inplace = True)
# let's rename the columns so that they make sense
df_can.rename (columns = {'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace = True)
# for sake of consistency, let's also make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace = True)
# add total column
df_can['Total'] = df_can.sum (axis = 1)
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
print ('data dimensions:', df_can.shape)
Explanation: Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to Introduction to Matplotlib and Line Plots and Area Plots, Histograms, and Bar Plots for a detailed description of this preprocessing.
End of explanation
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches # needed for waffle Charts
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
Explanation: Visualizing Data using Matplotlib<a id="4"></a>
Import matplotlib:
End of explanation
# let's create a new dataframe for these three countries
df_dsn = df_can.loc[['Denmark', 'Norway', 'Sweden'], :]
# let's take a look at our dataframe
df_dsn
Explanation: Waffle Charts <a id="6"></a>
A waffle chart is an interesting visualization that is normally created to display progress toward goals. It is commonly an effective option when you are trying to add interesting visualization features to a visual that consists mainly of cells, such as an Excel dashboard.
Let's revisit the previous case study about Denmark, Norway, and Sweden.
End of explanation
# compute the proportion of each category with respect to the total
total_values = sum(df_dsn['Total'])
category_proportions = [(float(value) / total_values) for value in df_dsn['Total']]
# print out proportions
for i, proportion in enumerate(category_proportions):
print (df_dsn.index.values[i] + ': ' + str(proportion))
Explanation: Unfortunately, unlike R, waffle charts are not built into any of the Python visualization libraries. Therefore, we will learn how to create them from scratch.
Step 1. The first step in creating a waffle chart is determining the proportion of each category with respect to the total.
End of explanation
width = 40 # width of chart
height = 10 # height of chart
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is ', total_num_tiles)
Explanation: Step 2. The second step is defining the overall size of the waffle chart.
End of explanation
# compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
print (df_dsn.index.values[i] + ': ' + str(tiles))
Explanation: Step 3. The third step is using the proportion of each category to determine its respective number of tiles.
End of explanation
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
print ('Waffle chart populated!')
Explanation: Based on the calculated proportions, Denmark will occupy 129 tiles of the waffle chart, Norway will occupy 77 tiles, and Sweden will occupy 194 tiles.
Step 4. The fourth step is creating a matrix that resembles the waffle chart and populating it.
End of explanation
waffle_chart
Explanation: Let's take a peek at what the matrix looks like.
End of explanation
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
Explanation: As expected, the matrix consists of three categories and the total number of each category's instances matches the total number of tiles allocated to each category.
Step 5. Map the waffle chart matrix into a visual.
End of explanation
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
Explanation: Step 6. Prettify the chart.
End of explanation
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
values_cumsum = np.cumsum(df_dsn['Total'])
total_values = values_cumsum[len(values_cumsum) - 1]
# create legend
legend_handles = []
for i, category in enumerate(df_dsn.index.values):
label_str = category + ' (' + str(df_dsn['Total'][i]) + ')'
color_val = colormap(float(values_cumsum[i])/total_values)
legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(handles=legend_handles,
loc='lower center',
ncol=len(df_dsn.index.values),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
Explanation: Step 7. Create a legend and add it to chart.
End of explanation
def create_waffle_chart(categories, values, height, width, colormap, value_sign=''):
# compute the proportion of each category with respect to the total
total_values = sum(values)
category_proportions = [(float(value) / total_values) for value in values]
# compute the total number of tiles
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is', total_num_tiles)
    # compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
print (df_dsn.index.values[i] + ': ' + str(tiles))
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category
# is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
    # add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
values_cumsum = np.cumsum(values)
total_values = values_cumsum[len(values_cumsum) - 1]
# create legend
legend_handles = []
for i, category in enumerate(categories):
if value_sign == '%':
label_str = category + ' (' + str(values[i]) + value_sign + ')'
else:
label_str = category + ' (' + value_sign + str(values[i]) + ')'
color_val = colormap(float(values_cumsum[i])/total_values)
legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(
handles=legend_handles,
loc='lower center',
ncol=len(categories),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
Explanation: And there you go! What a good looking delicious waffle chart, don't you think?
Now it would very inefficient to repeat these seven steps every time we wish to create a waffle chart. So let's combine all seven steps into one function called create_waffle_chart. This function would take the following parameters as input:
categories: Unique categories or classes in dataframe.
values: Values corresponding to categories or classes.
height: Defined height of waffle chart.
width: Defined width of waffle chart.
colormap: Colormap class
value_sign: In order to make our function more generalizable, we will add this parameter to address signs that could be associated with a value such as %, $, and so on. value_sign has a default value of empty string.
End of explanation
width = 40 # width of chart
height = 10 # height of chart
categories = df_dsn.index.values # categories
values = df_dsn['Total'] # correponding values of categories
colormap = plt.cm.coolwarm # color map class
Explanation: Now to create a waffle chart, all we have to do is call the function create_waffle_chart. Let's define the input parameters:
End of explanation
create_waffle_chart(categories, values, height, width, colormap)
Explanation: And now let's call our function to create a waffle chart.
End of explanation
# install wordcloud
!conda install -c conda-forge wordcloud==1.4.1 --yes
# import package and its set of stopwords
from wordcloud import WordCloud, STOPWORDS
print ('Wordcloud is installed and imported!')
Explanation: There seems to be a new Python package for generating waffle charts called PyWaffle, but it looks like the repository is still being built. But feel free to check it out and play with it.
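If you do try it out, a hypothetical PyWaffle call (assuming the package installs cleanly) might look like this:
# !pip install pywaffle
from pywaffle import Waffle
fig = plt.figure(FigureClass=Waffle, rows=10,
                 values=df_dsn['Total'].to_dict(),  # {'Denmark': ..., 'Norway': ..., 'Sweden': ...}
                 legend={'loc': 'lower center', 'ncol': 3})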
Word Clouds <a id="8"></a>
Word clouds (also known as text clouds or tag clouds) work in a simple way: the more a specific word appears in a source of textual data (such as a speech, blog post, or database), the bigger and bolder it appears in the word cloud.
Luckily, a Python package for generating word clouds already exists. The package, called word_cloud, was developed by Andreas Mueller. You can learn more about the package by following this link.
Let's use this package to learn how to generate a word cloud for a given text document.
First, let's install the package.
End of explanation
# download file and save as alice_novel.txt
!wget --quiet https://ibm.box.com/shared/static/m54sjtrshpt5su20dzesl5en9xa5vfz1.txt -O alice_novel.txt
# open the file and read it into a variable alice_novel
alice_novel = open('alice_novel.txt', 'r').read()
print ('File downloaded and saved!')
Explanation: Word clouds are commonly used to perform high-level analysis and visualization of text data. Accordingly, let's digress from the immigration dataset and work with an example that involves analyzing text data. Let's try to analyze a short novel written by Lewis Carroll titled Alice's Adventures in Wonderland. Let's go ahead and download a .txt file of the novel.
End of explanation
stopwords = set(STOPWORDS)
Explanation: Next, let's use the stopwords that we imported from word_cloud. We use the function set to remove any redundant stopwords.
End of explanation
# instantiate a word cloud object
alice_wc = WordCloud(
background_color='white',
max_words=2000,
stopwords=stopwords
)
# generate the word cloud
alice_wc.generate(alice_novel)
Explanation: Create a word cloud object and generate a word cloud. For simplicity, let's generate a word cloud using only the first 2000 words in the novel.
End of explanation
# display the word cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: Awesome! Now that the word cloud is created, let's visualize it.
End of explanation
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
# display the cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: Interesting! So in the first 2000 words in the novel, the most common words are Alice, said, little, Queen, and so on. Let's resize the cloud so that we can see the less frequent words a little better.
End of explanation
stopwords.add('said') # add the words said to stopwords
# re-generate the word cloud
alice_wc.generate(alice_novel)
# display the cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: Much better! However, said isn't really an informative word. So let's add it to our stopwords and re-generate the cloud.
End of explanation
# download image
!wget --quiet https://ibm.box.com/shared/static/3mpxgaf6muer6af7t1nvqkw9cqj85ibm.png -O alice_mask.png
# save mask to alice_mask
alice_mask = np.array(Image.open('alice_mask.png'))
print('Image downloaded and saved!')
Explanation: Excellent! This looks really interesting! Another cool thing you can implement with the word_cloud package is superimposing the words onto a mask of any shape. Let's use a mask of Alice and her rabbit. We already created the mask for you, so let's go ahead and download it and call it alice_mask.png.
End of explanation
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_mask, cmap=plt.cm.gray, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: Let's take a look at what the mask looks like.
End of explanation
# instantiate a word cloud object
alice_wc = WordCloud(background_color='white', max_words=2000, mask=alice_mask, stopwords=stopwords)
# generate the word cloud
alice_wc.generate(alice_novel)
# display the word cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: Shaping the word cloud according to the mask is straightforward using word_cloud package. For simplicity, we will continue using the first 2000 words in the novel.
End of explanation
df_can.head()
Explanation: Really impressive!
Unfortunately, our immigration data does not have any text data, but where there is a will there is a way. Let's generate sample text data from our immigration dataset, say text data of 90 words.
Let's recall what our data looks like.
End of explanation
total_immigration = df_can['Total'].sum()
total_immigration
Explanation: And what was the total immigration from 1980 to 2013?
End of explanation
max_words = 90
word_string = ''
for country in df_can.index.values:
# check if country's name is a single-word name
if len(country.split(' ')) == 1:
repeat_num_times = int(df_can.loc[country, 'Total']/float(total_immigration)*max_words)
word_string = word_string + ((country + ' ') * repeat_num_times)
# display the generated text
word_string
Explanation: Using countries with single-word names, let's duplicate each country's name based on how much they contribute to the total immigration.
End of explanation
# create the word cloud
wordcloud = WordCloud(background_color='white').generate(word_string)
print('Word cloud created!')
# display the cloud
fig = plt.figure()
fig.set_figwidth(14)
fig.set_figheight(18)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
Explanation: We are not dealing with any stopwords here, so there is no need to pass them when creating the word cloud.
End of explanation
# install seaborn
!pip install seaborn
# import library
import seaborn as sns
print('Seaborn installed and imported!')
Explanation: According to the above word cloud, it looks like the majority of the people who immigrated came from one of the 15 countries displayed in the word cloud. One cool visual that you could build is a word cloud superimposed on a mask shaped like the map of Canada. That would be an interesting visual to build!
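A hypothetical sketch of that idea, assuming a canada_mask.png image were available locally (the mask file is not provided here):
canada_mask = np.array(Image.open('canada_mask.png'))  # hypothetical mask image
canada_wc = WordCloud(background_color='white', mask=canada_mask).generate(word_string)
fig = plt.figure()
fig.set_figwidth(14)  # set width
fig.set_figheight(18)  # set height
plt.imshow(canada_wc, interpolation='bilinear')
plt.axis('off')
plt.show()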
Regression Plots <a id="10"></a>
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. You can learn more about seaborn by following this link and more about seaborn regression plots by following this link.
In lab Pie Charts, Box Plots, Scatter Plots, and Bubble Plots, we learned how to create a scatter plot and then fit a regression line. It took ~20 lines of code to create the scatter plot along with the regression fit. In this final section, we will explore seaborn and see how efficient it is to create regression lines and fits using this library!
Let's first install seaborn
End of explanation
# we can use the sum() method to get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))
# change the years to type float (useful for regression later on)
df_tot.index = map(float,df_tot.index)
# reset the index to put in back in as a column in the df_tot dataframe
df_tot.reset_index(inplace = True)
# rename columns
df_tot.columns = ['year', 'total']
# view the final dataframe
df_tot.head()
Explanation: Create a new dataframe that stores that total number of landed immigrants to Canada per year from 1980 to 2013.
End of explanation
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot)
Explanation: With seaborn, generating a regression plot is as simple as calling the regplot function.
End of explanation
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green')
Explanation: This is not magic; it is seaborn! You can also customize the color of the scatter plot and regression line. Let's change the color to green.
End of explanation
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
Explanation: You can always customize the marker shape, so instead of circular markers, let's use '+'.
End of explanation
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
Explanation: Let's blow up the plot a little bit so that it is more appealing to the sight.
End of explanation
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration') # add x- and y-labels
ax.set_title('Total Immigration to Canada from 1980 - 2013') # add title
Explanation: And let's increase the size of markers so they match the new size of the figure, and add a title and x- and y-labels.
End of explanation
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
Explanation: And finally increase the font size of the tickmark labels, the title, and the x- and y-labels so they don't feel left out!
End of explanation
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('ticks') # change background to white background
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
Explanation: Amazing! A complete scatter plot with a regression fit with 5 lines of code only. Isn't this really amazing?
If you are not a big fan of the purple background, you can easily change the style to a white plain background.
End of explanation
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
Explanation: Or to a white background with gridlines.
End of explanation
### type your answer here
df_dsn_tot = pd.DataFrame(df_dsn[years].sum(axis=0))  # total from Denmark, Norway, and Sweden per year
df_dsn_tot.reset_index(inplace=True)
df_dsn_tot.columns = ['year', 'total']
df_dsn_tot['year'] = df_dsn_tot['year'].astype(int)
ax = sns.regplot(x='year', y='total', data=df_dsn_tot, color='green', marker='+')
Explanation: Question: Use seaborn to create a scatter plot with a regression line to visualize the total immigration from Denmark, Sweden, and Norway to Canada from 1980 to 2013.
End of explanation |
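One possible solution sketch (it assumes df_can is indexed by country name and that years holds the 1980 - 2013 column labels used earlier in this notebook):
import seaborn as sns
df_dsn = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
df_total = pd.DataFrame(df_dsn.sum(axis=1))  # total of the three countries per year
df_total.reset_index(inplace=True)
df_total.columns = ['year', 'total']
df_total['year'] = df_total['year'].astype(int)
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
ax = sns.regplot(x='year', y='total', data=df_total, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration from Denmark, Norway, and Sweden to Canada from 1980 - 2013')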
11,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
interference and compensate for subject head movements.
See maxwell for more details.
Step1: Set parameters
Step2: Preprocess with Maxwell filtering
Step3: Select events to extract epochs from, pick M/EEG channels, and plot evoked | Python Code:
import mne
from mne.preprocessing import maxwell_filter
data_path = mne.datasets.sample.data_path()
Explanation: Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
interference and compensate for subject head movements.
See maxwell for more details.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
Explanation: Set parameters
End of explanation
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
Explanation: Preprocess with Maxwell filtering
End of explanation
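# For noisier recordings one would typically also enable temporal SSS (tSSS) by
# passing a buffer length via st_duration; a minimal sketch (the 10 s buffer is
# only an illustrative choice, not a value used in this tutorial):
raw_tsss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname,
                          st_duration=10.)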
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6))
evoked = epochs.average()
evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250),
mag=(-600, 700)), time_unit='s')
Explanation: Select events to extract epochs from, pick M/EEG channels, and plot evoked
End of explanation |
11,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
initialize the Cosmological models
Step1: Define proxy modelling
Using a mass proxy, we define the probability of observing a proxy given a mass and redshift
$$
P(\log\lambda|M,z) = N(\mu(M,z), \sigma^2(M,z))
$$
the mean is
$$
\mu(M,z) = \mu_0 + a_\mu^M\log_{10}\frac{M}{M_0} + a_\mu^z\log_{10}\frac{1+z}{1+z_0}
$$
variance is
$$
\sigma(M,z) = \sigma_0 + a_\sigma^M\log_{10}\frac{M}{M_0} + a_\sigma ^z\log_{10}\frac{1+z}{1+z_0}
$$
Step2: initialize the ClusterAbundance object
Step3: Creating a DataClusterNCount and initializing it using resample
Here we are using the seed 1234.
Step4: Binning data | Python Code:
#CCL cosmology
cosmo_ccl = ccl.Cosmology(Omega_c = 0.30711 - 0.048254, Omega_b = 0.048254, h = 0.677, sigma8 = 0.8822714165197718, n_s=0.96, Omega_k = 0, transfer_function='eisenstein_hu')
#ccl_cosmo_set_high_prec (cosmo_ccl)
cosmo_numcosmo, dist, ps_lin, ps_nln, hmfunc = create_nc_obj (cosmo_ccl)
psf = hmfunc.peek_psf ()
Explanation: initialize the Cosmological models
End of explanation
#CosmoSim_proxy model
#M_0, z_0
theta_pivot = [3e14/0.71, 0.6]
#\mu_0, a_\mu^z, a_\mu^M
theta_mu = [3.19, -0.7, 2]
#\sigma_0, a_\sigma^z, a_\sigma^M
theta_sigma = [0.33, 0.,-0.08]
area = (0.25)*4*np.pi / 100.0
lnRl = 1.0
lnRu = 2.0
zl = 0.25
zu = 1.0
#Numcosmo_proxy model
cluster_z = nc.ClusterRedshift.new_from_name("NcClusterRedshiftNodist{'z-min': <%20.15e>, 'z-max':<%20.15e>}" % (zl, zu))
cluster_m = nc.ClusterMass.new_from_name("NcClusterMassAscaso{'M0':<%20.15e>,'z0':<%20.15e>,'lnRichness-min':<%20.15e>, 'lnRichness-max':<%20.15e>}" % (3e14/(0.71),0.6, lnRl, lnRu))
cluster_m.param_set_by_name('mup0', 3.19)
cluster_m.param_set_by_name('mup1', 2/np.log(10))
cluster_m.param_set_by_name('mup2', -0.7/np.log(10))
cluster_m.param_set_by_name('sigmap0', 0.33)
cluster_m.param_set_by_name('sigmap1', -0.08/np.log(10))
cluster_m.param_set_by_name('sigmap2', 0/np.log(10))
Explanation: Define proxy modelling
Using a mass proxy, we define the probability of observing a proxy given a mass and redshift
$$
P(\log\lambda|M,z) = N(\mu(M,z), \sigma^2(M,z))
$$
the mean is
$$
\mu(M,z) = \mu_0 + a_\mu^M\log_{10}\frac{M}{M_0} + a_\mu^z\log_{10}\frac{1+z}{1+z_0}
$$
variance is
$$
\sigma(M,z) = \sigma_0 + a_\sigma^M\log_{10}\frac{M}{M_0} + a_\sigma ^z\log_{10}\frac{1+z}{1+z_0}
$$
End of explanation
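# Plain-numpy sketch of the proxy mean and scatter defined above, reusing the
# pivot and slope values passed to cluster_m (the test mass/redshift below are
# arbitrary illustrative inputs, not values from this notebook).
M0, z0 = 3e14 / 0.71, 0.6
mu0, a_mu_z, a_mu_M = 3.19, -0.7, 2.0
sig0, a_sig_z, a_sig_M = 0.33, 0.0, -0.08
def proxy_mean(M, z):
    return mu0 + a_mu_M * np.log10(M / M0) + a_mu_z * np.log10((1 + z) / (1 + z0))
def proxy_scatter(M, z):
    return sig0 + a_sig_M * np.log10(M / M0) + a_sig_z * np.log10((1 + z) / (1 + z0))
print(proxy_mean(1e14, 0.4), proxy_scatter(1e14, 0.4))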
#Numcosmo Cluster Abundance
#First we need to define the multiplicity function; here we will use the Tinker multiplicity function
mulf = nc.MultiplicityFuncTinker.new()
mulf.set_linear_interp (True)
mulf.set_mdef(nc.MultiplicityFuncMassDef.CRITICAL)
mulf.set_Delta(200)
#Second we need to construct a filtered power spectrum
hmf = nc.HaloMassFunction.new(dist,psf,mulf)
hmf.set_area(area)
ca = nc.ClusterAbundance.new(hmf,None)
mset = ncm.MSet.new_array([cosmo_numcosmo,cluster_m,cluster_z])
Explanation: initialize the ClusterAbundance object
End of explanation
ncount = Nc.DataClusterNCount.new (ca, "NcClusterRedshiftNodist", "NcClusterMassAscaso")
ca.prepare(cosmo_numcosmo,cluster_z,cluster_m)
mset = Ncm.MSet.new_array ([cosmo_numcosmo, cluster_z, cluster_m])
rng = Ncm.RNG.seeded_new (None, 1234)
ncount.init_from_sampling (mset, area, rng)
print (ncount.desc)
Explanation: Creating a DataClusterNCount and initializing it using resample
Here we are using the seed 1234.
End of explanation
#Bins definition
z_edges = np.linspace(zl, zu, 5)
proxy_edges = np.linspace(lnRl, lnRu, 4)
ncount.del_bins ()
for rich_l, rich_u in zip(proxy_edges, proxy_edges[1:]):
for zl, zu in zip(z_edges,z_edges[1:]):
lnM_obs_lb = Ncm.Vector.new_array ([rich_l])
lnM_obs_ub = Ncm.Vector.new_array ([rich_u])
z_obs_lb = Ncm.Vector.new_array ([zl])
z_obs_ub = Ncm.Vector.new_array ([zu])
ncount.add_bin (lnM_obs_lb, lnM_obs_ub, z_obs_lb, z_obs_ub)
ncount.bin_data ()
ncount.catalog_save ("ncount_ascaso.fits", True)
ncount.catalog_load ("ncount_ascaso.fits")
Explanation: Binning data
End of explanation |
11,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Limpieza de datos sobre edificios con certificacion LEED
1. Introduccion
EL United States Green Building Council (USGBG) tiene una base de datos de edificios que cuentan con certificación LEED alrededor del mundo. La pagina web de USGBG cuenta con una interfaz para hacer consultas directamente a su base de datos, sin embargo no cuenta con una API o una URL directa para descarga masiva por lo que es necesario enviar el query a la base de datos desde la URL de USBG
Step1: El archivo tal como se descarga, a pesar de ser tabulados de excel, envia un mensaje de error cuando se intenta abrir directamente como se descargó. Por lo tanto, antes de procesarlo es necesario abrirlo en excel y guardarlo con formato .xlsx
Step2: La base de datos es un listado de edificios que incluye para cada edificio
Step3: Todos los datos son listas, pero "street" tiene 2 elementos. Entonces para el script lo que voy a hacer será eliminar todos los saltos de linea y concatenar el texto de todos los elementos de la lista
Step4: Reemplaza los enters en cada lista
(Voy a saltarme este paso porque lo que me interesa en realidad es el Codigo Postal, pero dejo el codigo por si lo ocupo en el futuro)
def listtotext(x)
Step5: A partir de los Codigos Postales ya es posible identificar la ciudad y municipio a la que pertenece cada edificio. Para esto, vamos a utilizar la base de datos de codigos postales del SEPOMEX que se descargó en otra minería de datos
Step6: Con la base de datos del SEPOMEX ya es posible unir ambos datasets para obtener las claves municipales de cada edificio
Step7: Quedan 70 filas en donde no fue posible identificar la clave Municipal
Step8: Casos Particulares
Estos 70 registros tienen 33 claves unicas de C que requieren ser asignadas individualmente para conocer la CVE_MUN de cada edificio. Para esto, haremos un script que permita revisar cada clave para realizar la investigación necesaria y asignarle una CVE_MUN
Step9: En el siguiente diccionario recopila las CVE_MUN que se asignarán a los códigos postales que requieren asignacion individual. Los códigos cuyo valor es None se asignarán mas adelante
Step10: El siguiente diccionario incluye códigos postales que requieren ser corregidos
Step11: Asignacion de codigos postales
Step12: Algunos edificios, marcados con los codigos postales 00000 y 12345 (Intuyo que por desidia del capturista) se tendrán que asignar individualmente
Step13: El dataset contiene dos edificios en el dataset que no corresponden a México
Step14: Se eliminarán del dataset los siguientes edificios
Step15: El primero por estar repetido y el resto por que no están en los Estados Unidos Mexicanos.
Step16: Los edificios que requieren correccion de codigos postales son los siguientes | Python Code:
# Librerias utilizadas
import pandas as pd
import sys
import os
import csv
from lxml import html
import requests
import time
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
Explanation: Cleaning data on LEED-certified buildings
1. Introduction
The United States Green Building Council (USGBC) maintains a database of LEED-certified buildings around the world. The USGBC website offers an interface for querying the database directly, but it has no API or direct bulk-download URL, so the query has to be sent to the database through the USGBC URL:
https://www.usgbc.org/projects/list?page=17&keys=Mexico
After the database finishes processing the query, it returns the file "leed_projects.xls", which is saved as "D:\PCCS\00_RawData\01_CSV\LEED\leed_projects.xls"
2. Standardizing the dataset
End of explanation
path = r'D:\PCCS\00_RawData\01_CSV\LEED\leed_projects.xlsx'
raw_data = pd.read_excel(path)
raw_data.index.name = 'Building'
raw_data.head()
# Eliminar columnas que no pertenecen a Mexico
print('La tabla tiene {} registros'.format(len(raw_data)))
x = 'United States [us]'
raw_data = raw_data[raw_data['Country'] != x]
print('Quitando los registros donde el país es "{}", la tabla queda con {} registros'.format(x, len(raw_data)))
x = 'Colombia'
raw_data = raw_data[raw_data['Country'] != x]
print('Quitando los registros donde el país es "{}", la tabla queda con {} registros'.format(x, len(raw_data)))
x = 'United States'
raw_data = raw_data[raw_data['Country'] != x]
print('Quitando los registros donde el país es "{}", la tabla queda con {} registros'.format(x, len(raw_data)))
raw_data.head()
Explanation: The file as downloaded, despite being Excel tabular data, raises an error when opened directly. Before processing it, it therefore has to be opened in Excel and re-saved in .xlsx format
End of explanation
# Descarga el HTML de la pagina
page = requests.get('https://www.usgbc.org/projects/reforma-180')
tree = html.fromstring(page.content)
# Obten variables desde la estructura
street = tree.xpath('//span[@itemprop="streetAddress"]/text()')
locality = tree.xpath('//span[@itemprop="addressLocality"]/text()')
postalcode = tree.xpath('//span[@itemprop="postalCode"]/text()')
country = tree.xpath('//span[@itemprop="addressCountry"]/text()')
''.join(street).replace('\n', '')
# A ver, que datos sacaste?
print('len({}), type({}) - {}'.format(len(street), type(street), street))
print('len({}), type({}) - {}'.format(len(locality), type(locality), locality))
print('len({}), type({}) - {}'.format(len(postalcode), type(postalcode), postalcode))
print('len({}), type({}) - {}'.format(len(country), type(country), country))
Explanation: The database is a list of buildings that includes, for each building:
- The building name
- A reference URL
- The date the building was certified
- The city, state, and country where the building is located
- The rating system under which the building was certified
- The version of the certification
- The certification level the building achieved.
Because the city, state, and country columns do not follow any standard, each row needs to be assigned the 5-digit municipal geostatistical key for the municipality where the building is located.
This has to be done manually, since each row has to be interpreted individually.
During the review I noticed that, although the table has no key identifying each city and municipality, each building's link leads to a building page that usually does contain a postal code, and from the postal code it is possible to obtain the municipality and the state.
Below, one page is inspected to learn its structure and scrape it from that structure, hoping the structure is the same across all the building pages:
End of explanation
# Script para extraer datos de fichas a partir de la URL
def webcrawler(x):
time.sleep(0.05)
url = x
try:
page = requests.get(x)
tree = html.fromstring(page.content)
except: # Regresa false si no logras entrar a la URL
street = False
locality = False
postalcode = False
country = False
return [street, locality, postalcode, country]
# Saca los datos del tree. Regresa None si no encontraste
try:
street = ''.join(tree.xpath('//span[@itemprop="streetAddress"]/text()'))
except:
street = None
try:
locality = tree.xpath('//span[@itemprop="addressLocality"]/text()')
except:
locality = None
try:
postalcode = tree.xpath('//span[@itemprop="postalCode"]/text()')
except:
postalcode = None
try:
country = tree.xpath('//span[@itemprop="addressCountry"]/text()')
except:
country = None
return [street, locality, postalcode, country]
# Pon al crawler a hacer su chamba (Pero no si el archivo ya existe)
archivoraw = r'D:\PCCS\00_RawData\01_CSV\LEED\crawl_leed.xlsx'
if os.path.isfile(archivoraw):
print('NO SE REALIZÓ EL WEBCRAWL PORQUE YA SE TIENEN LOS DATOS EN \n {}'.format(archivoraw))
print('*** Mejor importa el archivo para no gastar tantos recursos ***')
else:
raw_data['crawl'] = raw_data.Path.apply(webcrawler)
Explanation: All of the scraped fields are lists, but "street" has 2 elements. So in the script I will remove all the line breaks and concatenate the text of every element in the list
End of explanation
raw_data.head()
# Guarda una copia de raw_data por si es necesario ocupar este dataset de nuevo,
# que no se tenga que hacer nuevamente el webcrawiling porque consume mucho tiempo
writer = pd.ExcelWriter(archivoraw)
raw_data.to_excel(writer, sheet_name = 'DATOS')
writer.save()
# Crea una copia de trabajo de raw_data
datasetfinal = raw_data
# Crea una columna única con los datos de dirección y código postal extraídos con el crawler.
datasetfinal['address'] = datasetfinal.crawl.apply(lambda x: x[0].replace('\n', ''))
# raw_data['city'] = raw_data.crawl.apply(lambda x: x[1][0].replace('/n', ''))
datasetfinal['CP'] = datasetfinal.crawl.apply(lambda x: str(x[2][0]))
# raw_data['city'] = raw_data.crawl.apply(lambda x: x[3][0].replace('/n', ''))
datasetfinal.head(2)
Explanation: Replace the newline characters in each list
(I am going to skip this step because what I actually care about is the postal code, but I leave the code here in case I need it later)
def listtotext(x):
templist = []
for element in x:
if element == None or element == False:
templist.append(element)
else:
templist.append(''.join(x).replace('\n', ''))
return templist
End of explanation
bd_sepo = r'D:\PCCS\01_Dmine\Datasets\SEPOMEX\sepomex_CP_CVEMUN.xlsx'
SEPOMEX = pd.read_excel(bd_sepo, dtype={'CVE_MUN':'str', 'CP':'str'})
SEPOMEX.head(3)
Explanation: From the postal codes it is now possible to identify the city and municipality each building belongs to. For this we will use the SEPOMEX postal-code database that was downloaded in another data-mining notebook
End of explanation
datasetfinal.head()
# Copiar CVE_MUN del dataset en base al codigo postal
datasetfinal = datasetfinal.reset_index().merge(SEPOMEX, on='CP', how='left').set_index('Building')
datasetfinal.head()
Explanation: With the SEPOMEX database it is now possible to join both datasets and obtain the municipal keys for each building
End of explanation
len(datasetfinal[datasetfinal['CVE_MUN'].isnull()])
Explanation: 70 rows remain for which it was not possible to identify the municipal key
End of explanation
mira = ['City', 'State', 'CP', 'address', 'CVE_MUN'] # El diccionario 'mira' se utilizará en adelante para imprimir subsets de la informacion
sinmun = datasetfinal[datasetfinal['CVE_MUN'].isnull()][mira]
sinmun.head()
len(sinmun['CP'].unique())
Explanation: Special cases
These 70 records have 33 unique postal codes (CP) that need to be assigned individually in order to determine the CVE_MUN of each building. To do this, we will write a script that lets us review each code, do the necessary research, and assign it a CVE_MUN
End of explanation
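# A possible helper for the manual review mentioned above (a sketch): list the
# rows behind each unassigned postal code so each one can be researched by hand.
for cp in sorted(sinmun['CP'].unique()):
    print('CP:', cp)
    print(sinmun[sinmun['CP'] == cp][['City', 'State', 'address']])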
# Diccionario creado en donde key = 'CP' y value = 'CVE_MUN'
defmuns = {'00000': None,
'00100': '09010',
'00502': '15024',
'00604': '15121',
'00702': '15051',
'01006': '09010',
'01152': '09010',
'01209': '09004',
'01300': '09004',
'03130': '09014',
'03210': '09014',
'05300': '09004',
'05490': '15104',
'05940': '15013',
'08424': '14094',
'11010': '09016',
'11111': '14098',
'11570': '09016',
'12345': None,
'21118': '02002',
'22320': '02004',
'23410': '03008',
'23479': '03008',
'31240': '08019',
'46685': '14006',
'48219': '16053',
'56277': '15099',
'66601': '19006',
'67114': '19026',
'76232': '22014',
'77780': '23009',
'78341': '24028',
'87131': None}
Explanation: The following dictionary collects the CVE_MUN values that will be assigned to the postal codes that require individual assignment. The codes whose value is None will be assigned further below
End of explanation
# Diccionario en donde key = Codigo postal listado en el dataset; value = Codigo postal correcto
deberiaser = {'00100': '45620',
'00502': '54830',
'00604': '54713',
'00702': '52004',
'03130': '03103',
'11111': '45620',
'48219': '58218'}
Explanation: The following dictionary contains postal codes that need to be corrected
End of explanation
# Reemplazar las CVE_MUN identificadas en el dataset final
datasetfinal['CVE_MUN'] = datasetfinal['CP'].map(defmuns).fillna(datasetfinal['CVE_MUN'])
Explanation: Assigning postal codes
End of explanation
sinmun.loc[sinmun['CP'].isin(['00000', '12345'])]
# Diccionario con edificios que se asignaran individualmente
# Para este diccionario key = Nombre del edificio, value = CVE_MUN que se asignará a este edificio
buildings = {
'Grainger Mexico HQ': '19039',
'La Concha Pearl': '03003',
#'Schneider Electric at COK': '66629', # Este edificio esta repetido, por lo que no se le asignará nada y se eliminará al final
'Bank of America-Reforma 115 5th floor': '09016',
'Vesta Corporate Headquarters': '09016',
'Air Traffic Control Tower': '15101', # Estoy considerando que esta es la Torre de Control del NAICM
'Passenger Terminal Building': '15101', # El edificio del NAICM
'Area Control Center': '15101', # Infraestructura del NAICM
'Corporativo TRIO': '09004',
'Casa GF': '19019',
'Eurocenter 2': '09004',
'ROUZ TOWER': '09014',
'Periferico Sur Parque Industrial': '14098'
}
# Hay un edificio duplicado. El duplicado se eliminará mas adelante
datasetfinal.loc['Schneider Electric at COK'][mira]
# Reemplazar valores individuales en el dataset.
for k, v in buildings.items():
building = datasetfinal.loc[k].name
CVEMUN_prev = datasetfinal.loc[k]['CVE_MUN']
datasetfinal.at[k, 'CVE_MUN'] = v
print('Edificio:{} - la CVE_MUN {} se reemplazó por {}'.format(building, CVEMUN_prev, datasetfinal.at[k, 'CVE_MUN']))
Explanation: Some buildings, flagged with the postal codes 00000 and 12345 (I suspect out of carelessness on the part of whoever captured the data), will have to be assigned individually
End of explanation
sinmun[sinmun['CP'] == '87131']
Explanation: The dataset contains two buildings that do not belong to Mexico:
End of explanation
datasetfinal[datasetfinal['CVE_MUN'].isnull()][mira]
Explanation: The following buildings will be removed from the dataset:
End of explanation
datasetfinal = datasetfinal.dropna(subset=['CVE_MUN'])
datasetfinal.head(3)
Explanation: The first one because it is duplicated, and the rest because they are not located in the United Mexican States.
End of explanation
datasetfinal[datasetfinal['CP'].isin(list(deberiaser.keys()))][mira]
# Corregir codigos postales erróneos
datasetfinal['CP'] = datasetfinal['CP'].map(deberiaser).fillna(datasetfinal['CP'])
datasetfinal[mira].head()
# Renombrar columnas para crear variables únicas
columns={
'address':'direccion',
'Path': 'URL',
'Certification date': 'usgbc_fecha_cert',
'Rating system':'usgbc_sis_val',
'Version': 'usgbc_ver_sisv',
'Certification level': 'usgbc_nv_cert',
}
datasetfinal = datasetfinal.rename(columns=columns)
datasetfinal.head(2)
# Descripciones de columnas
variables = {
'direccion': 'Ubicacion (Calle y numero)',
'CVE_MUN': 'Clave geoestadística de 5 digitos a nivel municipal, de acuerdo con el Catálogo Único de Claves de Áreas Geoestadísticas Estatales, Municipales y Localidades de INEGI',
'usgbc_fecha_cert': 'Fecha de certificacion como edificio LEED por el United States Green Building Council',
'usgbc_sis_val': 'Sistema de valoracion aplicado por el United States Green Building Council al edificio',
'usgbc_ver_sisv': 'Version del Sistema de valoracion aplicado por el United States Green Building Council al edificio',
'usgbc_nv_cert': 'Nivel de certificacion como edificio LEED alcanzado por el edificio',
'CP': 'Codigo Postal',
'URL': 'Uniform Resource Locator, referencia a recurso en línea'
}
# Convertir descripciones a dataframe
variables = pd.DataFrame.from_dict(variables, orient='index', dtype=None)
variables.columns = ['Descripcion']
variables = variables.rename_axis('Mnemonico')
variables.head()
# Eliminar columnas que ya no se utilizarán y reordenar
setfinal = [
'direccion',
'CVE_MUN',
'usgbc_fecha_cert',
'usgbc_sis_val',
'usgbc_ver_sisv',
'usgbc_nv_cert',
'CP',
'URL']
datasetfinal = datasetfinal[setfinal]
datasetfinal.head()
metadatos = {
'Nombre del Dataset': 'Edificios con Certificación LEED',
'Descripcion del dataset': 'Edificios que han recibido algún nivel de certificación de Liderazgo en Energía y desarrollo Ambiental' \
' (LEED, por sus siglas en ingles) Otorgado por el Consejo de edificios Verdes de Estados Unidos (USGBC' \
' por sus suglas en inglés)',
'Disponibilidad Temporal': '2007 - 2018',
'Periodo de actualizacion': 'No Definido',
'Nivel de Desagregacion': 'Edificio',
'Notas': 's/n',
'Fuente': 'United States Green Buildings Council',
'URL_Fuente': 'https://www.usgbc.org/projects/list?page=17&keys=Mexico',
'Dataset base': None
}
# Metadatos a dataframe para exportar
metadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None)
metadatos.columns = ['Descripcion']
metadatos = metadatos.rename_axis('Metadato')
metadatos
# Guardar el dataset
file = r'D:\PCCS\01_Dmine\Datasets\LEED\PCCS_leed_projects.xlsx'
writer = pd.ExcelWriter(file)
datasetfinal.to_excel(writer, sheet_name = 'DATOS')
metadatos.to_excel(writer, sheet_name = 'METADATOS')
variables.to_excel(writer, sheet_name = 'VARIABLES')
writer.save()
print('---------------TERMINADO---------------')
Explanation: The buildings whose postal codes need to be corrected are the following:
End of explanation |
11,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ronica Reddick
&
Nick Pulito
in association with
"Those Data Bootcamp Guys" -- Professors Backus and Coleman
present
"3 Guys Named Chris"
Scene 1
Step1: Scene 2
Step2: Chris Hemsworth
“The Heartthrob”
Age
Step3: Our data looks good! The axes are a little strange, but we just want to make sure we have data we can work with!
Chris Pratt
“The Everyman”
Age
Step4: Now that we've got that sorted out, let's take a look at all three Chrises together. How do their box office titles stack up with one another over time?
Step5: In the graph above, we color coded our Chris contingency as follows
Step6: In the above, we take a look at the box office grosses for the top 10 films for each Chris. Here, we start to wonder if maybe Evans has a more consistent box office performance. Of his top 10 films, 9 are in the $200 million range, a stat unmatched by our other two gentlemen.
This is an interesting insight, but what does it look like over time?
Step7: Buoyed by franchise films in the last five years, Chris Evans has been a steady player, but hasn't excelled outside the Marvel universe franchises. All his biggest hits are as a member of a franchise / ensemble. Evans's Marvel hits since 2011 have performed well, though non-Marvel titles have largely been blips on the radar.
Step8: Hemsworth had a very rough 2015. He featured prominently in 4 films, only one of which was a box office success (another Marvel Avengers installment). After a breakout 2012, are the tides turning after major flops like In the Heart of the Sea?
Step9: Pratt may have been a slower starter than our other leading gentlemen, but his 2014 breakout Guardians of the Galaxy cemented his status as leading man potential, and 2015's Jurassic World broke tons of box office records. As a non-Marvel film (though a franchise reboot), Jurassic World is unique in that it may be a standalone hit for Pratt, and everyone will be closely watching his box office performance in whatever leading man project he chooses next.
Step10: We love this data cut. Here, we take a comparative look of our Chrises over time. Keeping our colors consistent, Evans is purple, Hemsworth is red, Pratt is orange.
One slight issue; movies where both Hemsworth and Evans were cast (Avengers) -- the graph chooses just one color. Here's a flipped view
Step11: Whoa! Where did Hemsworth go?
What these two cuts show us is that Evans and Hemsworth are both heavily reliant on their Marvel franchise hits, where they are sharing the limelight, whereas Pratt has been more of a solo vehicle, especially in more recent years.
Scene 3
Step12: Arnold Schwarzenegger
Heyday
Step13: Tom Cruise
Heyday
Step14: Scene 4
Step15: LEGEND | Python Code:
#This guided coding exercise requires associated .csv files: CE1.csv, CH1.csv, CP1.csv, Arnold1.csv, Bruce1.csv, and Tom1.csv
#make sure you have these supplemental materials ready to go in your active directory before proceeding
#Let's start coding! We first need to make sure our preliminary packages are in order. We imported the following...
#some may have ended up superfluous, but we figured it was better to cover our bases!
import pandas as pd
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import sys
import os
import datetime as dt
import csv
import requests, io
from bs4 import BeautifulSoup
%matplotlib inline
print('\nPython version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Requests version: ', requests.__version__)
print("Today's date:", dt.date.today())
Explanation: Ronica Reddick
&
Nick Pulito
in association with
"Those Data Bootcamp Guys" -- Professors Backus and Coleman
present
"3 Guys Named Chris"
Scene 1: "The Set-Up"
Hollywood hunks come and go, but every so often a star builds a lasting career out of blowing stuff up. Currently, there is no shortage of beef cake on the silver screen with Chris Evans, Chris Hemsworth, and Chris Pratt all regularly starring in blockbuster films. There is no denying the bankability of the Chrises, but which Chris has staying power?
The now defunct Grantland podcast had a “market correction” theory they applied to Hollywood actors. The idea is that there’s only room in the market for one A-list celebrity of a particular type and that over time the market will choose its favorite. The hosts would compare two Hollywood actors with similar “types” and predict which one would still have a career in 20 years.
Using data from Box Office Mojo we decided to test the market correction theory on the Chrises by comparing the box office numbers of their biggest hits to those of heroes from the days of yore: Tom Cruise, Arnold Schwarzenegger, and Bruce Willis. We were looking for patterns in the box office receipts of the old guard that may shed some light on which Chris will be on top in 2035, and to see if any of the box office heroes of yesteryear had a little more staying power than the others.
End of explanation
# data scraped from Box Office Mojo, the authoritative source for Hollywood Box Office Data
# chris evans
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrisevans.htm'
evans = pd.read_html(url)
print('Output has type', type(evans), 'and length', len(evans))
print('First element has type', type(evans[0]))
#we have a list of dataframes, and the cut of data we want is represented by the below
evans[2]
ce=evans[2]
print("type=", type(ce)," ", "length=", len(ce), "shape=", ce.shape)
print(ce)
ce.to_csv("ce.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#removed indices; cleaned titles; cleaned date
#Clean File saved as CE1.csv
#this is the path for my machine; you'll have to link to the CE1.csv file that you've saved on your machine
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CE1.csv'
CE = pd.read_csv(path)
print(type(CE), "shape is", CE.shape, "types:", CE.dtypes)
print(CE) #this is going to be much better for us to work with
#this looks good! let's test and make sure the data makes sense with a simple plot:
CE.plot.scatter('Release Year', 'Adjusted Gross')
#we love what we see, let's repeat it for our other leading gentlemen
Explanation: Scene 2: "The Chris Contenders"
Methodology
To dive into which Chris will have staying power in years to come, we looked to authoritative Hollywood Data Source BoxOfficeMojo.com. A bit of simple webscraping gave us film titles broken out by actor, with adjusted box office revenues in tow.
We wanted to aggregate data for our "Three Chrises" and compare it to 3 Hollywood legends who have had variable staying power over the years: Bruce Willis, Tom Cruise, and Arnold Schwarzenegger.
Digging up data on our leading gentlemen
Cells that follow show our process for scraping and organizing the data for the Chris contenders.
Chris Evans
“The All American Hero”
Age: 34
Height: 6’
Known for: Captain America ($267,656,500); The Avengers; Fantastic Four
Legit Roles: Snowpiercer
Biggest Hit: Marvel’s The Avengers $659,640,800
End of explanation
# same process for our second leading Chris
# chris hemsworth
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrishemsworth.htm'
hemsworth = pd.read_html(url)
print('Output has type', type(hemsworth), 'and length', len(hemsworth))
print('First element has type', type(hemsworth[0]))
hemsworth[3]
ch=hemsworth[3]
print("type=", type(ch)," ", "length=", len(ch), "shape=", ch.shape)
print(ch)
ch.to_csv("ch.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#Cleaned File saved as CH1.csv
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CH1.csv'
#again, this is the path on my machine, you'll want to make sure you adjust to wherever you saved down CH1
CH = pd.read_csv(path)
print(type(CH), "shape is", CH.shape, "types:", CH.dtypes)
CH.plot.scatter('Release Year', 'Adjusted Gross')
Explanation: Chris Hemsworth
“The Heartthrob”
Age: 32
Height: 6’ 3”
Known for: Thor; The Avengers; Snow White and the Huntsman
Legit Roles: Rush
Biggest Hit: Marvel’s The Avengers $659,640,800
Biggest Thor Movie: $212,276,600
End of explanation
# Chris number three, coming through!
# chris pratt
url = 'http://www.boxofficemojo.com/people/chart/?view=Actor&id=chrispratt.htm'
pratt = pd.read_html(url)
print('Output has type', type(pratt), 'and length', len(pratt))
print('First element has type', type(pratt[0]))
pratt[3]
cp=pratt[3]
print("type=", type(cp)," ", "length=", len(cp), "shape=", cp.shape)
print(cp)
cp.to_csv("cp.csv")
#since scraped dataset is small, and had a tricky double index, we decided to export to csv and do a quick cleanup there
#Cleaned File saved as CP1.csv
path='C:\\Users\\Nick\\Desktop\\Data_Bootcamp\\Final Project\\CP1.csv'
#remember to adjust path to where you've saved the .csv down
CP = pd.read_csv(path)
print(type(CP), "shape is", CP.shape, "types:", CP.dtypes)
CP.plot.scatter('Release Year', 'Adjusted Gross')
Explanation: Our data looks good! The axes are a little strange, but we just want to make sure we have data we can work with!
Chris Pratt
“The Everyman”
Age: 36
Height: 6’ 2”
Known for: Guardians of the Galaxy ($353,303,500); Jurassic World (1 + one in pre); Parks & Rec (TV)
Legit Roles: Her, Moneyball
Biggest Role: Jurassic World $678,242,100
End of explanation
plt.scatter(CE['Release Year'], CE['Adjusted Gross'],
color="purple")
plt.scatter(CH['Release Year'], CH['Adjusted Gross'],
color="red")
plt.scatter(CP['Release Year'], CP['Adjusted Gross'],
color="orange")
plt.title('Chris Film Box Office Share Over Time')
Explanation: Now that we've got that sorted out, let's take a look at all three Chrises together. How do their box office titles stack up with one another over time?
End of explanation
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True, sharey=True)
CE['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[0], color='purple', title="Evans")
CH['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[1], color='red', title="Hemsworth")
CP['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[2], color='orange', title="Pratt")
Explanation: In the graph above, we color coded our Chris contingency as follows:
Chris Evans: Purple
Chris Hemsworth: Red
Chris Pratt: Orange
A few things stand out. First, we can see right away that Chris Evans has, to date, had the longest career at the box office, dating back to 2001. Does this maybe suggest some longevity right off the bat? We're not so quick to draw that conclusion, especially since his biggest box office hit is shared with Chris Hemsworth in the Marvel Avengers movie.
Looking back at our raw data, we can also note that Pratt seems to have had the biggest breakout hit with his 2015 with Jurassic World, one of the top grossing films of all time, where he was the sole leading man.
This data gives us one view, but what other cuts might we want to look at?
End of explanation
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='pink')
plt.title('Chris Evans')
Explanation: In the above, we take a look at the box office grosses for the top 10 films for each Chris. Here, we start to wonder if maybe Evans has a more consistent box office performance. Of his top 10 films, 9 are in the $200 million range, a stat unmatched by our other two gentlemen.
This is an interesting insight, but what does it look like over time?
End of explanation
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.title("Chris Hemsworth")
Explanation: Buoyed by franchise films in the last five years, Chris Evans has been a steady player, but hasn't excelled outside the Marvel universe franchises. All his biggest hits are as a member of a franchise / ensemble. Evans's Marvel hits since 2011 have performed well, though non-Marvel titles have largely been blips on the radar.
End of explanation
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title("Chris Pratt")
Explanation: Hemsworth had a very rough 2015. He featured prominently in 4 films, only one of which was a box office success (another Marvel Avengers installment). After a breakout 2012, are the tides turning after major flops like In the Heart of the Sea?
End of explanation
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
Explanation: Pratt may have been a slower starter than our other leading gentlemen, but his 2014 breakout Guardians of the Galaxy cemented his status as leading man potential, and 2015's Jurassic World broke tons of box office records. As a non-Marvel film (though a franchise reboot), Jurassic World is unique in that it may be a standalone hit for Pratt, and everyone will be closely watching his box office performance in whatever leading man project he chooses next.
End of explanation
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
Explanation: We love this data cut. Here, we take a comparative look of our Chrises over time. Keeping our colors consistent, Evans is purple, Hemsworth is red, Pratt is orange.
One slight issue; movies where both Hemsworth and Evans were cast (Avengers) -- the graph chooses just one color. Here's a flipped view:
End of explanation
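# A possible workaround for the overlap noted above (a sketch): draw the bars
# semi-transparently so that years shared by two Chrises show both colors.
plt.bar(CE['Release Year'], CE['Adjusted Gross'], align='center', color='purple', alpha=0.5)
plt.bar(CH['Release Year'], CH['Adjusted Gross'], align='center', color='red', alpha=0.5)
plt.bar(CP['Release Year'], CP['Adjusted Gross'], align='center', color='orange', alpha=0.5)
plt.title('Chris Film Box Office Share Over Time (overlaid with transparency)')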
#Movie scraping and data arranging like we did before
#Bruce Willis
url = 'http://www.boxofficemojo.com/people/chart/?id=brucewillis.htm'
willis = pd.read_html(url)
print('Output has type', type(willis), 'and length', len(willis))
print('First element has type', type(willis[0]))
willis[2]
bruce=willis[2]
bruce.to_csv("Bruce.csv") #Converting dataframe into a csv file
#editing and cleaning as needed, resaved as Bruce1.csv
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Bruce1.csv'
BWillis = pd.read_csv(path)
print(type(BWillis), BWillis.shape, BWillis.dtypes)
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
BWillis.plot.scatter('Release Year', 'Adjusted Gross')
#That's a lot of films! Let's narrow:
BW=BWillis.head(11)
print(BW)
#we'll come back to this later, but let's get our other leading men in the frame!
Explanation: Whoa! Where did Hemsworth go?
What these two cuts show us is that Evans and Hemsworth are both heavily reliant on their Marvel franchise hits, where they are sharing the limelight, whereas Pratt has been more of a solo vehicle, especially in more recent years.
Scene 3: The "OGs"
In order to determine which Chris has staying power we pulled data on Hollywood stars of yore (Bruce Willis, Arnold Schwarzenegger, and Tom Cruise) for comparison. Given the volume of data on the older stars, we isolated the top ten grossing films for each hero.
Bruce Willis
Heyday: The late 80s to the late 90s
Known for: Die Hard franchise
Biggest Movie: The Sixth Sense $494,028,900
Type: Leading Man/Action Hero Hybrid
End of explanation
#here we go again!
#Arnold Schwarzenegger
url = 'http://www.boxofficemojo.com/people/chart/?id=arnoldschwarzenegger.htm'
schwarz = pd.read_html(url)
print('Output has type', type(schwarz), 'and length', len(schwarz))
print('First element has type', type(schwarz[0]))
schwarz[2]
arnold=schwarz[2]
print("type=", type(arnold)," ", "length=", len(arnold))
arnold.shape
print(arnold)
arnold.to_csv("Arnold.csv")
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Arnold1.csv'
ASchwarz = pd.read_csv(path)
print(type(ASchwarz), ASchwarz.shape, ASchwarz.dtypes)
print(ASchwarz)
ASchwarz.plot.scatter('Release Year', 'Adjusted Gross')
#let's scale back sample size again
AS=ASchwarz.head(11)
#we'll use this soon
Explanation: Arnold Schwarzenegger
Heyday: Mid 80s to the mid 90s
Known for: the Terminator franchise
Biggest Movie: Terminator 2: Judgement Day $417,471,700
Type: Beefcake w/comedic chops
End of explanation
#last but not least, our data for Tom Cruise
url = 'http://www.boxofficemojo.com/people/chart/?id=tomcruise.htm'
cruise = pd.read_html(url)
print('Output has type', type(cruise), 'and length', len(cruise))
print('First element has type', type(cruise[0]))
cruise[3]
Tom=cruise[3]
Tom.to_csv("Tom.csv")
path='/Users/Nick/Desktop/data_bootcamp/Final Project/Tom1.csv'
TCruise = pd.read_csv(path)
print(type(TCruise), TCruise.shape, TCruise.dtypes)
print(TCruise)
TCruise.plot.scatter('Release Year', 'Adjusted Gross')
#cutting down to the top 10
TC=TCruise.head(11)
Explanation: Tom Cruise
Heyday: Mid 80’s - early aughts
Known for: Mission Impossible franchise
Biggest Movie: Top Gun $412,055,200
Type: Cocky leading man
End of explanation
#All of the old school action stars in one histogram. Representing share of box office cumulatively over time.
plt.bar(TC['Release Year'],
TC['Adjusted Gross'],
align='center',
color='Blue')
plt.bar(BW['Release Year'],
BW['Adjusted Gross'],
align='center',
color='Green')
plt.bar(AS['Release Year'],
AS['Adjusted Gross'],
align='center',
color='Yellow')
plt.title('"OG" Leading Box Office over Time')
Explanation: Scene 4: "The Final Showdown"
End of explanation
#As a reminder, here's what we are comparing against:
fig, ax = plt.subplots(nrows=3, ncols=1, sharex=True, sharey=True)
CE['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[0], color='purple', title="Evans")
CH['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[1], color='red', title="Hemsworth")
CP['Adjusted Gross'].head(10).plot(kind="bar",ax=ax[2], color='orange', title="Pratt")
plt.bar(CE['Release Year'], CE['Adjusted Gross'],
align='center',
color='purple')
plt.bar(CH['Release Year'], CH['Adjusted Gross'],
align='center',
color='red')
plt.bar(CP['Release Year'], CP['Adjusted Gross'],
align='center',
color='orange')
plt.title('Chris Film Box Office Share Over Time')
Explanation: LEGEND:
Tom Cruise = Blue
Bruce Willis = Green
Arnold Schwarzenegger = Yellow
End of explanation |
11,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
property
Python has a great concept called property, which makes the life of an object oriented programmer much simpler. Before defining and going into details of what a property in Python is, let us first build an intuition on why it would be needed in the first place.
Step1: Class with Getter and Setter
Step2: We can see above that new methods get_temperature() and set_temperature() were defined and furthermore, temperature was replaced with _temperature. An underscore (_) at the beginning is used to denote private variables in Python.
Python Way - Property
The pythonic way to deal with the above problem is to use property. Here is how we could have achieved it.
Step3: Deep in Property
Step4: Another example, this time adding a deleter so the property can also be deleted with del | Python Code:
CONST = 10 # some constant
class Weather_balloon():
temp = 222
def convert_temp_to_f(self):
return self.temp * CONST
w = Weather_balloon()
w.temp = 122
print(w.convert_temp_to_f())
class Circle():
area = None
radius = None
def __init__(self, radius):
self.radius = radius
self.area = 3.14*radius*radius
c = Circle(10)
print(c.radius)
print(c.area)
c.area=222
print(c.radius)
print(c.area)
class Circle():
_area = None
_radius = None
def __init__(self, radius):
self.set_radius(radius)
def get_area(self):
return self._area
def get_radius(self):
return self._radius
def set_radius(self, radius):
self._radius = radius
self._area = 3.14*radius*radius
radius = property(get_radius, set_radius)
c = Circle(10)
print(c.radius)
print(c._area)
c.radius = 222
print(c.radius)
print(c._area)
import math
class Circle():
_area = None
_radius = None
def __init__(self, radius):
self.set_radius(radius)
def get_radius(self):
return self._radius
def set_radius(self, radius):
self._radius = radius
self._area = 3.14*radius*radius
radius = property(get_radius, set_radius)
@property
def area(self):
return self._area
@area.setter
def area(self, area):
self._area = area
self._radius = math.sqrt(self._area)/3.14
c = Circle(10)
print(c.radius)
print(c.area)
print("---")
c.radius=222
print(c.radius)
print(c.area)
c.area=154751
print(c.radius)
print(c.area)
class Celsius:
def __init__(self, temperature = 0):
self.temperature = temperature
def to_fahrenheit(self):
return (self.temperature * 1.8) + 32
man = Celsius()
# set temperature
man.temperature = 37
# get temperature
print(man.temperature)
# get degrees Fahrenheit
print(man.to_fahrenheit())
##### print(Celsius.temperature)
##############
### Riddle ###
##############
class MyClass():
x = 0
y = 100
a = MyClass()
b = MyClass()
a.x = 2
print(id(a.y), id(b.y))
print(id(a.x), id(b.x))
print(b.x)
MyClass.x = 4
print(a.x)
print(b.x)
MyClass.x = 7
print(a.x)
print(b.x)
print("~~~~~~")
b.x = MyClass.y
MyClass.x = 4
print(b.x)
Explanation: property
Python has a great concept called property, which makes the life of an object oriented programmer much simpler. Before defining and going into details of what a property in Python is, let us first build an intuition on why it would be needed in the first place.
End of explanation
class Celsius:
def __init__(self, temperature = 0):
self.set_temperature(temperature)
def to_fahrenheit(self):
return (self.get_temperature() * 1.8) + 32
# new update
def get_temperature(self):
return self._temperature
def set_temperature(self, value):
if value < -273:
raise ValueError("Temperature below -273 is not possible")
self._temperature = value
Explanation: Class with Getter and Setter
End of explanation
class Celsius:
def __init__(self, temperature = 0):
self.temperature = temperature
def to_fahrenheit(self):
return (self.temperature * 1.8) + 32
def get_temperature(self):
print("Getting value")
return self._temperature
def set_temperature(self, value):
if value < -273:
raise ValueError("Temperature below -273 is not possible")
print("Setting value")
self._temperature = value
temperature = property(get_temperature, set_temperature)
man = Celsius()
# set temperature
man.temperature = 137
# get temperature
print(man.temperature)
# get degrees Fahrenheit
print(man.to_fahrenheit())
##### print(Celsius.temperature)
Explanation: We can see above that new methods get_temperature() and set_temperature() were defined and furthermore, temperature was replaced with _temperature. An underscore (_) at the beginning is used to denote private variables in Python.
Python Way - Property
The pythonic way to deal with the above problem is to use property. Here is how we could have achieved it.
End of explanation
### Method 1
temperature = property(get_temperature, set_temperature)
### Method 2
# make empty property
temperature = property()
# assign getter
temperature = temperature.getter(get_temperature)
# assign setter
temperature = temperature.setter(set_temperature)
### Method 3
class Celsius:
def __init__(self, temperature = 0):
self._temperature = temperature
def to_fahrenheit(self):
return (self._temperature * 1.8) + 32
@property
def temperature(self):
print("Getting value")
return self._temperature
@temperature.setter
def temperature(self, value):
if value < -273:
raise ValueError("Temperature below -273 is not possible")
print("Setting value")
self._temperature = value
celc = Celsius()
celc.temperature = 100
print(celc.temperature)
# del(celc.temperature) # Need to explicitly define a deleter
# print(celc.temperature)
Explanation: Deep in Property
End of explanation
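# Side note (sketch): the property object built above also exposes its accessor
# functions, which can be handy for introspection.
print(Celsius.temperature.fget, Celsius.temperature.fset)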
### Method 3
class Celsius:
def __init__(self, temperature = 0):
self._temperature = temperature
def to_fahrenheit(self):
return (self._temperature * 1.8) + 32
@property
def temperature(self):
print("Getting value")
return self._temperature
@temperature.setter
def temperature(self, value):
if value < -273:
raise ValueError("Temperature below -273 is not possible")
print("Setting value")
self._temperature = value
@temperature.deleter
def temperature(self):
print("deleting the property")
del(self._temperature)
celc = Celsius()
celc.temperature = 100
print(celc.temperature)
del(celc.temperature)
print(celc.temperature) # This property is no longer valid thus will error out
Explanation: Another example, this time adding a deleter so the property can also be deleted with del
End of explanation |
11,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of MLE parameter estimation
Parameter estimation for the Bernoulli distribution
You should be able to write out this derivation yourself.
The probability of each trial $x_i$ follows a Bernoulli distribution
$$ P(x | \theta ) = \text{Bern}(x | \theta ) = \theta^x (1 - \theta)^{1-x}$$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Step1: Parameter estimation for the categorical distribution
Assuming the case K = 4 and writing the sums out term by term makes this easier to follow.
Up to this point we are dealing with discrete distributions.
The probability of each trial $x_i$ follows a categorical distribution
$$ P(x | \theta ) = \text{Cat}(x | \theta) = \prod_{k=1}^K \theta_k^{x_k} $$
$$ \sum_{k=1}^K \theta_k = 1 $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Step2: Parameter estimation for the normal distribution
From here on we deal with continuous distributions; only the normal distribution is covered here.
The probability of each trial $x_i$ follows a Gaussian normal distribution
$$ P(x | \theta ) = N(x | \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Step3: Parameter estimation for the multivariate normal distribution
We already covered this one last time.
There is no need to memorize this one.
MLE for Multivariate Gaussian Normal Distribution
The probability of each trial $x_i$ follows a multivariate normal distribution
$$ P(x | \theta ) = N(x | \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right)$$ | Python Code:
theta0 = 0.6
x = sp.stats.bernoulli(theta0).rvs(1000)
N0, N1 = np.bincount(x, minlength=2)
N = N0 + N1
theta = N1 / N
theta
Explanation: Examples of MLE parameter estimation
Parameter estimation for the Bernoulli distribution
You should be able to write out this derivation yourself.
The probability of each trial $x_i$ follows a Bernoulli distribution
$$ P(x | \theta ) = \text{Bern}(x | \theta ) = \theta^x (1 - \theta)^{1-x}$$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Log-Likelihood
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \
&=& \sum_{i=1}^N \big{ {x_i} \log\theta + (1-x_i)\log(1 - \theta) \big} \
&=& \sum_{i=1}^N {x_i} \log\theta + \left( N-\sum_{i=1}^N x_i \right) \log( 1 - \theta ) \
\end{eqnarray}
$$
Since $x = 1$ (success) or $x = 0$ (failure),
the total number of trials is $N$
and the number of successes among them is $N_1 = \sum_{i=1}^N {x_i}$.
The log-likelihood is therefore
$$
\begin{eqnarray}
\log L
&=& N_1 \log\theta + (N-N_1) \log(1 - \theta) \
\end{eqnarray}
$$
Log-Likelihood Derivative
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \theta}
&=& \dfrac{\partial}{\partial \theta} \big{ N_1 \log\theta + (N-N_1) \log(1 - \theta) \big} = 0\
&=& \dfrac{N_1}{\theta} - \dfrac{N-N_1}{1-\theta} = 0 \
\end{eqnarray}
$$
$$
\dfrac{N_1}{\theta} = \dfrac{N-N_1}{1-\theta}
$$
$$
\dfrac{1-\theta}{\theta} = \dfrac{N-N_1}{N_1}
$$
$$
\dfrac{1}{\theta} - 1 = \dfrac{N}{N_1} - 1
$$
$$
\theta= \dfrac{N_1}{N}
$$
End of explanation
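# Quick numerical check (a sketch): a grid search over the log-likelihood derived
# above should peak at the same value as the closed-form estimate N1/N.
grid = np.linspace(0.01, 0.99, 99)
loglik = N1 * np.log(grid) + (N - N1) * np.log(1 - grid)
print(grid[np.argmax(loglik)], N1 / N)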
theta0 = np.array([0.1, 0.3, 0.6])
x = np.random.choice(np.arange(3), 1000, p=theta0)
N0, N1, N2 = np.bincount(x, minlength=3)
N = N0 + N1 + N2  # recompute the sample size for this cell instead of reusing N from the Bernoulli cell
theta = np.array([N0, N1, N2]) / N
theta
Explanation: Parameter estimation for the categorical distribution
Assuming the case K = 4 and writing the sums out term by term makes this easier to follow.
Up to this point we are dealing with discrete distributions.
The probability of each trial $x_i$ follows a categorical distribution
$$ P(x | \theta ) = \text{Cat}(x | \theta) = \prod_{k=1}^K \theta_k^{x_k} $$
$$ \sum_{k=1}^K \theta_k = 1 $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Log-Likelihood
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \
&=& \sum_{i=1}^N \sum_{k=1}^K {x_{i,k}} \log\theta_k \
&=& \sum_{k=1}^K \log\theta_k \sum_{i=1}^N {x_{i,k}}
\end{eqnarray}
$$
Write the number of times outcome $x_k$ occurs as $N_k = \sum_{i=1}^N {x_{i,k}}$.
The log-likelihood is therefore
$$
\begin{eqnarray}
\log L
&=& \sum_{k=1}^K \log\theta_k N_k
\end{eqnarray}
$$
subject to the additional constraint
$$ \sum_{k=1}^K \theta_k = 1 $$
Log-Likelihood Derivative with Lagrange multiplier
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \theta_k}
&=& \dfrac{\partial}{\partial \theta_k} \left{ \sum_{k=1}^K \log\theta_k N_k + \lambda \left(1- \sum_{k=1}^K \theta_k\right) \right} = 0 \
\dfrac{\partial \log L}{\partial \lambda}
&=& \dfrac{\partial}{\partial \lambda} \left{ \sum_{k=1}^K \log\theta_k N_k + \lambda \left(1- \sum_{k=1}^K \theta_k \right) \right} = 0\
\end{eqnarray}
$$
$$
\dfrac{N_1}{\theta_1} = \dfrac{N_2}{\theta_2} = \cdots = \dfrac{N_K}{\theta_K} = \lambda
$$
$$
\sum_{k=1}^K N_k = N
$$
$$
\lambda \sum_{k=1}^K \theta_k = \lambda = N
$$
$$
\theta_k = \dfrac{N_k}{N}
$$
End of explanation
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)
xbar = x.mean()
s2 = x.std(ddof=1)
xbar, s2
Explanation: Parameter estimation for the normal distribution
From here on we deal with continuous distributions; only the normal distribution is covered here.
The probability of each trial $x_i$ follows a Gaussian normal distribution
$$ P(x | \theta ) = N(x | \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Log-Likelihood
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \
&=& \sum_{i=1}^N \left{ -\dfrac{1}{2}\log(2\pi\sigma^2) - \dfrac{(x_i-\mu)^2}{2\sigma^2} \right} \
&=& -\dfrac{N}{2} \log(2\pi\sigma^2) - \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2
\end{eqnarray}
$$
Log-Likelihood Derivative
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \mu}
&=& \dfrac{\partial}{\partial \mu} \left{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right} = 0 \
\dfrac{\partial \log L}{\partial \sigma^2}
&=& \dfrac{\partial}{\partial \sigma^2} \left{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right} = 0\
\end{eqnarray}
$$
$$
\dfrac{2}{2\sigma^2}\sum_{i=1}^N (x_i-\mu) = 0
$$
$$
N \mu = \sum_{i=1}^N x_i
$$
$$
\mu = \dfrac{1}{N}\sum_{i=1}^N x_i = \bar{x}
$$
$$
\dfrac{N}{2\sigma^2 } - \dfrac{1}{2(\sigma^2)^2}\sum_{i=1}^N (x_i-\mu)^2 = 0
$$
$$
\sigma^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\bar{x})^2 = s^2
$$
End of explanation
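# Note (a sketch): the MLE variance divides by N (ddof=0); the cell above used
# ddof=1, which gives the unbiased sample estimate instead of the MLE.
print(x.var(ddof=0), x.var(ddof=1))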
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)
xbar = x.mean(axis=0)
S2 = np.cov(x, rowvar=0)
print(xbar)
print(S2)
Explanation: Parameter estimation for the multivariate normal distribution
We already covered this one last time.
There is no need to memorize this one.
MLE for Multivariate Gaussian Normal Distribution
The probability of each trial $x_i$ follows a multivariate normal distribution
$$ P(x | \theta ) = N(x | \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
With $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right)$$
Log-Likelihood
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \
&=& \sum_{i=1}^N \left{ -\log((2\pi)^{D/2} |\Sigma|^{1/2}) - \dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right} \
&=& C -\dfrac{N}{2} \log|\Sigma| - \dfrac{1}{2} \sum (x-\mu)^T \Sigma^{-1} (x-\mu)
\end{eqnarray}
$$
precision matrix $\Lambda = \Sigma^{-1}$
$$
\begin{eqnarray}
\log L
&=& C + \dfrac{N}{2} \log|\Lambda| - \dfrac{1}{2} \sum(x-\mu)^T \Lambda (x-\mu)
\end{eqnarray}
$$
$$ \dfrac{\partial L}{\partial \mu} = - \dfrac{\partial}{\partial \mu} \sum_{i=1}^N (x_i-\mu)^T \Lambda (x_i-\mu) = \sum_{i=1}^N 2\Lambda (x_i - \mu) = 0 $$
$$ \mu = \dfrac{1}{N}\sum_{i=1}^N x_i $$
$$ \dfrac{\partial L}{\partial \Lambda} = \dfrac{\partial}{\partial \Lambda} \dfrac{N}{2} \log|\Lambda| - \dfrac{\partial}{\partial \Lambda} \dfrac{1}{2} \sum_{i=1}^N \text{tr}( (x_i-\mu)(x_i-\mu)^T\Lambda) =0 $$
$$ \dfrac{N}{2} \Lambda^{-T} = \dfrac{1}{2}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
$$ \Sigma = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
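As a quick check of the $1/N$ normalisation (a sketch using the x and xbar from the cell above; note that np.cov divides by $N-1$ unless bias=True):
Sigma_mle = (x - xbar).T @ (x - xbar) / len(x)  # MLE covariance, 1/N normalisation
print(Sigma_mle)
print(np.cov(x, rowvar=0, bias=True))  # bias=True reproduces the 1/N estimate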
End of explanation |
11,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Getting Started with gensim
This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example.
Core Concepts and Simple Example
At a very high-level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, which is an algorithm that creates a different, usually more semantic, representation of the data. These three concepts are key to understanding how gensim works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
Corpus
A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
Step1: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenise our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
Step2: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
Step3: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries containing hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to
Step4: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts
Step5: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors
Step6: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors" | Python Code:
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
Explanation: # Getting Started with gensim
This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example.
Core Concepts and Simple Example
At a very high-level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, which is an algorithm that creates different, usually more semantic, representations of the data. These three concepts are key to understanding how gensim works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
Corpus
A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
End of explanation
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
Explanation: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenise our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
End of explanation
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Explanation: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
End of explanation
print(dictionary.token2id)
Explanation: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries containing hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
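To make the toy example concrete, here is a minimal sketch (added for illustration; the exact token IDs are an assumption, since gensim assigns them internally) that builds the four-word dictionary and vectorizes "coffee milk coffee":
from gensim import corpora
toy_dictionary = corpora.Dictionary([['coffee', 'milk', 'sugar', 'spoon']])
print(toy_dictionary.token2id)
print(toy_dictionary.doc2bow('coffee milk coffee'.split()))
# e.g. [(0, 2), (1, 1)] -- the sparse form of the dense vector [2, 1, 0, 0]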
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
End of explanation
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
Explanation: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:
End of explanation
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
Explanation: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors:
End of explanation
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
Explanation: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
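A minimal sketch of such an iterator is shown below; it assumes a hypothetical file documents.txt with one document per line (neither the file nor this class is part of the original tutorial):
class StreamingCorpus(object):
    def __init__(self, path, dictionary):
        self.path = path
        self.dictionary = dictionary
    def __iter__(self):
        # stream one bag-of-words vector at a time instead of keeping the whole list in memory
        with open(self.path) as f:
            for line in f:
                yield self.dictionary.doc2bow(line.lower().split())
streamed_corpus = StreamingCorpus('documents.txt', dictionary)  # hypothetical file path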
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors":
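As an extra illustration (not in the original text), the same trained model can also be applied lazily to the whole bag-of-words corpus, yielding one weighted vector per document:
corpus_tfidf = tfidf[bow_corpus]
for doc in corpus_tfidf:
    print(doc)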
End of explanation |
11,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Krisk also introduces a very simple way for you to resync data or create reproducible charts. Consider this plot,
Step1: Executing this code below
Step2: Would let you modify the plot above instead of returning a new plot. As you can see, this is useful when you combine it with ipywidgets to interact with your data.
Step3: You can also entirely replot the chart in the cell where the variable is displayed
Step4: Finally, there is the read_df method for reproducible charts. You take a plot you already have and use it with new data. This is especially useful when you already have a beautiful chart and want to share it with others, who only need to replace the data. Note that this is similar to resync_data, but instead of replacing the cell that contained the previous chart, it creates a new cell. | Python Code:
p = kk.bar(df[df.year == 1952],'continent',y='pop', how='mean')
p.set_size(width=800)
Explanation: Krisk also introduces a very simple way for you to resync data or create reproducible charts. Consider this plot,
End of explanation
p.resync_data(df[df.year == 2007])
Explanation: Executing this code below
End of explanation
def resync(year):
return p.resync_data(df[df.year == year])
interact(resync,year=IntSlider(min=df.year.min(),max=df.year.max(),step=5,value=1952))
Explanation: Would let you modify the plot above instead of returning a new plot. As you can see, this is useful when you combine it with ipywidgets to interact with your data.
End of explanation
p.replot(kk.line(df,'continent'))
Explanation: You can also entirely replot the chart in the cell where the variable is displayed
End of explanation
p.read_df(df)
Explanation: Finally, there is the read_df method for reproducible charts. You take a plot you already have and use it with new data. This is especially useful when you already have a beautiful chart and want to share it with others, who only need to replace the data. Note that this is similar to resync_data, but instead of replacing the cell that contained the previous chart, it creates a new cell.
End of explanation |
11,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 5
Problem 5-2
Step1: Description
Assume that the motor of Problem 5-1 is operating at rated conditions.
Step2: (a)
What are the magnitudes and angles of $\vec{E}_A$ and $\vec{I}_A$ , and $I_F$?
(b)
Suppose the load is removed from the motor.
What are the magnitudes and angles of $\vec{E}_A$ and $\vec{I}_A$ now?
SOLUTION
(a)
The line current flow at rated conditions is
Step3: Because the motor is $\Delta$-connected, the corresponding phase current is
Step4: The angle of the current is
Step5: The internal generated voltage $\vec{E}_A$ is
Step6: The field current is directly proportional to $|\vec{E}_A|$, which = 480V when $I_F = 4 A$. The required field current is
Step7: (b)
When the load is removed from the motor the magnitude of $|\vec{E}_A|$ remains unchanged but the torque
angle goes to $\delta = 0°$ . The resulting armature current is | Python Code:
%pylab notebook
%precision %.4g
import cmath
Explanation: Exercises Electric Machinery Fundamentals
Chapter 5
Problem 5-2
End of explanation
Vt = 480 # [V]
PF = 0.8
fse = 60 # [Hz]
p = 8.0
Pout = 400 * 746 # [W]
Xs = 0.6 # [Ohm]
Explanation: Description
Assume that the motor of Problem 5-1 is operating at rated conditions.
End of explanation
Pin = Pout
il = Pin / (sqrt(3) * Vt * PF)
il # [A]
Explanation: (a)
What are the magnitudes and angles of $\vec{E}_A$ and $\vec{I}_A$ , and $I_F$?
(b)
Suppose the load is removed from the motor.
What are the magnitudes and angles of $\vec{E}_A$ and $\vec{I}_A$ now?
SOLUTION
(a)
The line current flow at rated conditions is:
$$I_L = \frac{P}{\sqrt{3}V_TPF}$$
End of explanation
ia = il / sqrt(3)
ia # [A]
Explanation: Because the motor is $\Delta$-connected, the corresponding phase current is:
End of explanation
Ia_angle = arccos(PF)
Ia_angle /pi *180 # [degrees]
Ia = ia * (cos(Ia_angle) + sin(Ia_angle)*1j)
print('''
Ia = {:.0f} A ∠{:.2f}°
=================='''.format(abs(Ia), Ia_angle / pi *180))
Explanation: The angle of the current is:
End of explanation
EA = Vt - Xs * 1j * Ia
EA_angle = arctan(EA.imag/EA.real)
print('''
Ea = {:.0f} V ∠{:.1f}°
=================='''.format(abs(EA), EA_angle / pi *180))
Explanation: The internal generated voltage $\vec{E}_A$ is:
$$\vec{E}A = \vec{V}\phi - jX_S\vec{I}_A$$
End of explanation
If1 = 4 # [A]
Ea1 = 480 # [V]
Ea2 = abs(EA)
If2 = (Ea2/Ea1) * If1
print('''
If2 = {:.2f} A
============'''.format(If2))
Explanation: The field current is directly proportional to $|\vec{E}_A|$, which = 480V when $I_F = 4 A$. The required field current is:
$$\frac{|\vec{E}{A2}|}{|\vec{E}{A1}|} = \frac{I_{F2}}{I_{F1}}$$
End of explanation
delta_b = 0*pi/180 # [rad]
EA_b = abs(EA) *(cos(delta_b) + sin(delta_b)*1j)
EA_b_angle = arctan(EA_b.imag/EA_b.real)
Ia_b = (Vt - EA_b) / (Xs*1j)
Ia_b_angle = arctan(Ia_b.imag/Ia_b.real) # possible warning might occur because of division by zero
print('''
EA_b = {:.1f} ∠{:>2.0f}°
Ia_b = {:.1f} ∠{:>2.0f}°
=================='''.format(abs(EA_b), EA_b_angle/pi*180,
abs(Ia_b), Ia_b_angle/pi*180))
Explanation: (b)
When the load is removed from the motor the magnitude of $|\vec{E}_A|$ remains unchanged but the torque
angle goes to $\delta = 0°$ . The resulting armature current is:
$$\vec{I}A = \frac{\vec{V}\phi - \vec{E}_A}{jX_S}$$
End of explanation |
11,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimensionality Reduction
The sheer size of data in the modern age is not only a challenge for computer hardware but also a main bottleneck for the performance of many machine learning algorithms. The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables. If a strong correlation between variables exists, the attempt to reduce the dimensionality only makes sense. In a nutshell, this is what PCA is all about
Step1: Load Data
Step2: Framework
Step3: Principal Components Analysis
Often, the desired goal is to reduce the dimensions of a d-dimensional dataset by projecting it onto a (k)-dimensional subspace (where k < d) in order to increase the computational efficiency while retaining most of the information. An important question is “what is the size of k that represents the data ‘well’?”
Later, we will compute eigenvectors (the principal components) of a dataset and collect them in a projection matrix. Each of those eigenvectors is associated with an eigenvalue, which can be interpreted as the “length” or “magnitude” of the corresponding eigenvector. If some eigenvalues have a significantly larger magnitude than others, then the reduction of the dataset via PCA onto a smaller dimensional subspace by dropping the “less informative” eigenpairs is reasonable.
Exercise 1 - Explore the Iris Data Set
Original Data. Background Info.
Step14: Exercise 2 - Build a PCA Class
General Steps for PCA (walkthrough in R if you get stuck)
Step15: Exercise 3 - Try it out on the Iris Data Set
Plot the individual explained variance vs. cumulative explained variance.
Plot the Iris data set on the new 2-dimensional feature subspace.
Step16: Exercise 4 - Check via Scikit-Learn
This exercise was purely academic. You will always use an optimized version of PCA in practice.
Step17: The main component of the Iris data is the Sepal Length, which captures 92.5% of dataset variance. One would be justified in removing all the other dimensions. | Python Code:
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn import datasets
from sklearn.decomposition import PCA
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
Explanation: Dimensionality Reduction
The sheer size of data in the modern age is not only a challenge for computer hardware but also a main bottleneck for the performance of many machine learning algorithms. The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables. If a strong correlation between variables exists, the attempt to reduce the dimensionality only makes sense. In a nutshell, this is what PCA is all about: Finding the directions of maximum variance in high-dimensional data and project it onto a smaller dimensional subspace while retaining most of the information.
Imports
End of explanation
iris = sns.load_dataset('iris')
iris.loc[:, 'species'] = (iris.loc[:, 'species']
.astype('category'))
iris.info()
iris.head()
Explanation: Load Data
End of explanation
ax_formatter = {
'billions': FuncFormatter(lambda x, position: f'{x * 1e-9:.0f}'),
'millions': FuncFormatter(lambda x, position: f'{x * 1e-6:.0f}'),
'percent_convert': FuncFormatter(lambda x, position: f'{x * 100:.0f}%'),
'percent': FuncFormatter(lambda x, position: f'{x * 100:.0f}%'),
'thousands': FuncFormatter(lambda x, position: f'{x * 1e-3:.0f}'),
}
names = (
'Sepal Length',
'Sepal Width',
'Petal Length',
'Petal Width',
)
column_names = [x.replace(' ', '_').lower()
for x in names]
size = {
'label': 14,
'legend': 12,
'title': 20,
'super_title': 24,
}
Explanation: Framework
End of explanation
fig = plt.figure('Iris Violin Plot',
figsize=(12, 5), facecolor='white',
edgecolor='black')
rows, cols = (1, 2)
ax0 = plt.subplot2grid((rows, cols), (0, 0))
ax1 = plt.subplot2grid((rows, cols), (0, 1), sharey=ax0)
sns.boxplot(data=iris, width=0.4, ax=ax0)
sns.violinplot(data=iris, inner='quartile', ax=ax1)
for ax in (ax0, ax1):
ax.set_xlabel('Characteristics', fontsize=size['label'])
ax.set_xticklabels(names)
ax.set_ylabel('Centimeters $(cm)$', fontsize=size['label'])
plt.suptitle('Iris Dataset', fontsize=size['title']);
fig = plt.figure('Iris Data Distribution Plots', figsize=(10, 15),
facecolor='white', edgecolor='black')
rows, cols = (4, 2)
ax0 = plt.subplot2grid((rows, cols), (0, 0))
ax1 = plt.subplot2grid((rows, cols), (0, 1))
ax2 = plt.subplot2grid((rows, cols), (1, 0))
ax3 = plt.subplot2grid((rows, cols), (1, 1))
ax4 = plt.subplot2grid((rows, cols), (2, 0))
ax5 = plt.subplot2grid((rows, cols), (2, 1))
ax6 = plt.subplot2grid((rows, cols), (3, 0))
ax7 = plt.subplot2grid((rows, cols), (3, 1))
n_bins = 40
for n, ax, data in zip(range(4), (ax0, ax2, ax4, ax6), column_names):
iris[data].plot(kind='hist', alpha=0.5, bins=n_bins, color=f'C{n}',
edgecolor='black', label='_nolegend_', ax=ax)
ax.axvline(iris[data].mean(), color='crimson', label='Mean',
linestyle='--')
ax.axvline(iris[data].median(), color='black', label='Median',
linestyle='-.')
ax.set_title(names[n])
ax.set_ylabel('Count', fontsize=size['label'])
for n, ax, data in zip(range(4), (ax1, ax3, ax5, ax7), column_names):
sns.distplot(iris[data], axlabel=False, bins=n_bins,
hist_kws={'alpha': 0.5, 'color': f'C{n}',
'edgecolor': 'black'},
kde_kws={'color': 'darkblue', 'label': 'KDE'},
ax=ax)
ax.set_title(names[n])
ax.set_ylabel('Density', fontsize=size['label'])
for ax in (ax0, ax1, ax2, ax3, ax4, ax5, ax6, ax7):
ax.legend(fontsize=size['legend'])
ax.set_xlabel('Centimeters ($cm$)', fontsize=size['label'])
plt.tight_layout()
plt.suptitle('Iris Data Distribution Plots',
fontsize=size['super_title'], y=1.03);
grid = sns.pairplot(iris,
diag_kws={'alpha': 0.5, 'bins': 30, 'edgecolor': 'black'},
hue='species', markers=['o', 's', 'D'],
plot_kws={'alpha': 0.7})
grid.fig.suptitle('Iris Dataset Correlation',
fontsize=size['super_title'], y=1.03)
handles = grid._legend_data.values()
labels = grid._legend_data.keys()
grid._legend.remove()
grid.fig.legend(bbox_to_anchor=(1.02, 0.5), fontsize=size['legend'],
handles=handles,
labels=[x.capitalize() for x in labels],
loc='center right')
for n in range(4):
grid.axes[3, n].set_xlabel(names[n], fontsize=size['label'])
grid.axes[n, 0].set_ylabel(names[n], fontsize=size['label'])
plt.show();
Explanation: Principal Components Analysis
Often, the desired goal is to reduce the dimensions of a d-dimensional dataset by projecting it onto a (k)-dimensional subspace (where k < d) in order to increase the computational efficiency while retaining most of the information. An important question is “what is the size of k that represents the data ‘well’?”
Later, we will compute eigenvectors (the principal components) of a dataset and collect them in a projection matrix. Each of those eigenvectors is associated with an eigenvalue, which can be interpreted as the “length” or “magnitude” of the corresponding eigenvector. If some eigenvalues have a significantly larger magnitude than others, then the reduction of the dataset via PCA onto a smaller dimensional subspace by dropping the “less informative” eigenpairs is reasonable.
Exercise 1 - Explore the Iris Data Set
Original Data. Background Info.
End of explanation
class K2PCA:
    '''Class to perform a Principal Component Analysis.
    :Attributes:
    - **categories**: *pd.Series* categories of data
    - **covariance**: *np.array* covariance matrix of normalized data
    - **data**: *pd.DataFrame* original data
    - **eigen_val**: *np.array* covariance matrix eigenvalues
    - **eigen_vec**: *np.array* covariance matrix eigenvectors
    - **n_components**: *int* number of principal components to return
    - **normalize**: *np.array* normalized data
    - **trans_data**: *pd.DataFrame* original data transformed into the \
    two dimensional component space
    - **variance**: *np.array* percentage of variance by feature
    '''
def __init__(self, data, categories, n_components=None):
self.categories = categories
self.data = data
self.eigen_val = None
self.eigen_vec = None
self.n_components = n_components
self.trans_data =None
self._covariance = None
self._normalize = None
self._variance = None
@property
def covariance(self):
self.calc_normalize()
self.calc_covariance()
return self._covariance
@property
def normalize(self):
self.calc_normalize()
return self._normalize
@property
def variance(self):
self.calc_variance()
return self._variance
def __repr__(self):
        return f'K2PCA(data={self.data}, n_components={self.n_components})'
def fit(self):
        '''Standardize the data and determine the sorted eigenvalues and eigenvectors.'''
self.calc_normalize()
self.calc_covariance()
self.calc_eigen()
def calc_covariance(self):
        '''Calculate the covariance matrix.'''
self._covariance = np.cov(self._normalize, rowvar=False)
def calc_eigen(self):
        '''Calculate the covariance eigenvalues and eigenvectors.
        .. note:: NumPy eig returns the eigen vector as a column
        '''
if self._covariance is None:
self.calc_covariance()
self.eigen_val, self.eigen_vec = (np.linalg
.eig(self._covariance))
idx = self.eigen_val.argsort()[::-1]
self.eigen_val = self.eigen_val[idx]
self.eigen_vec = self.eigen_vec[:,idx].T
def calc_normalize(self, data=None):
        '''Standardize the data.
        :param pd.DataFrame data: data to be normalized
        '''
if data is None:
data = self.data
self._normalize = (sklearn.preprocessing
.StandardScaler()
.fit_transform(data))
def calc_variance(self):
        '''Calculate the percentage of variance by feature.'''
if self.eigen_val is None:
self.calc_eigen()
        self._variance = self.eigen_val / self.eigen_val.sum()  # use self, not the external pca object
def filter_components(self):
        '''Return the first n components specified by n_components attribute.'''
if self.eigen_val is None:
self.calc_eigen()
if self.n_components is not None:
            # calc_eigen already sorted these, so just truncate to the requested count
            self.eigen_val = self.eigen_val[:self.n_components]
            self.eigen_vec = self.eigen_vec[:self.n_components]
def plot_variance(self, save=False):
        '''Plot the feature percentage of variance per component.
        :param bool save: if True the figure will be saved
        '''
if self._variance is None:
self.calc_variance()
var_pct = pd.Series(self._variance)
cum_var_pct = var_pct.cumsum()
ax = (pd.concat([var_pct, cum_var_pct], axis=1)
.rename(index={x: x + 1 for x in range(var_pct.size)})
.plot(kind='bar', alpha=0.5, edgecolor='black',
figsize=(10, 5)))
ax.set_title('Dataset Components', fontsize=size['title'])
ax.legend(['Individual Variance', 'Cumulative Variance'],
fontsize=size['legend'])
ax.set_xlabel('Principal Components',
fontsize=size['label'])
ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=0)
ax.set_ylabel('Percent (%)', fontsize=size['label'])
ax.yaxis.set_major_formatter(ax_formatter['percent'])
for patch in ax.patches:
height = patch.get_height()
ax.text(x=patch.get_x() + patch.get_width() / 2,
y=height + 0.01,
s=f'{height * 100:1.1f}%',
ha='center')
if save:
plt.savefig(f'variance_pct.png', bbox_inches='tight',
bbox_extra_artists=[size['super_title']])
else:
plt.show()
def plot_transform_2d(self):
        '''Plot the original data in the 2D PCA space.'''
if self.trans_data is None:
self.transform_2d()
grid = sns.lmplot(x='comp_1', y='comp_2', data=pca.trans_data,
hue='categories', fit_reg=False,
markers=['o', 's', 'd'], size=6)
grid.fig.suptitle('Principal Components',
fontsize=size['title'], y=1.05)
handles = grid._legend_data.values()
labels = grid._legend_data.keys()
grid._legend.remove()
grid.fig.legend(bbox_to_anchor=(1.02, 0.94),
fontsize=size['legend'], handles=handles,
labels=[x.capitalize() for x in labels],
loc='center right')
grid.axes[0, 0].set_xlabel('PCA $1^{st}$ Component',
fontsize=size['label'])
grid.axes[0, 0].set_ylabel('PCA $2^{nd}$ Component',
fontsize=size['label'])
def transform_2d(self):
        '''Transform the original data into the 2D PCA space.'''
if self.eigen_vec is None:
self.calc_eigen()
trans = self.eigen_vec[:2].T
self.trans_data = (pd.DataFrame(self._normalize.dot(trans),
columns=['comp_1', 'comp_2'])
.assign(categories=self.categories))
self.trans_data.loc[:, 'comp_2'] = self.trans_data.comp_2 * -1
Explanation: Exercise 2 - Build a PCA Class
General Steps for PCA (walkthrough in R if you get stuck):
1. Standardize the data.
2. Obtain the Eigenvectors and Eigenvalues from the covariance matrix or correlation matrix, or perform Singular Vector Decomposition.
3. Sort eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues where k is the number of dimensions of the new feature subspace (k ≤ d).
4. Construct the projection matrix W from the selected k eigenvectors.
5. Transform the original dataset X via W to obtain a k-dimensional feature subspace Y.
The class should be able to:
- Calculate the principal components with an optional parameter
- Project onto a 2-dimensional feature space
End of explanation
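For reference, a compact NumPy-only sketch of the five steps above (an illustrative outline under the same standardize-then-eigendecompose assumptions as the K2PCA class above, not a replacement for it):
import numpy as np
from sklearn.preprocessing import StandardScaler
def pca_sketch(X, k=2):
    X_std = StandardScaler().fit_transform(X)      # 1. standardize
    cov = np.cov(X_std, rowvar=False)              # 2. covariance matrix + eigen-decomposition
    eig_val, eig_vec = np.linalg.eig(cov)
    order = eig_val.argsort()[::-1]                # 3. sort eigenvalues in descending order
    W = eig_vec[:, order[:k]]                      # 4. projection matrix from the top-k eigenvectors
    return X_std.dot(W)                            # 5. project onto the k-dimensional subspace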
pca = K2PCA(iris.drop('species', axis=1), iris.species)
pca.fit()
pca.plot_variance()
pca.plot_transform_2d()
Explanation: Exercise 3 - Try it out on the Iris Data Set
Plot the individual explained variance vs. cumulative explained variance.
Plot the Iris data set on the new 2-dimensional feature subspace.
End of explanation
pca = PCA(n_components=4)
pca.fit(sklearn.preprocessing
.StandardScaler()
.fit_transform(iris.drop('species', axis=1)))
var_pct = pd.Series(pca.explained_variance_ratio_)
cum_var_pct = var_pct.cumsum()
ax = (pd.concat([var_pct, cum_var_pct], axis=1)
.plot(kind='bar', alpha=0.5, edgecolor='black',
figsize=(10, 5)))
ax.set_title('Iris Data Components', fontsize=size['title'])
ax.legend(['Individual Variance', 'Cumulative Variance'],
fontsize=size['legend'])
ax.set_xlabel('Components', fontsize=size['label'])
ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=0)
ax.set_ylabel('Percent (%)', fontsize=size['label'])
ax.yaxis.set_major_formatter(ax_formatter['percent'])
for patch in ax.patches:
height = patch.get_height()
ax.text(x=patch.get_x() + patch.get_width() / 2,
y=height + 0.01,
s=f'{height * 100:1.1f}%',
ha='center')
plt.show();
Explanation: Exercise 4 - Check via Scikit-Learn
This exercise was purely academic. You will always use an optimized version of PCA in practice.
End of explanation
components = 2
pca = PCA(n_components=components)
X = iris.drop('species', axis=1)
X_std = (sklearn.preprocessing
.StandardScaler()
.fit_transform(X))
y = pca.fit_transform(X_std)
y = (pd.DataFrame(y, columns=['first', 'second'])
.assign(species=iris.species))
grid = sns.lmplot(x='first', y='second', data=y,
hue='species', fit_reg=False,
markers=['o', 's', 'd'], size=6)
grid.fig.suptitle('Principal Components',
fontsize=size['title'], y=1.05)
handles = grid._legend_data.values()
labels = grid._legend_data.keys()
grid._legend.remove()
grid.fig.legend(bbox_to_anchor=(1.02, 0.94), fontsize=size['legend'],
handles=handles,
labels=[x.capitalize() for x in labels],
loc='center right')
grid.axes[0, 0].set_xlabel('PCA $1^{st}$ Component',
fontsize=size['label'])
grid.axes[0, 0].set_ylabel('PCA $2^{nd}$ Component',
fontsize=size['label'])
plt.show();
Explanation: The main component of the Iris data is the Sepal Length, which captures 92.5% of dataset variance. One would be justified in removing all the other dimensions.
End of explanation |
11,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Numpy
master
Step1: Note that even when returning from a function, an explicit copy may not happen. See the following
example of a function that simply returns its input variable
Step2: Shallow copy
The shallow copy is very useful and extensively used. It is used when one wants to index the original array
through a change of dimensionality or through
re-slicing, but without the need to copy the raster data. In this way, access to the n-dimensional
array is optimized. There are several situations in which a shallow copy happens,
the main ones being
Step3: Slice - Slicing
The following example shows the shallow copy when slicing is used. In the example, all elements in even rows
and even columns are changed to 1. CAUTION | Python Code:
import numpy as np
a = np.arange(6)
b = a
print "a =\n",a
print "b =\n",b
b.shape = (2,3) # changing the shape of b,
print "\na shape =",a.shape # changes the shape of a
b[0,0] = -1 # changing the contents of b
print "a =\n",a # changes the contents of a
print "\nid de a = ",id(a) # id is a unique object identifier
print "id de b = ",id(b) # a and b have the same id
print 'np.may_share_memory(a,b):',np.may_share_memory(a,b)
Explanation: Introduction to Numpy
master:tutorial_numpy_1_index Tutorial Numpy - index
Copying ndarray variables
Index:
The ndarray was designed for optimized access to a large amount of data. In this sense, the concepts
described below about the three forms of copying between variables, called no copy, shallow copy and
deep copy, are fundamental for efficient coding. We can say that an ndarray has a
header containing the information about the element type, the dimensionality (shape) and
the step or offset to the next element (strides), plus the raster data itself. The table
below shows what happens to the header and to the data in each of the three types of copy.
|Type | Header: Type, Shape, Strides| Raster data | Example |
|---------------------|--------------------------------|-------------------|---------------------------|
|No copy, reference only| original pointer | original pointer| a = b |
|Shallow copy | new | original pointer|b = a.reshape, slicing, a.T|
|Deep copy | new | new |a = b.copy() |
No explicit copy, reference only
In the case below, we use the normal equals command to assign array a to array b.
Note that both the shape and the data of b are the same as those of a. Everything behaves as if b
were just a pointer to a. Any modification to b is reflected in a.
End of explanation
def cc(a):
return a
b = cc(a)
print "id de a = ",id(a)
print "id de b = ",id(b)
print 'np.may_share_memory(a,b):',np.may_share_memory(a,b)
Explanation: Note that even when returning from a function, an explicit copy may not happen. See the following
example of a function that simply returns its input variable:
End of explanation
a = np.arange(30)
print "a =\n", a
b = a.reshape( (5, 6))
print "b =\n", b
b[:, 0] = -1
print "a =\n", a
c = a.reshape( (2, 3, 5) )
print "c =\n", c
print 'c.base is a:',c.base is a
print 'np.may_share_memory(a,c):',np.may_share_memory(a,c)
Explanation: Shallow copy
The shallow copy is very useful and extensively used. It is used when one wants to index the original array
through a change of dimensionality or through
re-slicing, but without the need to copy the raster data. In this way, access to the n-dimensional
array is optimized. There are several situations in which a shallow copy happens,
the main ones being:
reshape, where the number of elements of the ndarray stays the same but its dimensionality
is changed;
slicing, where a subarray is indexed;
transposition of the array;
linearization of the raster through ravel(), among others.
Reshape
The following example first shows the creation of a sequential one-dimensional vector being "viewed"
in a two-dimensional or three-dimensional form.
End of explanation
import ia636 as ia
a = np.zeros( (5, 6))
print ia.iaimginfo(a)
b = a[::2,::2]
print ia.iaimginfo(b)
b[:,:] = 1.
print 'b=\n', b
print 'a=\n', a
print 'b.base is a:',b.base is a
print 'np.may_share_memory(a,b):',np.may_share_memory(a,b)
This other example is an attractive way to process a column of a two-dimensional matrix,
but CAUTION is needed: b must be used as b[:] when a new value is assigned to
it; otherwise, if we do b = arange(5), a new variable is created.
.. code:: python
a = np.arange(25).reshape((5,5))
print 'a=\n',a
b = a[:,0]
print 'b=',b
b[:] = np.arange(5)
print 'b=',b
print 'a=\n',a
Transpose
---------
The matrix transposition operation, which swaps rows and columns, also produces a *view*
of the image, with no copy required:
.. code:: python
a = np.arange(24).reshape((4,6))
print 'a:\n',a
at = a.T
print 'at:\n',at
print 'np.may_share_memory(a,at):',np.may_share_memory(a,at)
Ravel
-----
Applying the ``ravel()`` method to an ``ndarray`` generates a *view* of the linearized
(i.e. single-dimension) raster of the ``ndarray``.
.. code:: python
a = np.arange(24).reshape((4,6))
print 'a:\n',a
av = a.ravel()
print 'av.shape:',av.shape
print 'av:\n',av
print 'np.may_share_memory(a,av):',np.may_share_memory(a,av)
Deep copy
=========
This creates a complete copy of the array, of its shape and its contents. The recommendation is to use the
``copy()`` function to perform the deep copy, although it is also possible to obtain a
deep copy through ``np.array``.
.. code:: python
b = a.copy()
c = np.array(a, copy=True)
print "id de a = ",id(a)
print "id de b = ",id(b)
print "id de c = ",id(c)
Official Numpy Documentation
============================
`http://wiki.scipy.org/Tentative_NumPy_Tutorial#head-1529ae93dd5d431ffe3a1001a4ab1a394e70a5f2 Copies and Views`
Explanation: Slice - Slicing
The following example shows the shallow copy when slicing is used. In the example, all elements in even rows
and even columns are changed to 1. CAUTION: when the assignment b = 1. is made, it is important
that b be referenced as an ndarray in the form b[:,:]; otherwise, if we do b = 1., a new
variable is created.
End of explanation |
11,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nursery School Dataset
Team Members
Sharan
Srikanth
Kalyan
Nursery Database was derived from a hierarchical decision model originally developed to rank applications for nursery schools. It was used during several years in 1980's when there was excessive enrollment to these schools in Ljubljana, Slovenia, and the rejected applications frequently needed an objective explanation. The final decision depended on three subproblems
Step1: Summarizing the Dataset
1. Dimensions Of the DataSet.
We have some data and if we don't know how many features and instances our dataset has, it is of no use. So, we should find the dimensions of the dataset.
Step2: The above displayed result shows that the dataframe has 12960 rows and 9 columns. It means that we have 9 features and 12960 instances.
2. Looking at the data
It's a very good practice to look at your data before you proceed. We can see how our data is structured
Step3: 3. Summary of our data
We can look at the statistical summary of every feature. For example, the average of a feature, the mean, etc.
Step4: If we look at the above result closely, we can infer the following. Let us take the example of the feature 'parent'
count
Step5: In the above summary data, we can see that the class feature has 5 different observations. So, we should look at the data distribution of the expected output for every instance.
Here, as we can see, the recommend class occurs very rarely and is almost negligible. The very_recom class is seen in 328 observations, which is not too high but is enough to train our model.
The remaining three classes are almost equally distributed. So, we are good to go.
Data Visualization
Now that we have a rough idea of our dataset, we can understand it much better by visualizing it. Data can be visualized by either univariate plots or multivariate plots.
Univariate plots help us in understanding a particular feature.
Multivariate plots, on the other hand, help us in understanding the relationships between different features.
1. Univariate plots
Now we will start with univariate plots to understand each feature in our dataset. Run the cell below to get the same information. | Python Code:
# Importing the libraries which we need now.
import pandas
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
%matplotlib inline
# Dataset from - https://archive.ics.uci.edu/ml/datasets/Nursery
df = pandas.read_table('nursery.txt', sep=',', header=None, names=['parents', 'has_nurs', 'form', 'children','housing',
'finance','social','health','class'])
Explanation: Nursery School Dataset
Team Members
Sharan
Srikanth
Kalyan
Nursery Database was derived from a hierarchical decision model originally developed to rank applications for nursery schools. It was used during several years in 1980's when there was excessive enrollment to these schools in Ljubljana, Slovenia, and the rejected applications frequently needed an objective explanation. The final decision depended on three subproblems: occupation of parents and child's nursery, family structure and financial standing, and social and health picture of the family. The model was developed within expert system shell for decision making DEX
Now we will read the data and load it into a dataframe in Python.
End of explanation
# shape attribute gives the dimensions of a dataframe
print df.shape
Explanation: Summarizing the Dataset
1. Dimensions Of the DataSet.
We have some data and if we don't know how many features and instances our dataset has, it is of no use. So, we should find the dimensions of the dataset.
End of explanation
# Output printing out the first 5 rows
df.head()
Explanation: The above displayed result shows that the dataframe has 12960 rows and 9 columns. It means that we have 9 features and 12960 instances.
2. Looking at the data
It's a very good practice to look at your data before you proceed. We can see how our data is structured
End of explanation
# The describe function prints the summary of the data
print(df.describe())
Explanation: 3. Summary of our data
We can look at the statistical summary of every feature. For example, the average of a feature, the mean, etc.
End of explanation
# The group by function summarizes a particular feature.
print(df.groupby('class').size())
print("\n")
print(df.groupby('parents').size())
print("\n")
print(df.groupby('has_nurs').size())
print("\n")
print(df.groupby('form').size())
print("\n")
print(df.groupby('children').size())
print("\n")
print(df.groupby('housing').size())
print("\n")
print(df.groupby('finance').size())
print("\n")
print(df.groupby('social').size())
print("\n")
print(df.groupby('health').size())
Explanation: If we look at the above result closely, we can infer the following. Let us take the example of the feature 'parent':
count : 12960 (It gives the number of instances of that feature. Here all features have the same count, so we can infer that there are no missing values.)
unique : 3 (This number says that there are 3 different categories of the feature parent.)
top : usual (This gives the category which occurred the highest number of times in that feature.)
freq : 4320 (This gives the frequency of the top-occurring category in that feature.)
4. Class distribution
We can find out how the data is distributed. Because sometimes we might get some data which may be skewed. This leads to unexpected behaviour of our prediction model.
End of explanation
plt.rcParams['figure.figsize'] = (5,5)
df['parents'].value_counts().plot(kind='bar')
plt.title('Parents')
plt.show()
df['has_nurs'].value_counts().plot(kind='bar')
plt.title('has_nurs')
plt.show()
df['form'].value_counts().plot(kind='bar')
plt.title('Form')
plt.show()
df['children'].value_counts().plot(kind='bar')
plt.title('Children')
plt.show()
df['housing'].value_counts().plot(kind='bar')
plt.title('Housing')
plt.show()
df['finance'].value_counts().plot(kind='bar')
plt.title('Finance')
plt.show()
df['social'].value_counts().plot(kind='bar')
plt.title('Social')
plt.show()
df['health'].value_counts().plot(kind='bar')
plt.title('Health')
plt.show()
df['class'].value_counts().plot(kind='bar')
plt.title('Class')
plt.show()
Explanation: In the above summary data, we can see that the class feature has 5 different observations. So, we should look at the data distribution of the expected output for every instance.
Here, as we can see, the recommend class occurs very rarely and is almost negligible. The very_recom class is seen in 328 observations, which is not too high but is enough to train our model.
The remaining three classes are almost equally distributed. So, we are good to go.
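To quantify this imbalance, an optional check (illustrative, not part of the original analysis) is to print the class proportions directly:
# Proportion of each class label; the values sum to 1
print(df.groupby('class').size() / len(df))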
Data Visualization
Now that we have a rough idea of our dataset, we can understand it much better by visualizing it. Data can be visualized by either univariate plots or multivariate plots.
Univariate plots help us in understanding a particular feature.
Multivariate plots, on the other hand, help us in understanding the relationships between different features.
1. Univariate plots
Now we will start with univariate plots to understand each feature in our dataset. Run the cell below to get the same information.
End of explanation |
11,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connecting Job Titles by Their Similarity Scores
Step1: Data loading
Step2: After fixing some bugs in the job title parsing, we re-parsed the problematic job titles with the title_parse.py script. Thus, we need to reload the newly parsed titles.
Step3: Helpers
Step4: Given the parsed titles, we now compute pairwise sims among them based on
Step5: First, we look at the top-10 domains with most no. of titles.
Step6: Among the top-10, Marketing is the one with least no. of titles. So we use it to test run the funcs for sim cals.
Marketing
Step7: 1st run
Step8: Ans
Step9: Other domains
We decide the priority for domains to consider based on the no. of job titles under domain. Thus we look at the distribution of job titles in domains.
Step10: On avg, no. of job titles per domain is $2.7$.
Step11: Re-compute sims after title standardization
Reload data after standardizing job titles
Job posts
Step12: Domains with at least 20 job titles
Step13: Domains with # titles in range [10, 20)
Step14: Domains with # titles in range [2, 10)
Step15: This is a large number. We need to break down more.
Step16: Should we group similar domains together?
We can spot similar domain names e.g. account and accounts, financial and finance. Should we group them together?
Similarity By Job's Primary Function
Step17: Similarity of Job Titles in one Industry
We investigate topic similarity of job titles in a given industry.
Challenges
Step18: Sort industries by number of job titles
Step19: Industries in 1st quarter
Step20: Similarity of Job Titles in Agriculture and Fishing
Some titles do not belong to Agriculture and Fishing, e.g., Signal Processing Engineer, Account Clerk, Accounts Analyst. Assigning the industry of the employer to the post (and its title) seems to be problematic.
Similarity of Job Titles in Mining and Quarrying
Similarity of Job Titles in Electricity, Gas and Air-Conditioning Supply
Step21: Industries in 2nd Quarter
Step22: Industries in 3rd Quarter
Step23: Industries in 4th Quarter
Step24: Similarity By Job's Domain
Step25: Similarity By Job's Primary Function | Python Code:
import my_util as my_util; from my_util import *
import cluster_skill_helpers as cluster_skill_helpers
from cluster_skill_helpers import *
import os
import random
from time import time
import gc
# Turn on auto garbage collection
gc.enable()
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
SKILL_DAT = HOME_DIR + 'data/clean/skill_cluster/'
SKILL_RES = HOME_DIR + 'results/' + 'skill_cluster/new/'
JOB_PROF = SKILL_RES + 'job_prof/'
Explanation: Connecting Job Titles by Their Similarity Scores
End of explanation
title_stats = pd.read_csv(DATA_DIR + 'stats/job_titles.csv')
titles = title_stats['title']
print('# titles: %d' %len(titles))
# Before standardizing job titles
df = pd.read_csv(SKILL_DAT + 'filter_doc_index.csv')
print df.shape
df.set_index('index', inplace=True)
with(open(SKILL_RES + 'doc_20topic_distr.mtx', 'r')) as f:
doc_topic_distr = mmread(f)
n_topic = doc_topic_distr.shape[1]
print('# topics: %d' %n_topic)
topic_df = pd.read_csv(SKILL_RES + 'lda/20_topics.csv')
labels = map(str.upper, topic_df['label'])
# Parsed titles:
parsed_titles = pd.read_csv(DATA_DIR + 'parsed_titles.csv')
print('# titles parsed: {}'.format(parsed_titles.shape[0]))
# Domains
domain_df = pd.read_csv(DATA_DIR + 'stats/domains.csv')
Explanation: Data loading:
End of explanation
title_df = pd.read_csv(DATA_DIR + 'titles_2posts_up.csv')
print('# parsed titles: %d' % len(title_df.title))
Explanation: After fixing some bugs in the job title parsing, we re-parsed the problematic job titles with the title_parse.py script. Thus, we need to reload the newly parsed titles.
End of explanation
reload(my_util)
from my_util import *
reload(cluster_skill_helpers)
from cluster_skill_helpers import *
def vizPostPair(i, sim_df, labels, abbv_title=''):
fig = vizDists4Pair(sim_df.iloc[i], df, doc_topic_distr, labels)
fig.savefig(SKILL_RES + 'fig/{}_p{}.pdf'.format(abbv_title, i+1))
plt.show(); plt.close()
## Funcs for sim cals
def byRelevanceSims(domain='software', pri_func='engineer', min_post=2, verbose=False):
if domain:
chosen_titles = titlesWithAtLeast(titlesIn(domain, title_df),
min_post)
if pri_func:
chosen_titles = titlesWithAtLeast(titlesWith(pri_func, title_df),
min_post)
n_title = len(chosen_titles)
print('\t# titles with at least {} posts: {}'.format(min_post, n_title))
sims = simsAmong(chosen_titles, doc_topic_distr, df, verbose)
return sims
def calSims(domain='software', pri_func='engineer', verbose=False):
'''
@param: either domain or pri_func, not both
'''
t0 = time()
if domain:
print('Domain: {}'.format(domain))
sims = byRelevanceSims(domain=domain, pri_func=None, verbose=verbose)
if pri_func:
print('Primary function: {}'.format(pri_func))
sims = byRelevanceSims(domain=None, pri_func=pri_func, verbose=verbose)
elapse = round(time() - t0, 1)
print('\tFinished sim cals after {}s'.format(elapse))
if not sims.empty:
sims = sims.sort_values('topic_sim', ascending=False)
if domain: fname = domain_dir + 'sims/new/{}.csv'.format(clean(domain))
if pri_func: fname = func_dir + 'sims/new/{}.csv'.format(clean(pri_func))
sims.to_csv(fname, index=False)
print('\tSaved sims to file: {}'.format(fname))
return sims
def topkSim(title, k, sims):
s1 = sims[sims['t1'] == title]
s2 = sims[sims['t2'] == title]
# Hacking to swap two cols
s2 = s2.rename(columns={'t1': 'tmp', 't2':'t1'})
s2 = s2.rename(columns={'tmp': 't2'})
res = pd.concat([s1, s2])
res['title_n_sim'] = res['t2'] + '(' + map(str, res['topic_sim']) + ')'
res = res.sort_values('topic_sim', ascending=False)
return ','.join(list(res.head(k)['title_n_sim']))
def findTopkSimByDomain(domain, sims, k=2):
chosen_titles = set(sims['t1']).union(set(sims['t2']))
topk_res = pd.DataFrame({'title': chosen_titles})
col_topk = 'top_{}_sim'.format(k)
topk_res[col_topk] = topk_res['title'].apply(topkSim, k=k, sims=sims)
fname = domain_dir + 'topk/{}_top{}.csv'.format(clean(domain), k)
topk_res.to_csv(fname, index=False)
print('\tSaved top-{} similar title result'.format(k))
def findTopkSimByFunc(pri_func, sims, k=2):
chosen_titles = set(sims['t1']).union(set(sims['t2']))
topk_res = pd.DataFrame({'title': chosen_titles})
col_topk = 'top_{}_sim'.format(k)
topk_res[col_topk] = topk_res['title'].apply(topkSim, k=k, sims=sims)
fname = func_dir + 'topk/{}_top{}.csv'.format(clean(pri_func), k)
topk_res.to_csv(fname, index=False)
print('\tSaved top-{} similar title result'.format(k))
def domainsInRange(min_n_title, max_n_title):
q = '{} <= n_title < {}'.format(min_n_title, max_n_title)
domains = by_domain_agg.query(q)['domain']
msg = '# domains with # titles in range [{}, {}): {}'.format(min_n_title, max_n_title, len(domains))
print(msg)
return domains
Explanation: Helpers
End of explanation
# Get needed stats from agg
by_domain_agg = pd.read_csv(DATA_DIR + 'stats/domains.csv')
n_domain = by_domain_agg.shape[0]
print('# domains: {}'.format(n_domain))
Explanation: Given the parsed titles, we now compute pairwise sims among them based on:
+ domain
+ primary funcs.
Similarity By Job's Domain
End of explanation
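The actual pairwise similarity is computed inside cluster_skill_helpers (not shown in this notebook), so the sketch below only illustrates the general idea under an assumed cosine measure between title-level topic distributions; the aggregation step and the names used here are assumptions, not the project's real implementation:
import numpy as np
def topic_cosine_sim(p, q):
    # p, q: topic-distribution vectors for two job titles, e.g. the mean of the
    # doc_topic_distr rows of all posts carrying each title (assumed aggregation)
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    return float(p.dot(q) / (np.linalg.norm(p) * np.linalg.norm(q)))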
by_domain_agg.head(10)
Explanation: First, we look at the top-10 domains with most no. of titles.
End of explanation
mkt_sims = calSims('marketing')
findTopkSimByDomain('marketing', mkt_sims)
Explanation: Among the top-10, Marketing is the one with least no. of titles. So we use it to test run the funcs for sim cals.
Marketing
End of explanation
# posts by Senior Marketing Engineer
sme_posts = df[df.title == 'Senior Marketing Engineer']
print('# posts by Senior Marketing Engineer: %d' %sme_posts.shape[0])
# posts by Senior Marketing Representative
smr_posts = df[df.title == 'Senior Marketing Representative']
print('# posts by Senior Marketing Representative: %d' %smr_posts.shape[0])
Explanation: 1st run: 'Senior Marketing Engineer' has highest sim with 'Senior Marketing Representative'!!! Why?
End of explanation
sim_dir = JOB_PROF + 'by_domain/sims/'
top9_domains = by_domain_agg.head(9)['domain']
for dom in top9_domains:
sims = calSims(dom)
findTopkSimByDomain(dom, sims, k=5)
Explanation: Ans: forgot to filter out titles with only 1 post.
We fixed this bug, re-ran the computation, and got rid of this issue. Now, we can run it for the other domains in the top-10.
Top-9 Domains:
End of explanation
by_domain_agg.describe().round(1)
Explanation: Other domains
We decide the priority for domains to consider based on the no. of job titles under domain. Thus we look at the distribution of job titles in domains.
End of explanation
fig = plt.figure()
plt.hist(by_domain_agg.n_title)
mean_n_title = round(by_domain_agg.n_title.mean(), 1)
xl = '# job titles' + r'$(\mu = {})$'.format(mean_n_title)
plt.xlabel(xl, fontsize=16);
plt.ylabel('# domains', fontsize=16)
plt.savefig(JOB_PROF + 'title_dist_by_domain.pdf')
plt.show(); plt.close()
Explanation: On avg, no. of job titles per domain is $2.7$.
End of explanation
df = pd.read_csv(SKILL_DAT + 'filter_doc_index.csv')
print df.shape
df.set_index('index', inplace=True)
title_stats = getTitleStats(df)
# title_stats.to_csv(DATA_DIR + 'stats/job_titles.csv', index=False)
title_stats.head()
print('# job titles after standardizing: %d' %title_stats.shape[0])
by_n_post.head()
# swr_sims = calSims(domain='software', pri_func=None)
viz(swr_sims, domain='software')
# mkt_sims = calSims(domain='marketing', pri_func=None, verbose=True)
# viz(mkt_sims, domain='marketing')
findTopkSimByDomain(domain='marketing', k=5, sims=mkt_sims)
Explanation: Re-compute sims after title standardization
Reload data after standardizing job titles
Job posts:
End of explanation
domain_20_ = by_domain_agg.query('20 <= n_title')['domain']
print('# domains with at least 20 job titles: %d' %len(domain_20_))
for dom in domain_20_:
sims = calSims(dom)
print('\tViz pairwise sims among job titles')
viz(sims, domain=dom)
# findTopkSimByDomain(dom, sims, k=5)
Explanation: Domains with at least 20 job titles:
End of explanation
domain_10_20 = domainsInRange(10, 20)
for dom in domain_10_20:
sims = calSims(dom)
# findTopkSimByDomain(dom, sims, k=5)
Explanation: Domains with # titles in range [10, 20):
End of explanation
domain_2_10 = domainsInRange(2, 10)
Explanation: Domains with # titles in range [2, 10):
End of explanation
dom_5_10 = domainsInRange(5, 10)
# 1st half
for dom in dom_5_10[:64] :
sims = calSims(dom)
findTopkSimByDomain(dom, sims, k=3)
# 2nd half
for dom in dom_5_10[76:]: # 64
sims = calSims(dom)
if not sims.empty: findTopkSimByDomain(dom, sims, k=3)
dom_2_5 = domainsInRange(2, 5)
list(dom_2_5).index('personnel / human resource')
# 1st half
for dom in dom_2_5[100:300]:
sims = calSims(dom)
if not sims.empty: findTopkSimByDomain(dom, sims, k=2)
for dom in dom_2_5[300:]:
sims = calSims(dom)
if not sims.empty: findTopkSimByDomain(dom, sims, k=2)
Explanation: This is a large number, so we need to break it down further.
End of explanation
by_func_agg = pd.read_csv(DATA_DIR + 'stats/pri_funcs.csv')
by_func_agg.describe().round(1)
n_func = by_func_agg.shape[0]
print('# primary functions: %d' %n_func)
by_func_agg.sort_values('n_title', inplace=True)
# by_func_agg.query('pri_func == "developer"')
# by_func_agg.query('pri_func == "teacher"')
by_func_agg.query('pri_func == "programmer"')
# Functions with largest no. of titles
by_func_agg.tail()
chosen_funcs = by_func_agg.query('n_title >= 2')['pri_func']
print('# funcs with at least 2 job titles: %d' %len(chosen_funcs))
chosen_funcs.tail()
teacher_sims = calSims(pri_func='teacher', domain=None)
findTopkSimByFunc('teacher', teacher_sims, k=5)
def functionInRange(min_n_title, max_n_title=None):
if max_n_title:
q = '{} <= n_title < {}'.format(min_n_title, max_n_title)
else:
q = '{} <= n_title'.format(min_n_title)
funcs = list(by_func_agg.query(q)['pri_func'])
text = '#pri-funcs having # titles in range [{}, {}): {}'
if max_n_title:
msg = text.format(min_n_title, max_n_title, len(funcs))
else:
msg = text.format(min_n_title, np.infty, len(funcs))
print(msg)
return funcs
func_2_5 = functionInRange(2, 5)
func_5_100 = functionInRange(5, 100)
print('\tViz pairwise sims among job titles')
viz(sims, pri_func=pri_func)
for func in func_5_100:
sims = calSims(pri_func=func, domain=None)
print('\tViz pairwise sims among job titles')
viz(sims, pri_func=func)
# findTopkSimByFunc(func, sims)
func_100_ = functionInRange(min_n_title=100)
supervisor_sims = pd.read_csv(func_dir + 'sims/supervisor.csv')
findTopkSimByFunc('supervisor', k=5, sims=supervisor_sims)
by_sec_func = parsed_titles.groupby('sec_func')
n_sec_func = by_sec_func.ngroups
print('# secondary funcs: %d' %n_sec_func)
Explanation: Should we group similar domains together?
We can spot similar domain names, e.g. account and accounts, financial and finance. Should we group them together? A quick sketch of such a grouping is given below.
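As a rough illustration (not part of the original analysis), near-duplicate domain names such as 'account'/'accounts' could be grouped by string similarity; the 0.85 threshold below is an arbitrary assumption.
import difflib

def group_similar_domains(domains, threshold=0.85):
    '''Greedily cluster domain names whose string similarity exceeds threshold.'''
    groups = []
    for d in sorted(set(domains)):
        for g in groups:
            # compare against the representative (first member) of each group
            if difflib.SequenceMatcher(None, d, g[0]).ratio() >= threshold:
                g.append(d)
                break
        else:
            groups.append([d])
    return groups

# e.g. group_similar_domains(by_domain_agg['domain']) would put 'account' and 'accounts' together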
Similarity By Job's Primary Function
End of explanation
def calTitleSims(row):
'''
@param: row contains full name and short name of an industry
@return: pairwise sims bw titles with at least 2 posts in the industry
'''
def titleSims(industry='Education', short_name='edu'):
posts = df[df.industry == industry]
stats = getTitleStats(posts)
stats.to_csv(directory + 'title_stats.csv', index=False)
# only analyze titles with at least 2 posts in the industry
titles = list(stats.query('n_post >= 2')['title'])
sims = simsOfTitles(titles, industry)
return sims.sort_values('topic_sim', ascending=False)
industry = row['industry']; short_name = row['short_name']
print('\nIndustry %s:' %industry)
directory = JOB_PROF + '{}/'.format(short_name)
if not os.path.exists(directory): os.makedirs(directory)
title_sims = titleSims(industry, short_name)
title_sims.to_csv(directory + 'sims.csv', index=False)
print('Saved pairwise sims of job titles in {}'.format(industry))
by_industry = df.groupby('industry')
agg_df = by_industry.agg({'job_id': len, 'title': 'nunique', 'employer_id': 'nunique'})
agg_df.reset_index(inplace=True)
agg_df = agg_df.rename(columns={'employer_id': 'n_employer', 'job_id': 'n_post',
'title': 'n_title'})
Explanation: Similarity of Job Titles in one Industry
We investigate topic similarity of job titles in a given industry.
Challenges:
+ a job title can appear in several industries in different job posts and the content of the posts can vary a lot. How to define/limit to posts of a job title in a given industry?
End of explanation
agg_df.sort_values('n_title', ascending=False, inplace=True)
agg_df.to_csv(DATA_DIR + 'stats/industry_stat.csv', index=False)
agg_df = pd.read_csv(DATA_DIR + 'stats/industry_stat.csv')
Explanation: Sort industries by number of job titles:
End of explanation
q1 = agg_df.n_title.quantile(.25)
q1_industries = agg_df[agg_df.n_title <= q1]
def simWrapper(industries):
n_industry = industries.shape[0]
for i in range(n_industry):
calTitleSims(industries.iloc[i])
time.sleep(3)
simWrapper(q1_industries)
Explanation: Industries in 1st quarter
End of explanation
electric_sims = pd.read_csv(JOB_PROF + 'electric/sims.csv')
electric_sims.shape
fig, ax = plt.subplots()
vizTopicSim(electric_sims)
fig.subplots_adjust(bottom=0.2)
plt.savefig(electric_dir + 'sim_dist.pdf')
plt.show(); plt.close()
Explanation: Similarity of Job Titles in Agriculture and Fishing
Some titles do not belong to Agriculture and Fishing, e.g., Signal Processing Engineer, Account Clerk, Accounts Analyst. Assigning the employer's industry to the post (and its title) therefore seems problematic.
Similarity of Job Titles in Mining and Quarrying
Similarity of Job Titles in Electricity, Gas and Air-Conditioning Supply
End of explanation
q2 = agg_df.n_title.quantile(.50)
q2_industries = agg_df.query('{} < n_title and n_title <= {}'.format(q1, q2))
q2_industries.reset_index(inplace=True)
del q2_industries['index']
q2_industries
calTitleSims(q2_industries.iloc[3])
Explanation: Industries in 2nd Quarter
End of explanation
q3 = agg_df.n_title.quantile(.75)
agg_df.query('{} < n_title and n_title <= {}'.format(q2, q3))
edu_sims = titleSims(industry='Education', short_name='edu')
edu_sims.to_csv(JOB_PROF + 'edu/sims.csv', index=False)
edu_sims.head(10)
Explanation: Industries in 3rd Quarter
End of explanation
title_df = pd.read_csv(DATA_DIR + 'stats/job_titles.csv')
titles = list(title_df['title'])
print('# titles: %d' %len(titles))
Explanation: Industries in 4th Quarter
End of explanation
domain_df = pd.read_csv(DATA_DIR + 'stats/domains.csv')
n_domain = domain_df.shape[0]
print('# domains: %d' %n_domain)
domains_2 = domain_df.query('n_title > 1')['domain']
print('# domains with at least 2 job titles: %d' %len(domains_2))
mkt_sims = byRelevanceSims('marketing')
Explanation: Similarity By Job's Domain
End of explanation
n_pri_func = pd.read_csv(DATA_DIR + 'stats/pri_funcs.csv').shape[0]  # assumption: count the rows of the primary-function stats file, as done for by_func_agg above
print('# primary functions: %d' %n_pri_func)
Explanation: Similarity By Job's Primary Function
End of explanation |
11,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It would be quite unwise to train on the whole train set, because we would then overfit:
the algorithm would fit patterns that are specific to the train set only,
and would fail badly on real data. So we split the train set into two parts
Step1: For every image we want to find a vector $(p_0, p_1)$ of probabilities such that $p_i$ is the probability that the image belongs to class $i$ ($0$ — outdoor, $1$ — indoor).
Implementing logistic regression, we want to push these probabilities towards their true distribution.
The expression produces an output of the form $$ W x + b, $$
where $x$ is the image vector and the result is a numeric vector of dimension $2$ with arbitrary values. To turn these values into probabilities between $0$ and $1$, we implement the function
$$
\text{softmax}(W, b, x) = \frac{e^{Wx+b}}{\sum(e^{Wx+b})},
$$
whose outputs sum to 1, and we use them to approximate the probabilities.
We will measure the quality of the model with cross-entropy, see https://en.wikipedia.org/wiki/Cross_entropy.
Step2: Compute the mean squared error on the test split to get a rough idea of the score on Kaggle.
Step3: Теперь применяем найденную матрицу к исследумемым данным. | Python Code:
# Split the samples into outdoor and indoor parts.
sample_out = sample[result[:, 0] == 1]
sample_in = sample[result[:, 1] == 1]
result_out = result[result[:, 0] == 1]
result_in = result[result[:, 1] == 1]
# Compute the sizes of the indoor and outdoor portions of the train split.
train_size_in = int(sample_in.shape[0] * 0.75)
train_size_out = int(sample_out.shape[0] * 0.75)
# Split the outdoor and indoor samples into train and test parts.
x_train_out, x_test_out = np.split(sample_out, [train_size_out])
y_train_out, y_test_out = np.split(result_out, [train_size_out])
x_train_in, x_test_in = np.split(sample_in, [train_size_in])
y_train_in, y_test_in = np.split(result_in, [train_size_in])
# Build the combined train and test sets by mixing the indoor and outdoor samples.
x_train = np.vstack([x_train_in, x_train_out])
y_train = np.vstack([y_train_in, y_train_out])
x_test = np.vstack([x_test_in, x_test_out])
y_test = np.vstack([y_test_in, y_test_out])
Explanation: It would be quite unwise to train on the whole train set, because we would then overfit:
the algorithm would fit patterns that are specific to the train set only,
and would fail badly on real data. So we split the train set into two parts:
we train on 75% and use the remaining 25% to check that we are not failing too badly.
If we simply took the first 25% of the whole train set, the number of outdoor and indoor
images could be unbalanced. So for training we take the first 75% of the outdoor images plus
the first 75% of the indoor images. This preserves the outdoor:indoor proportion of the
whole train set. It would be especially nice if the evaluation data followed the same
proportion.
End of explanation
def softmax(W, x):
# The logsumexp function evaluates the sum of exponentials in a numerically stable way,
# largely protecting us from overflow problems.
p = np.dot(x, W.T)
return np.exp(p - scm.logsumexp(p, axis=1).reshape(-1, 1))
def loss(y, softmax, W, l):
# Formula from the Wikipedia page linked above, with the regulariser added.
return np.mean(-np.sum(y * np.log(softmax), axis=1)) + l * np.trace(W @ W.T) / (2 * y.shape[0])
# Compute the gradient averaged over all images.
# Here the gradient is not a vector, as usual, but a 2x3073 matrix.
def gradients(W, x, y, l):
p = softmax(W, x)
grads = (p - y).T @ x + l * W
return grads / x.shape[0] # use matrix operations as much as possible!
# Step-size selection by the Armijo rule from the seminar handout.
def armijo(W, x, y, l, alpha=0.5, beta=0.5):
s = 1
grad = gradients(W, x, y, l)
dW = -grad # The descent direction.
loss_1 = loss(y_train, softmax(W + s * dW, x), W, l)
loss_0 = loss(y_train, softmax(W, x), W, l)
while loss_1 > loss_0 + alpha * s * (grad * dW).sum():
s = beta * s
loss_1 = loss(y_train, softmax(W + s * dW, x), W, l)
loss_0 = loss(y_train, softmax(W, x), W, l)
return s
def classify(x_train, x_test, y_train, y_test, iters, l):
# As noted above, W is 2 x 3072 and b has size 2, but we append b to W.
W = np.zeros((2, 3072))
b = np.zeros(2)
# To append it, write b as a column vector and use hstack.
b = b.reshape(b.size, 1)
W = np.hstack([W, b])
# Accordingly, x_train and x_test must be extended with an extra 1.
fictious = np.ones((x_train.shape[0], 1))
x_train = np.hstack([x_train, fictious])
fictious = np.ones((x_test.shape[0], 1))
x_test = np.hstack([x_test, fictious])
# Record the loss at every step of the descent.
losses_train = [loss(y_train, softmax(W, x_train), W, l)]
losses_test = [loss(y_test, softmax(W, x_test), W, l)]
# The descent itself.
for i in tqdm.tqdm(np.arange(iters)):
# Exactly so - Armijo is called with alpha = l and l = 0!
# Because I made a mistake there and did not notice it! =)
eta = armijo(W, x_train, y_train, 0, l)
W = W - eta * gradients(W, x_train, y_train, l)
losses_train.append(loss(y_train, softmax(W, x_train), W, l))
losses_test.append(loss(y_test, softmax(W, x_test), W, l))
# The output is the optimal W and the arrays of losses.
return W, losses_train, losses_test
l = 0.04 # Works better than no regulariser at all (l = 0).
# 100 iterations are enough; overfitting starts fairly quickly.
W, losses_train, losses_test = classify(x_train, x_test, y_train, y_test, 100, l)
plt.plot(losses_train, color='green', label='train')
plt.plot(losses_test, color='red', label='test')
plt.xlabel('Gradient descent iteration')
plt.ylabel('Loss')
plt.legend()
plt.show()
iters = np.argmin(losses_test) # At this iteration the test loss is minimal.
# Run the descent for that many iterations.
W, losses_train, losses_test = classify(x_train, x_test, y_train, y_test, iters, l)
Explanation: For every image we want to find a vector $(p_0, p_1)$ of probabilities such that $p_i$ is the probability that the image belongs to class $i$ ($0$ — outdoor, $1$ — indoor).
Implementing logistic regression, we want to push these probabilities towards their true distribution.
The expression produces an output of the form $$ W x + b, $$
where $x$ is the image vector and the result is a numeric vector of dimension $2$ with arbitrary values. To turn these values into probabilities between $0$ and $1$, we implement the function
$$
\text{softmax}(W, b, x) = \frac{e^{Wx+b}}{\sum(e^{Wx+b})},
$$
whose outputs sum to 1, and we use them to approximate the probabilities.
We will measure the quality of the model with cross-entropy, see https://en.wikipedia.org/wiki/Cross_entropy.
First note that $x$ is a vector of dimension 3072, $W$ is a 2-by-3072 matrix, and $b$ is a vector of dimension 2.
Put $x'_i = x_i$ for $i \leqslant 3072$ and $x'_{3073} = 1$, which gives a vector $x'$ of dimension 3073. Put $W'_{i,j} = W_{i,j}$ for $i \leqslant 2,\ j \leqslant 3072$, and $W'_{i,3073}=b_i$ for $i \leqslant 2$.
In other words, we simply append a 1 to the vector $x$ and append the vector $b$ to the matrix $W$ on the right.
Now note that the identity $Wx+b=W'x'$ holds exactly. From here on we forget about the vector $b$ and work with a single 2-by-3073 matrix whose entries have to be estimated. Below we write $W' = W$ and $x' = x$.
Gradient descent follows the rule $W_{k+1} = W_k - \eta_k \nabla L(W_k)$, where $\eta_k$ is the step size and $L$ is the $\text{loss}$ function. So we need the gradient of $L$, i.e. its partial derivatives with respect to all 6146 variables.
Recall how $L$ is defined. Let $y$ denote a vector of the form $(1, 0)$ or $(0, 1)$, with the 1 in position $k$, where $k - 1$ is the class of the image at hand. The dimension of $y$ is 2, and the vector $y$ encodes the answer for that image.
Then
$$ L(W) = -y_1 \ln \frac{e^{(Wx)_1}}{e^{(Wx)_1} + e^{(Wx)_2}} - y_{2} \ln \frac{e^{(Wx)_{2}}}{e^{(Wx)_1} + e^{(Wx)_2}} + \frac{\lambda}{2} \sum_{i=1}^{2} \sum_{j=1}^{3073} W_{i,j}^2. $$
The last sum is the so-called regulariser. With many features (we have 6146 of them), logistic regression may overfit. By adding all the parameters to the $\text{loss}$ we avoid unnatural solutions in which some parameters are tiny and others huge, because large parameters blow up the regulariser while the loss is being minimised. A reasonable solution therefore becomes more likely.
This is described in the Machine Learning course by Stanford University, week 3, lesson 2. Link: https://www.coursera.org/learn/machine-learning/lecture/4BHEy/regularized-logistic-regression.
Now we compute the derivative with respect to $W_{i,j}$: $$
\frac{dL(W)}{dW_{i,j}} =
-y_1 \frac{e^{(Wx)_1} + e^{(Wx)_2}}{e^{(Wx)_1}} \cdot
\frac{-e^{(Wx)_1} e^{(Wx)_i} x_j}
{(e^{(Wx)_1} + e^{(Wx)_2})^2}
- y_{2} \frac{e^{(Wx)_1} + e^{(Wx)_2}}{e^{(Wx)_{2}}} \cdot
\frac{-e^{(Wx)_{2}} e^{(Wx)_i} x_j}
{(e^{(Wx)_1} + e^{(Wx)_2})^2} \\
- y_i \frac{e^{(Wx)_1} + e^{(Wx)_2}}{e^{(Wx)_i}} \cdot
\frac{e^{(Wx)_i} x_j (e^{(Wx)_1} + e^{(Wx)_2})}
{(e^{(Wx)_1} + e^{(Wx)_2})^2}
+ \lambda W_{i,j}. $$
Simplifying a little: $$
\frac{dL(W)}{dW_{i,j}} =
\frac{ x_j e^{(Wx)_i} (y_1 + y_2) }
{e^{(Wx)_1} + e^{(Wx)_2}}
- y_i x_j
+ \lambda W_{i,j}. $$
Simplifying further, we arrive at the final answer: $$
\frac{dL(W)}{dW_{i,j}} = \left( \frac{e^{(Wx)_i}}{e^{(Wx)_1} + e^{(Wx)_2}} - y_i \right) x_j
+ \lambda W_{i,j}.
$$
Accordingly, for $j = 3073$, i.e. when differentiating with respect to the variables $W_{1, 3073} = b_1, \ldots, W_{2, 3073} = b_2$, the factor in front of the bracket is simply 1.
Let us move on to the implementation.
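As a quick sanity check of the derived formula (this snippet is not part of the original notebook and assumes the softmax, loss and gradients functions defined above), one can compare the analytic gradient with a central finite-difference approximation on a tiny random problem:
rng = np.random.RandomState(0)
W_chk = rng.randn(2, 11) * 0.01                          # small random weights: 10 features + bias column
x_chk = np.hstack([rng.randn(5, 10), np.ones((5, 1))])   # 5 samples with the appended 1
y_chk = np.eye(2)[rng.randint(0, 2, size=5)]             # one-hot labels
l_chk, eps = 0.04, 1e-6
analytic = gradients(W_chk, x_chk, y_chk, l_chk)
numeric = np.zeros_like(W_chk)
for i in range(W_chk.shape[0]):
    for j in range(W_chk.shape[1]):
        Wp, Wm = W_chk.copy(), W_chk.copy()
        Wp[i, j] += eps; Wm[i, j] -= eps
        numeric[i, j] = (loss(y_chk, softmax(Wp, x_chk), Wp, l_chk) -
                         loss(y_chk, softmax(Wm, x_chk), Wm, l_chk)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))  # should be very small if the formula is right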
End of explanation
# Append the 1 to the samples.
nx_test = np.hstack([x_test, np.ones(x_test.shape[0]).reshape(x_test.shape[0], 1)])
probabilities = softmax(W, nx_test) # Compute the probabilities.
recognized = np.argmax(probabilities, axis=1) # Predicted classes.
answers = np.argmax(y_test, axis=1) # Ground-truth classes.
np.sqrt(np.mean((recognized - answers) ** 2)) # The error itself.
Explanation: Compute the mean squared error on the test split to get a rough idea of the score on Kaggle.
End of explanation
# Append the 1 to the samples.
ntest = np.hstack([test, np.ones(test.shape[0]).reshape(test.shape[0], 1)])
probabilities = softmax(W, ntest) # Compute the probabilities.
ress = np.argmax(probabilities, axis=1).reshape(-1, 1) # Predicted classes.
# It remains to put everything into a table so it can be written to csv.
ids = np.arange(ress.size).reshape(-1, 1)
submit = np.hstack([ids, ress])
# Fill in the csv file.
import csv
with open('submission.csv', 'w', newline='') as csvfile:
submission = csv.writer(csvfile, delimiter=',')
submission.writerow(['id', 'res'])
submission.writerows(submit)
Explanation: Now we apply the estimated weight matrix to the evaluation data.
End of explanation |
11,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Exo from 'Intro to Pandas'
Step1: we decide to only take the 'confirmed' cases and not the suspected or probable ones since 'suspected' and 'probable' are very subjective terms and may not be the same over the 3 countries.
Step2: A problem is that the columnames and the variables are not the same over the 3 countries. So we harmonize it somewhat.
Step3: Handle missing data
Step4: Then we can replace all NaN values with 0. We don't know anything about that data to put something else, and removing is no option since there would not be much left if we removed all rows/cols that contain at least one NaN
Step5: not all values are numerical (most are objects)
Step6: make all types numerical (excluding the date and variable columns)
Step7: Now we can summ over all cities and store it in a 'Total' column.
Note that all countries have a 'National' or 'total' column, but they are inconsistent with the sumed values in each city, so we ignore it.
Step8: Show the data
Step9: Move the variables into the columns
Step10: Then we need to merge the data
Step11: add 'country' col to distinguish the dataframes when they are put together
Step12: Concat the dataframes
Step13: And sort the data | Python Code:
# load all data and parse the 'date' column
def load_data():
sl_files=glob.glob('Data/ebola/sl_data/*.csv')
guinea_files=glob.glob('Data/ebola/guinea_data/*.csv')
liberia_files=glob.glob('Data/ebola/liberia_data/*.csv')
sl = pd.concat((pd.read_csv(file, parse_dates=['date']) for file in sl_files), ignore_index=True)
guinea = pd.concat((pd.read_csv(file , parse_dates=['Date']) for file in guinea_files), ignore_index=True)
liberia = pd.concat((pd.read_csv(file , parse_dates=['Date']) for file in liberia_files), ignore_index=True)
return (sl, guinea, liberia)
(sl, guinea, liberia) = load_data()
# look at the sl data
sl.columns
sl['variable'].unique()
Explanation: Advanced Exo from 'Intro to Pandas'
End of explanation
sl_variables_to_use = ['new_confirmed', 'death_confirmed']
# look at the guinea data
guinea.columns
guinea['Description'].unique()
guinea_variables_to_use = ['New cases of confirmed', 'New deaths registered today (confirmed)']
# look at the liberia data
liberia.columns
liberia['Variable'].unique()
liberia_variables_to_use = ['New case/s (confirmed)', 'Total death/s in confirmed cases']
def select_features(data, var_name, features):
return data[data[var_name].isin(features)]
# take the relevant variables
sl_relevant = select_features(sl, 'variable', sl_variables_to_use)
guinea_relevant = select_features(guinea, 'Description', guinea_variables_to_use)
liberia_relevant = select_features(liberia, 'Variable', liberia_variables_to_use)
Explanation: we decide to only take the 'confirmed' cases and not the suspected or probable ones since 'suspected' and 'probable' are very subjective terms and may not be the same over the 3 countries.
End of explanation
# rename the columns
var_name = 'vars'
sl_relevant.rename(columns={'variable': var_name}, inplace=True)
guinea_relevant.rename(columns={'Description': var_name, 'Date': 'date'}, inplace=True)
liberia_relevant.rename(columns={'Variable': var_name, 'Date': 'date'}, inplace=True)
#rename the variables
new_infected = 'new_infected'
new_death= 'new_death'
sl_relevant[var_name][sl_relevant[var_name] == sl_variables_to_use[0]] = new_infected
sl_relevant[var_name][sl_relevant[var_name] == sl_variables_to_use[1]] = new_death
guinea_relevant[var_name][guinea_relevant[var_name] == guinea_variables_to_use[0]] = new_infected
guinea_relevant[var_name][guinea_relevant[var_name] == guinea_variables_to_use[1]] = new_death
liberia_relevant[var_name][liberia_relevant[var_name] == liberia_variables_to_use[0]] = new_infected
liberia_relevant[var_name][liberia_relevant[var_name] == liberia_variables_to_use[1]] = new_death
# rename the data
sl_clean = sl_relevant.copy()
guinea_clean = guinea_relevant.copy()
liberia_clean = liberia_relevant.copy()
Explanation: A problem is that the column names and the variables are not the same across the 3 countries, so we harmonize them somewhat.
End of explanation
#remove al rows and columns that consist only of NaNs
def remove_rows_and_cols_with_only_nan(data):
return data.dropna(axis=1, how='all').dropna(axis=0, thresh=3)
sl_clean = remove_rows_and_cols_with_only_nan(sl_clean)
guinea_clean = remove_rows_and_cols_with_only_nan(guinea_clean)
liberia_clean = remove_rows_and_cols_with_only_nan(liberia_clean)
Explanation: Handle missing data
End of explanation
# replace all NaNs with 0 (inplace)
sl_clean.fillna(value=0, inplace=True)
guinea_clean.fillna(value=0, inplace=True)
liberia_clean.fillna(value=0, inplace=True)
Explanation: Then we can replace all NaN values with 0. We don't know enough about the data to use anything else, and dropping is not an option since not much would be left if we removed every row or column that contains at least one NaN
End of explanation
sl_clean.dtypes
Explanation: not all values are numerical (most are objects)
End of explanation
def change_to_numeric(data):
col_list = list(data.columns)
col_list.remove('date')
col_list.remove(var_name)
data[col_list] = data[col_list].apply(pd.to_numeric)
change_to_numeric(sl_clean)
change_to_numeric(guinea_clean)
change_to_numeric(liberia_clean)
Explanation: make all types numerical (excluding the date and variable columns)
End of explanation
# create a total colon
def add_and_fill_total_col(data, ignore_cols_list):
col_list = list(data.columns)
for c in ignore_cols_list:
col_list.remove(c)
data['total'] = data[col_list].sum(axis=1)
add_and_fill_total_col(sl_clean, ['date', var_name, 'National'])
add_and_fill_total_col(guinea_clean, ['date', var_name, 'Totals'])
add_and_fill_total_col(liberia_clean, ['date', var_name, 'National'])
# remove unused cols:
sl_clean = sl_clean[['date', var_name, 'total']]
guinea_clean = guinea_clean[['date', var_name, 'total']]
liberia_clean = liberia_clean[['date', var_name, 'total']]
#rename data again
sl_final = sl_clean.copy()
liberia_final = liberia_clean.copy()
guinea_final = guinea_clean.copy()
Explanation: Now we can sum over all cities and store the result in a 'Total' column.
Note that all countries have a 'National' or 'total' column, but it is inconsistent with the summed per-city values, so we ignore it; the sketch below shows how this could be checked.
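A minimal way this inconsistency could be checked (not part of the original exercise; it assumes it is run before the unused columns are dropped, while sl_clean still has the per-city columns and 'National'):
city_cols = [c for c in sl_clean.columns if c not in ('date', var_name, 'National')]
check = sl_clean[city_cols].sum(axis=1) - sl_clean['National']
print('rows where per-city sum != National:', (check != 0).sum())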
End of explanation
liberia_final.head()
guinea_final.head()
sl_final.head()
Explanation: Show the data
End of explanation
# create infected and death cols
def create_inf_death_cols(data):
inf = data[data['vars'] == new_infected]
inf[new_infected] = inf['total']
death = data[data['vars'] == new_death]
death[new_death] = death['total']
res = data.join(inf[new_infected], how='outer')
return res.join(death[new_death], how='outer')
sl_final = create_inf_death_cols(sl_final)
liberia_final = create_inf_death_cols(liberia_final)
guinea_final = create_inf_death_cols(guinea_final)
sl_final.head()
# remove vars & total col
sl_final = sl_final.drop(var_name, 1).drop('total', 1)
liberia_final = liberia_final.drop(var_name, 1).drop('total', 1)
guinea_final = guinea_final.drop(var_name, 1).drop('total', 1)
sl_final.head()
Explanation: Move the variables into the columns
End of explanation
# group by date to merge the cols
liberia_final = liberia_final.groupby('date', as_index=False).sum()
sl_final = sl_final.groupby('date', as_index=False).sum()
guinea_final = guinea_final.groupby('date', as_index=False).sum()
Explanation: Then we need to merge the data
End of explanation
sl_final['country'] = 'sl'
guinea_final['country'] = 'guinea'
liberia_final['country'] = 'liberia'
guinea_final.head()
liberia_final.head()
sl_final.head()
Explanation: add 'country' col to distinguish the dataframes when they are put together
End of explanation
final_data = pd.concat([sl_final, guinea_final, liberia_final], ignore_index=True)
Explanation: Concat the dataframes
End of explanation
final_data.sort_values(by='date').set_index(['date', 'country'])
Explanation: And sort the data:
End of explanation |
11,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find torch implementation of this notebook here
Step1: Get data
We use a binarized version of MNIST.
Step2: Training
Step3: We use add-one smoothing for class conditional Bernoulli distributions.
Step4: Testing | Python Code:
import numpy as np
try:
import torchvision
except ModuleNotFoundError:
%pip install -qq torchvision
import torchvision
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
!mkdir figures # for saving plots
key = jax.random.PRNGKey(1)
# helper function to show images
def show_images(imgs, num_rows, num_cols, titles=None, scale=1.5):
# modified from https://raw.githubusercontent.com/d2l-ai/d2l-en/master/d2l/torch.py
figsize = (num_cols * scale, num_rows * scale)
_, axes = plt.subplots(num_rows, num_cols, figsize=figsize)
axes = axes.flatten()
for i, (ax, img) in enumerate(zip(axes, imgs)):
img = np.array(img)
ax.imshow(img)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
if titles:
ax.set_title(titles[i])
return axes
Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/09/naive_bayes_mnist_torch.ipynb
<a href="https://colab.research.google.com/github/arpitvaghela/probml-notebooks/blob/main/notebooks-d2l/naive_bayes_mnist_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Naive Bayes classifiers
We show how to implement Naive Bayes classifiers from scratch.
We use binary features and the 10 digit classes.
Based on sec 18.9 of http://d2l.ai/chapter_appendix-mathematics-for-deep-learning/naive-bayes.html.
End of explanation
mnist_train = torchvision.datasets.MNIST(
root="./temp",
train=True,
transform=lambda x: jnp.array([jnp.array(x) / 255]),
download=True,
)
mnist_test = torchvision.datasets.MNIST(
root="./temp",
train=False,
transform=lambda x: jnp.array([jnp.array(x) / 255]),
download=True,
)
print(mnist_train)
image, label = mnist_train[2]
print(type(image))
print(image.shape)
print(type(label))
print(label)
image[0, 15:20, 15:20] # not binary (pytorch rescales to 0:1)
[jnp.min(image), jnp.max(image)]
print(mnist_train[0][0].shape) # (1,28,28)
indices = [0, 1]
xx = jnp.stack([mnist_train[i][0] for i in indices])
print(xx.shape)
xx = jnp.stack([mnist_train[i][0] for i in indices], axis=1)
print(xx.shape)
xx = jnp.stack([mnist_train[i][0] for i in indices], axis=1).squeeze(0)
print(xx.shape)
# convert from torch.tensor to numpy, extract subset of indices, optionally binarize
def get_data(data, indices=None, binarize=True):
N = len(data)
if indices is None:
indices = range(0, N)
X = jnp.stack([data[i][0] for i in indices], axis=1).squeeze(0) # (N,28,28)
if binarize:
X = X > 0.5
y = jnp.array([data[i][1] for i in indices])
return X, y
indices = range(0, 10)
images, labels = get_data(mnist_train, indices, False)
print([images.shape, labels.shape])
print(images[0, 15:20, 15:20]) # not binary
_ = show_images(images, 1, 10)
indices = range(0, 10)
images, labels = get_data(mnist_train, indices, True)
print([images.shape, labels.shape])
print(images[0, 15:20, 15:20]) # binary
_ = show_images(images, 1, 10)
X_train, y_train = get_data(mnist_train)
X_test, y_test = get_data(mnist_test)
print(X_train.shape)
print(type(X_train))
print(X_train[0, 15:20, 15:20])
Explanation: Get data
We use a binarized version of MNIST.
End of explanation
n_y = jnp.zeros(10)
for y in range(10):
n_y = n_y.at[y].set((y_train == y).sum())
P_y = n_y / n_y.sum()
P_y
# Training set is not equally balanced across classes...
print(jnp.unique(y_train))
from collections import Counter
cnt = Counter(np.asarray(y_train))
print(cnt.keys())
print(cnt.values())
Explanation: Training
End of explanation
n_x = jnp.zeros((10, 28, 28))
for y in range(10):
n_x = n_x.at[y].set(X_train[y_train == y].sum(axis=0))
# using pseudo counts of 1
# P_xy = (n_x + 1) / (n_y + 1).reshape(10, 1, 1)
P_xy = (n_x + 1) / (n_y + 2).reshape(10, 1, 1)
print(P_xy.shape)
print(type(P_xy))
show_images(P_xy, 1, 10)
plt.tight_layout()
plt.savefig("nbc_mnist_centroids.pdf", dpi=300)
Explanation: We use add-one smoothing for class conditional Bernoulli distributions.
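For a single pixel this is just the Laplace-adjusted frequency; a tiny illustration with hypothetical counts (not from the original notebook):
count_on = 3       # times the pixel was 1 within a class (hypothetical numbers)
n_class = 10       # images of that class
print(count_on / n_class)              # raw estimate: 0.3
print((count_on + 1) / (n_class + 2))  # smoothed estimate used above: ~0.33, never exactly 0 or 1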
End of explanation
log_P_xy = jnp.log(P_xy)
log_P_xy_neg = jnp.log(1 - P_xy)
log_P_y = jnp.log(P_y)
def bayes_pred_stable(x):
# x = x.unsqueeze(0) # (28, 28) -> (1, 28, 28)
x = jnp.expand_dims(x, 0) # (28, 28) -> (1, 28, 28)
p_xy = log_P_xy * x + log_P_xy_neg * (1 - x) # select the 0 and 1 pixels
p_xy = p_xy.reshape(10, -1).sum(axis=1) # p(x|y)
return p_xy + log_P_y
def predict(X):
return jnp.array([jnp.argmax(bayes_pred_stable(x)) for x in X])
# image, label = mnist_test[0]
image = X_test[0]
label = y_test[0]
py = bayes_pred_stable(image)
print(py)
print("ytrue ", label, "yhat ", np.argmax(py))
print(predict([image]))
indices = range(0, 10)
X1, y1 = get_data(mnist_test, indices, True)
preds = predict(X1)
show_images(X1, 1, 10, titles=[str(d) for d in preds])
plt.tight_layout()
plt.savefig("nbc_mnist_preds.pdf", dpi=300)
indices = range(5, 10)
X1, y1 = get_data(mnist_test, indices, True)
preds = predict(X1)
show_images(X1, 1, 5, titles=[str(d) for d in preds])
plt.tight_layout()
plt.savefig("nbc_mnist_preds.pdf", dpi=300)
indices = range(30, 40)
X1, y1 = get_data(mnist_test, indices, True)
preds = predict(X1)
_ = show_images(X1, 1, 10, titles=[str(d) for d in preds])
preds = predict(X_test)
float(jnp.count_nonzero(preds == y_test)) / len(y_test) # test accuracy
Explanation: Testing
End of explanation |
11,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>
<img src="http
Step1:
Step2: know the difference
Let ${y_{n}}{n\in\mathbb{N}}$ be a sequence, where $y{n}=f(n)$ for some function $f$. Assume that each coefficient $y_{n}$ is not known; on the contrary, assume that there exists a known sequence ${g_{n}}{n\in\mathbb{N}}$ which satisfies
Step3: $(x)_{i}$ application
Step4: ${{n}\choose{k}}$ application
Step5: Previous thm holds by the following argument.
$$
\begin{split}
\Delta{{x}\choose{j}} = {{x+1}\choose{j}} - {{x}\choose{j}} &= \frac{(x+1){j}}{(j){j}}-\frac{(x){j}}{(j){j}}\
&= \frac{ (x+1)x\cdots(x-j+2) -x\cdots(x-j+2)(x-j+1) }{(j){j}}\
&= \frac{ x\cdots(x-j+2)(x+1 -(x-j+1)) }{(j){j}}\
&= \frac{ x\cdots(x-j+2) }{(j-1)!} = \frac{ (x){(j-1)} }{(j-1){(j-1)}} = {{x}\choose{j-1}}\
\end{split}
$$
therefore, to find $\Delta^{-1}{{x}\choose{j}}$ we are required to provide a term $t_{x}$ such that application of $\Delta$ to it yields ${{x}\choose{j}}$. So choose $t_{x}={{x}\choose{j+1}}$, according to above identity.
Step6: powers
in $\mathbb{R}$
In $\mathbb{R}$ the $n$-th power of the symbol $x$ satisfies
Step7: The following identity links the two kinds of powers
Step8: Proof. By induction on $n$.
Base case $n=1$, so $x = \mathcal{S}{1,1}(x){1}=x$, which holds.
Assume the theorem true for $n$ and show for $n+1$, so
Step9: $\mathbb{R} \rightarrow \mathbb{N}$, via Stirling numbers of the first type
It is possible to revert the previous argument and find a characterization for $(x){i}$ using powers $x^{i}$ as follows
Step10: Stirling's matrices of numbers are inverses the one of the other, namely
Step11: therefore their product yields the identity matrix | Python Code:
from sympy import *
from sympy.abc import n, i, N, x, k, y
init_printing()
%run src/commons.py
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
<small>
<br>October 22, 2016: Horner method, binomial anti-difference
<br>October 9, 2016: computing sums, Stirling numbers, thms coding
<br>October 7, 2016: operators and powers theory
</small>
</div>
</p>
<br>
<div align="center">
<b>Abstract</b><br>
In this document we collect notes and exercises for the first chapter of the course.
</div>
End of explanation
delta, antidifference, I, E, f = symbols(r'\Delta \Delta^{-1} I E f', cls=Function) # combinators
I_eval_rule = define(I(x), x) # identity
E_eval_rule = define(E(x),x+1) # forward shift
I_eval_rule, E_eval_rule
delta_eval_rule = define(delta(f(x)), f(x+1)-f(x),) # forward difference def
delta_pow_rule = define(delta(f(x))**k, delta(delta(f(x))**(k-1)),) # repeated applications of differentiation
delta_eval_rule, delta_pow_rule
delta_EI_conv = define(delta(x), E(x)-I(x)) # conversion via combinators
delta_EI_conv
def rewrite(eq, rule, **kwds):
return eq.replace(query=rule.lhs, value=rule.rhs, **kwds)
s = Wild('s')
rewrite(delta_eval_rule, define(f, Lambda([s], ff(s,n))))
f_wild = WildFunction('f')
D=define(delta(f_wild**k), delta(delta(f_wild**(k-1))))
rewrite(delta_pow_rule, D)
class ForwardDifference(Function):
def _latex(self, *_):
if len(self.args) < 2:
func, args = self.args[0].func, self.args[0].args
v = args[0]
else:
func, args = self.args[0].func, self.args[0].args
v = self.args[1]
D = Function(r'\Delta_{{{}}}'.format(latex(v)))
expr = D(func(*args))
return latex(expr)
def doit(self):
if len(self.args) < 2:
func, args = self.args[0].func, self.args[0].args
v = args[0]
else:
func, args = self.args[0].func, self.args[0].args
v = self.args[1]
return func(*map(lambda a: a.subs({v:v+1}, simulataneous=True),args)) - func(*args)
def _e2val_power(self, k):
if k.is_Number:
return ForwardDifference(Pow(ForwardDifference(self.args), k-1, evaluate=False))
else:
return super()._eval_power(k)
Explanation:
End of explanation
g = IndexedBase('g')
n = IndexedBase('n')
f = Function('f')
anti_difference = Function('\Delta^{-1}')
def accept_replacing(thm_ctor):
def replacing(subs=lambda *args: {}, **kwds):
weq, variables = thm_ctor(**kwds)
mapping = subs(*variables) if callable(subs) else subs
for k,v in mapping.items():
weq = weq.replace(k, v, simultaneous=True)
return weq, [mapping.get(v, v) for v in variables]
return replacing
@accept_replacing
def summation_antidifference_thm():
(n, sup), inf = symbols('n N'), IndexedBase('n')[0]
eq = Eq(Sum(g[n], (n, inf, sup-1)),
Subs(anti_difference(g[n]), n, sup) -
Subs(anti_difference(g[n]), n, inf))
return eq, (g, n, inf, sup)
@accept_replacing
def antidifference_of_ff_thm():
(n, i), w = symbols('n i'), IndexedBase('w')
eq = Eq(anti_difference(ff(n, i)), ff(n, i+1)/(i+1)+w[n])
return eq, (x, n, w)
@accept_replacing
def antidifference_of_binomial_thm():
(n, k), w = symbols('n k'), IndexedBase('w')
eq = Eq(anti_difference(binomial(n, k)), binomial(n, k+1)+w[n])
return eq, (x, n, w)
@accept_replacing
def constant_sequence_thm():
variables = w, i, j = IndexedBase('w'), *symbols('i j')
eq = Eq(w[i], w[j])
return eq, (w, i, j)
def doit(thm, lhs=True, rhs=True):
eq, *variables = thm
return Eq(eq.lhs.doit() if lhs else eq.lhs, eq.rhs.doit() if rhs else eq.rhs), variables
def rewrite(thm, rule, include_rule_vars=False):
eq, *rest = thm
try:
rw, *others = rule # so, `rule` can be a thm too
except:
rw, *others = rule, []
augmented = []
augmented.extend(*rest)
if include_rule_vars: augmented.extend(*others)
return eq.replace(rw.lhs, rw.rhs, simultaneous=True), augmented
thm = eq, (g, n, inf, sup) = summation_antidifference_thm()
thm
Explanation: know the difference
Let ${y_{n}}{n\in\mathbb{N}}$ be a sequence, where $y{n}=f(n)$ for some function $f$. Assume that each coefficient $y_{n}$ is not known; on the contrary, assume that there exists a known sequence ${g_{n}}{n\in\mathbb{N}}$ which satisfies:
$$
\begin{equation}
\Delta y{n} = y_{n+1}-y_{n}=g_{n}
\end{equation}
$$
By finite summation on both sides:
$$
\sum_{n=n_{0}}^{N-1}{\Delta y_{n}} = \sum_{n=n_{0}}^{N-1}{g_{n}}
$$
many terms in the lhs disappear, so:
$$
y_{N}-y_{n_{0}} = \sum_{n=n_{0}}^{N-1}{g_{n}}
$$
therefore, if the initial term $y_{n_{0}}$ is given, we can compute any term $y_{N}$ by:
$$
y_{N} = y_{n_{0}} + \sum_{n=n_{0}}^{N-1}{g_{n}}
$$
since each term $g_{n}$ is known by hypothesis and the summation can be done.
a little generalization
Consider an additional known sequence ${p_{n}}_{n\in\mathbb{N}}$ and we're required to find a solution for the equation $ y_{n+1} = p_{n}y_{n} + g_{n} $. So define a helper sequence ${P_{n}}_{n\in\mathbb{N}}$ such that $P_{n_{0}}=1$ and $P_{n}=p_{n-1}P_{n-1}$, therefore $P_{n}=\prod_{k=n_{0}}^{n-1}{p_{k}}$ holds by induction. Now study the following:
$$
\frac{y_{n+1}}{P_{n+1}} = \frac{p_{n}y_{n}}{P_{n+1}} + \frac{g_{n}}{P_{n+1}}
= \frac{y_{n}}{P_{n}} + \frac{g_{n}}{P_{n+1}}
$$
calling $z_{n}=\frac{y_{n}}{P_{n}}$ and $q_{n}=\frac{g_{n}}{P_{n+1}}$, it yields: $z_{n+1} = z_{n}+q_{n}$, with initial condition $z_{n_{0}} = \frac{y_{n_{0}}}{P_{n_{0}}} = y_{n_{0}}$. So we've a recurrence in a simpler form, whose solution is a sequence ${z_{n}}_{n\in\mathbb{N}}$ such that $z_{n} = z_{n_{0}} + \sum_{i=n_{0}}^{n-1}{q_{i}}$. By backward substitution:
$$
\begin{split}
\frac{y_{n}}{P_{n}} &= y_{n_{0}} + \sum_{i=n_{0}}^{n-1}{\frac{g_{i}}{P_{i+1}}} \
y_{n} &= P_{n}y_{n_{0}} + \sum_{i=n_{0}}^{n-1}{\frac{P_{n}g_{i}}{P_{i+1}}} \
y_{n} &= \left(\prod_{k=n_{0}}^{n-1}{p_{k}}\right)y_{n_{0}} +
\sum_{i=n_{0}}^{n-1}{\left(\prod_{k=i+1}^{n-1}{p_{k}}\right)g_{i}} \
\end{split}
$$
which is the closed form for coefficients of solution sequence ${y_{n}}_{n\in\mathbb{N}}$.
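A quick numerical sanity check of this closed form (an illustrative sketch that is not part of the original notes; the sequences p and g below are arbitrary examples):
import numpy as np
p = np.array([2., 3., 5., 7.])          # p_0 .. p_3, arbitrary
g = np.array([1., 1., 2., 3.])          # g_0 .. g_3, arbitrary
y0, n0 = 4., 0
# iterate the recurrence y_{n+1} = p_n*y_n + g_n
y = [y0]
for pn, gn in zip(p, g):
    y.append(pn*y[-1] + gn)
# closed form for y_4
n = 4
closed = np.prod(p[n0:n])*y0 + sum(np.prod(p[i+1:n])*g[i] for i in range(n0, n))
print(y[n], closed)   # the two values coincide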
Horner method
Let $p\in\Pi_{n}$ be a polynomial over coefficients ${b_{n}\in\mathbb{C}}_{n\in\mathbb{N}}$, defined as $p(x)=\sum_{i=0}^{n}{b_{i}x^{n-i}}$. Define the difference equation $y_{i} = xy_{i-1} + b_{i}$, for $i\in\{1,\ldots,n\}$, with initial condition $y_{0}=b_{0}$; therefore, $y_{n}=p(x)$ holds.
In order to see this, recognize that we have a recurrence of the last form where $p_{i}=x$ and $g_{i}=b_{i+1}$ for all $i$ in the domain, therefore its solution has the generic coefficient $y_{n}$ which satisfies:
$$
y_{n} = \left(\prod_{k=n_{0}}^{n-1}{x}\right)b_{0} +
\sum_{i=n_{0}}^{n-1}{\left(\prod_{k=i+1}^{n-1}{x}\right)b_{i+1}}
= x^{n}b_{0} + \sum_{i=0}^{n-1}{x^{n-1-(i+1)+1}b_{i+1}} =
= x^{n}b_{0} + \sum_{i=1}^{n}{x^{n-i}b_{i}} = \sum_{i=0}^{n}{x^{n-i}b_{i}} = p(x)
$$
as required.
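The same recurrence, written as a plain Horner evaluation (an illustrative sketch, not part of the original notes):
def horner(bs, x):
    '''Evaluate p(x) = sum(b_i * x**(n-i)) via the recurrence y_i = x*y_{i-1} + b_i.'''
    y = bs[0]
    for b in bs[1:]:
        y = x*y + b
    return y

bs = [2, -1, 0, 5]   # arbitrary coefficients b_0..b_3
print(horner(bs, 3), sum(b*3**(len(bs)-1-i) for i, b in enumerate(bs)))  # both give p(3)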
$\Delta$ operator relations
On the other hand, assume that no initial condition $y_{n_{0}}$ is given, we obtain $y_{n}$ on the lhs by application of the anti-difference operator $\Delta^{-1}$ on the left in both members $\Delta y_{n}=g_{n}$ so
$y_{n} = \Delta^{-1}g_{n}$. Use this identity as rewriting rule and apply it to the former equation, obtaining $\Delta\Delta^{-1}g_{n}=g_{n}$, therefore the relation $\Delta\Delta^{-1}=I$ on operators holds.
Moreover, let ${w_{n}}_{n\in\mathbb{N}}$ be a constant sequence, so we can augment:
$$
y_{n} = \Delta^{-1}g_{n} + w_{n} = \Delta^{-1}\Delta y_{n} + w_{n}
$$
because $\Delta w_{n}=0$, therefore the relation $\Delta^{-1}\Delta = I - K$, where $K$ is the constant operator, holds.
computing sums via $\Delta^{-1}$
Let $g_{n}=\Delta y_{n}$ and assume to not have a closed formula in $n$ for coefficients $y_{n}$, but to know that $y_{n}=\Delta^{-1}g_{n}$ holds. Apply summation on both members and manipulating on the rhs:
$$
\sum_{n=n_{0}}^{N-1}{g_{n}} = \sum_{n=n_{0}}^{N-1}{\Delta y_{n}}
= y_{N}-y_{n_{0}} = y_{n} \big|{n{0}}^{N} = \Delta^{-1}g_{n} \big|{n{0}}^{N} = \Delta^{-1}g_{n} \big|{n=N} - \Delta^{-1}g{n} \big|{n=n{0}}
$$
therefore, if we have an unknown sequence ${g_{n}}_{n\in\mathbb{N}}$, which has a closed form of
$\Delta^{-1}g_{n}$ as a term which supports substitution of the symbol $n$, then the sum $\sum_{n=n_{0}}^{N-1}{g_{n}}$ can be easily computed by a difference, as done in the fundamental theorem of calculus.
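For instance (an illustrative check, not in the original notes), for $g_{n}=n$ the anti-difference is $n(n-1)/2$ up to a constant, and the telescoping rule reproduces the direct sum:
from sympy import symbols, Sum, simplify
n, N = symbols('n N', positive=True, integer=True)
antidiff = n*(n - 1)/2                      # Delta of this term is exactly n
direct = Sum(n, (n, 0, N - 1)).doit()
via_difference = antidiff.subs(n, N) - antidiff.subs(n, 0)
print(simplify(direct - via_difference))    # 0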
End of explanation
local_thm = Eq(g[n], ff(n,i)), (g, n, i)
local_thm
inst_thm = rewrite(thm, local_thm)
inst_thm
ant_ff_thm = antidifference_of_ff_thm(subs={})
ant_ff_thm
ready_thm = eq, *_ = rewrite(inst_thm, ant_ff_thm)
ready_thm
done_thm = doit(ready_thm, lhs=False)
done_thm
Explanation: $(x)_{i}$ application
End of explanation
local_thm = Eq(g[n], binomial(n,k)), (g, n, k)
local_thm
inst_thm = rewrite(thm, local_thm)
inst_thm
ant_binomial_thm = antidifference_of_binomial_thm(subs={})
ant_binomial_thm
Explanation: ${{n}\choose{k}}$ application
End of explanation
ready_thm = eq, *_ = rewrite(inst_thm, ant_ff_thm)
ready_thm
done_thm = doit(ready_thm, lhs=False)
done_thm
ff(x+1,i)-ff(x,i)
Explanation: Previous thm holds by the following argument.
$$
\begin{split}
\Delta{{x}\choose{j}} = {{x+1}\choose{j}} - {{x}\choose{j}} &= \frac{(x+1)_{j}}{(j)_{j}}-\frac{(x)_{j}}{(j)_{j}}\\
&= \frac{ (x+1)x\cdots(x-j+2) -x\cdots(x-j+2)(x-j+1) }{(j)_{j}}\\
&= \frac{ x\cdots(x-j+2)(x+1 -(x-j+1)) }{(j)_{j}}\\
&= \frac{ x\cdots(x-j+2) }{(j-1)!} = \frac{ (x)_{(j-1)} }{(j-1)_{(j-1)}} = {{x}\choose{j-1}}\\
\end{split}
$$
therefore, to find $\Delta^{-1}{{x}\choose{j}}$ we are required to provide a term $t_{x}$ such that application of $\Delta$ to it yields ${{x}\choose{j}}$. So choose $t_{x}={{x}\choose{j+1}}$, according to above identity.
End of explanation
from sympy.functions.combinatorial.numbers import stirling
Explanation: powers
in $\mathbb{R}$
In $\mathbb{R}$ the $n$-th power of the symbol $x$ satisfies:
$$
\begin{split}
x^{0}&=1 \\
x \neq 0 &\rightarrow x^{-n}=\frac{1}{x^{n}}\\
\frac{\partial x^{n}}{\partial{x}} &= n x^{n-1} \\
\frac{\partial^{-1} x^{n}}{\partial{x}} = \int x^{n}\partial x &= \frac{x^{n+1}}{n+1}+c
\end{split}
$$
for some $c\in\mathbb{R}$. In $\mathbb{N}$ its counterpart is the falling factorial function in the variable $x$ defined as:
$$(x)_{n} = \underbrace{x(x-1)(x-2)\cdots(x-n+1)}_{n\text{ terms}}$$
in $\mathbb{N}$
We apply operator $\Delta$ to derive an identity about forward differences of $(x)_{n}$:
$$
\begin{split}
\Delta (x)_{n} &= (x+1)_{n} - (x)_{n} \\
&= (x+1)x(x-1)\cdots(x-n+2) - x(x-1)\cdots(x-n+2)(x-n+1) \\
&= (x)_{(n-1)}(x+1 -(x-n+1)) \\
&= n(x)_{(n-1)}
\end{split}
$$
Previous identity allows us to recover the anti-difference of $(x)_{n}$: it requires to find a sequence ${g_{n}}_{n\in\mathbb{N}}$ such that $\Delta g_{n} = (x)_{n}$, namely $$g_{n}=\frac{(x)_{(n+1)}}{n+1}+w_{n}=\Delta^{-1}(x)_{n}$$ where ${w_{n}}_{n\in\mathbb{N}}$ is a constant sequence.
Moreover, in order to provide corresponding identities for the left ones, we reason according to:
$$
(x){m+n} = \underbrace{x(x-1)\cdots(x-m+1)}{(x){m}}\underbrace{(x-m)(x-m-1)\cdots(x-m-n+1)}{(x-m){n}}
$$
substitution $m=0$ yields $(x){n}=(x){0}(x){n}$ therefore $(x){0}=1$. On the other hand, substitution $m=-n$ yields $(x){0}=(x){-n}(x+n){n}$. So:
$$
(x+n){n} \neq 0 \rightarrow (x){-n} = \frac{1}{(x+n)_{n}} = \frac{1}{(x+n)(x+n-1)\cdots(x+1)}
$$
requiring $x\not\in{-1, -2, \ldots, -n}$.
properties
$(x)_{n}$ is monic polynomial of degree $n$ with roots ${0, 1, \ldots, n-1}$
$\Delta(x)_{n}\in\Pi_{n-1}$
$\Delta^{-1}(x)_{n}\in\Pi_{n+1}$
$k < j \rightarrow (k)_{j} = k(k-1)\cdots(k-k)\cdots(k-j+1)=0$
$(k)_{k} = k(k-1)\cdots(k-(k-1)+1)(k-k+1)=k!$
$(k)_{k}=(k)_{(k-1)}$
and, finally:
$$\frac{(k)_{j}}{(j)_{j}} = \frac{k(k-1)\cdots(k-j+1)}{j!}=\frac{k!}{j!(k-j)!}={{k}\choose{j}}$$
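These identities are easy to verify with sympy's ff, already imported above (an illustrative check, not part of the original notes):
from sympy import symbols, ff, simplify
x = symbols('x')
print(simplify(ff(x + 1, 3) - ff(x, 3) - 3*ff(x, 2)))   # Delta (x)_3 - 3*(x)_2  ->  0
print(ff(5, 5), ff(5, 4))                                # both equal 5! = 120, i.e. (k)_k = (k)_(k-1)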
$\mathbb{R} \leftarrow \mathbb{N}$, via Stirling numbers of the second type
End of explanation
stirling_matrix_second_kink = Matrix(11,11,lambda i, j: stirling(i,j, kind=2, signed=False))
stirling_matrix_second_kink
m = Mul(stirling_matrix_second_kink, Matrix(11,1,lambda i, _: ff(x, i, evaluate=False)),evaluate=False)
#.applyfunc(lambda i: i.as_poly(x).as_expr())
Eq(m, Matrix(11,1,lambda i, _: x**i), evaluate=False)
_.lhs.doit()
Explanation: The following identity links the two kinds of powers:
$$
x^{n} = \sum_{i=1}^{n}{\mathcal{S}_{n,i} (x)_{i}}
$$
where coefficients $\mathcal{S}{n,i}$ are Stirling's numbers of the second kind, defined according to the following recurrence relation $\mathcal{S}{n+1, i} = \mathcal{S}{n, i-1} + i\mathcal{S}{n, i}$, for $i\in{2,\ldots,n}$, with initial conditions $\mathcal{S}{n, 1} = \mathcal{S}{n, n} = 1$.
In the following matrix we report the upper chunk of the infinite matrix generated by the recurrence relation; for the sake of clarity, according to Python indexing which is zero-based, we include the very first row and column, which yields $\mathcal{S}_{0, 0}=1$ and $0$ everywhere else.
End of explanation
def power_summation_thm():
n, S, (N, x, k, i) = IndexedBase('n'), IndexedBase('\mathcal{S}'), symbols('N x k i')
inf, sup = n[0], N-1
return (Eq(Sum(x**k, (x, inf, sup)), Sum(S[k,i]*((ff(sup+1, i+1)/(i+1))-(ff(inf, i+1)/(i+1))), (i, 1, k))),
[n, S, inf, sup, N, x, k, i])
def expand_Sum(aSumExpr):
generic_sum_coeff, (sum_index, starting_bound, ending_bound) = aSumExpr.args
summands = [generic_sum_coeff.subs(sum_index, n) for n in range(starting_bound, ending_bound+1)]
result = Add(*summands, evaluate=False)
return result
def stirling_row(row, indexed=None, *args, **kwds):
return {indexed[row, i] if indexed else (row,i):stirling(row, i, *args, **kwds) for i in range(row+1)}
def do_powers_summation(power, bottom=1, top=Symbol('n'), expand=True):
eq, (n, S, inf, sup, N, x, k, i) = power_summation_thm()
inst_eq = eq.subs({inf:bottom, k:power, N:top+1}, simultaneous=True)
rhs = expand_Sum(inst_eq.rhs)
rhs = rhs.subs(stirling_row(power, S)).factor() if expand else rhs
return Eq(inst_eq.lhs, rhs)
power_summation_thm()
do_powers_summation(power=1, bottom=1)
do_powers_summation(power=2, bottom=1)
do_powers_summation(power=3, bottom=1)
do_powers_summation(power=3, bottom=1, expand=False)
ten_powers = do_powers_summation(power=10)
ten_powers
ten_powers.replace(Symbol('n'),20).doit()
Explanation: Proof. By induction on $n$.
Base case $n=1$, so $x = \mathcal{S}_{1,1}(x)_{1}=x$, which holds.
Assume the theorem true for $n$ and show for $n+1$, so:
$$
\begin{split}
x^{n+1} = x\cdot x^{n} &= x\sum_{i=1}^{n}{\mathcal{S}_{n,i} (x)_{i}}
= \sum_{i=1}^{n}{\mathcal{S}_{n,i} (x-i+i) (x)_{i}}
= \sum_{i=1}^{n}{\mathcal{S}_{n,i}\left(\underbrace{(x-i) (x)_{i}}_{(x)_{(i+1)}} + i (x)_{i} \right)}\\
&= \sum_{i=2}^{n+1}{\mathcal{S}_{n,i-1} (x)_{i}} + \sum_{i=1}^{n}{i \mathcal{S}_{n,i} (x)_{i}}
= \underbrace{\mathcal{S}_{n,1}}_{\mathcal{S}_{n+1,1}} (x)_{1} +
\sum_{i=2}^{n}{\underbrace{\left(\mathcal{S}_{n,i-1}+i\mathcal{S}_{n,i}\right)}_{\mathcal{S}_{n+1,i}} (x)_{i}} +
\underbrace{\mathcal{S}_{n,n}}_{\mathcal{S}_{n+1,n+1}} (x)_{n+1}
= \sum_{i=1}^{n+1}{\mathcal{S}_{n+1,i} (x)_{i}}
\end{split}
$$
as required. $\blacksquare$
The just proved identity allows us to easily compute summations of the form $\sum_{x=n_{0}}^{N-1}{x^{k}}$, for some $k\in\mathbb{N}$. One way to do this is to use the result seen some cells above, where it is required to know $\Delta^{-1}x^{k}$:
$$
\sum_{x=n_{0}}^{N-1}{x^{k}} = \Delta^{-1}x^{k} \big|_{x=N} - \Delta^{-1}x^{k} \big|_{x=n_{0}}
$$
but such anti-difference is unknown. Therefore put the last identity in:
$$
\sum_{x=n_{0}}^{N-1}{x^{k}} = \sum_{x=n_{0}}^{N-1}{\sum_{i=1}^{k}{\mathcal{S}_{k,i} (x)_{i}}} =
\sum_{i=1}^{k}{\mathcal{S}_{k,i}\sum_{x=n_{0}}^{N-1}{ (x)_{i}}} = \sum_{i=1}^{k}{\mathcal{S}_{k,i}\left(\frac{{\left(N\right)}_{\left(i + 1\right)}}{i + 1} - \frac{{\left(n_{0}\right)}_{\left(i + 1\right)}}{i + 1}\right)}
$$
where we recognize $\sum_{n=n_{0}}^{N - 1} {\left(n\right)}_{i} = w_{N} - w_{n_{0}} + \frac{{\left(N\right)}_{\left(i + 1\right)}}{i + 1} - \frac{{\left(n_{0}\right)}_{\left(i + 1\right)}}{i + 1}$ where $w_{N}-w_{n_{0}}=0$ since ${w_{n}}_{n\in\mathbb{N}}$ is a constant sequence.
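A brute-force check of the same computation done above for $k=10$ and $N-1=20$ (not part of the original notes):
print(sum(x**10 for x in range(1, 21)))   # should match ten_powers evaluated at n = 20 above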
applications
End of explanation
stirling_matrix_first_kink = Matrix(11,11,lambda i, j: stirling(i,j, kind=1, signed=True))
stirling_matrix_first_kink
Explanation: $\mathbb{R} \rightarrow \mathbb{N}$, via Stirling numbers of the first type
It is possible to revert the previous argument and find a characterization for $(x){i}$ using powers $x^{i}$ as follows:
$$
\sum{i=1}^{n}{\mathcal{s}{n,i} x^{i}} = (x){n}
$$
where coefficients $\mathcal{s}_{n,i}$ are Stirling's numbers of the first kind, tabulated in the following matrix:
End of explanation
stirling_matrix_second_kink**(-1)
Explanation: Stirling's matrices of numbers are inverses the one of the other, namely:
End of explanation
stirling_matrix_second_kink*stirling_matrix_first_kink
Explanation: therefore their product yields the identity matrix:
End of explanation |
11,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cloud Computing
Basics
What is Cloud Computing?
On-demand services, delivered over the network.
Relevant Services
Step1: Setting up a Cloud Service
Step2: Create Storage Account
Step3: Working with Containers
Step4: Working with Blobs
Step5: Cleaning Up
Step6: Miscellaneous
Material
Introduction to Microsoft Azure Workshop
Videos
Introduction to Microsoft Azure
Microsoft Azure Data Center
Resources
Cloud Academy
AWS Educate
Formatting | Python Code:
# standard library
import os
import time
import shutil
# Load Python SDK
from azure import *
from azure.servicemanagement import *
from azure.storage import *
# Subscription details
subscription_id = '1a61650c-ada5-4173-a8da-2a4ffcfab747'
certificate_path = 'mycert.pem'
# Initialize connection
sms = ServiceManagementService(subscription_id, certificate_path)
Explanation: Cloud Computing
Basics
What is Cloud Computing?
On-demand services, delivered over the network.
Relevant Services:
Infrastructure Services
Virtual Machines
Data Services
Storage Accounts
Why in Economic Research?
Big Compute
Estimation
Simulation
Perturbation
Big Data
Storage
Collection
Collaboration
Providers
Google Compute Engine: https://cloud.google.com
Amazon Web Services: https://aws.amazon.com
Microsoft Azure: http://azure.microsoft.com
Microsoft Azure
Resources
Microsoft Azure: http://azure.microsoft.com
Python Development: http://azure.microsoft.com/python
Microsoft Research: http://research.microsoft.com/azure
Training Events
Online Tutorials
Webinars
Award Program
<img src="images/ManagementPortal.png">
Tutorials
Virtual Machines
What to expect?
Creating
Microsoft Azure Management Portal
Accessing and Working
Basics of SSH
Storage Accounts
What to expect?
Creating
Microsoft Azure Management Portal
Managing
Microsoft Azure SDK for Python
Microsoft Azure SDK for Python
Set of Python packages that make it easy to access the Microsoft Azure components such as Service Management, Storage, and Service Bus.
Website: https://github.com/Azure/azure-sdk-for-python
<img src="images/BlobServiceConcepts.png">
Using the Microsoft Azure SDK for Python
This tutorial builds on the material provided on several websites of the Microsoft Azure ecosystem. Additional tutorials and documentation is avilable at the Python Developer Center
First of all, we need to obtain access to our account using Management Certificates.
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
End of explanation
cs_name, cs_label = 'softEcon', 'softEcon'
cs_desc, cs_location = 'Cloud Service for this lecture.', 'West US'
sms.create_hosted_service(cs_name, cs_label, cs_desc, cs_location)
Explanation: Setting up a Cloud Service
End of explanation
sa_name, sa_label = 'lecture2', 'lecture2'
sa_desc, sa_location = 'Storage Account for this lecture.', 'West US'
sms.create_storage_account(sa_name, sa_desc, sa_label, location=sa_location)
# Get key
sa_key = None
while sa_key is None:
try:
sa_key = sms.get_storage_account_keys(sa_name).storage_service_keys.primary
except WindowsAzureMissingResourceError:
pass
time.sleep(5)
Explanation: Create Storage Account
End of explanation
#Initialize connection to storate account
blob_service = BlobService(sa_name, sa_key)
# Specify container
cn_names = ['movies', 'pictures']
# Create containers
for cn_name in cn_names:
# Public
blob_service.create_container(cn_name, x_ms_blob_public_access='container')
Explanation: Working with Containers
End of explanation
# Create directory structure
DOWNLOAD_DIRECTORY, UPLOAD_DIRECTORY = 'downloads/', 'uploads/'
os.mkdir(UPLOAD_DIRECTORY), os.mkdir(DOWNLOAD_DIRECTORY)
# Create some data
bb_names = ['mov1.avi', 'img1.jpg', 'img2.jpg']
for bb_name in bb_names:
file_name = UPLOAD_DIRECTORY + bb_name
with open(file_name, 'w') as file_:
file_.write('This is some data.')
# Select container.
def get_container(bb_name):
''' Get appropriate container for file.
'''
# Sort by file extension
if 'avi' in bb_name:
cn_name = 'movies'
elif 'jpg' in bb_name:
cn_name = 'pictures'
else:
raise AssertionError
# Finishing
return cn_name
# Upload Blob into container
for bb_name in bb_names:
# Select container
cn_name = get_container(bb_name)
# Upload file to container
file_path = UPLOAD_DIRECTORY + bb_name
blob_service.put_block_blob_from_path(cn_name, bb_name, file_path)
# List content in container
blobs = blob_service.list_blobs(cn_name)
# Download Blob from container
for bb_name in bb_names:
# Select container
cn_name = get_container(bb_name)
# Construct file name
file_path = DOWNLOAD_DIRECTORY + bb_name
# Download Blob
blob_service.get_blob_to_path(cn_name, bb_name, file_path)
Explanation: Working with Blobs
End of explanation
# Delete Blobs
for bb_name in bb_names:
cn_name = get_container(bb_name)
blob_service.delete_blob(cn_name, bb_name)
# Deleting container
for cn_name in cn_names:
blob_service.delete_container(cn_name)
# Delete storage account
sms.delete_storage_account(sa_name)
# Delete cloud service
sms.delete_hosted_service(cs_name)
# Delete directories
shutil.rmtree(UPLOAD_DIRECTORY), shutil.rmtree(DOWNLOAD_DIRECTORY)
Explanation: Cleaning Up
End of explanation
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1K5apRH').read())
Explanation: Miscellaneous
Material
Introduction to Microsoft Azure Workshop
Videos
Introduction to Microsoft Azure
Microsoft Azure Data Center
Resources
Cloud Academy
AWS Educate
Formatting
End of explanation |
11,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGeM
Tutorial 1
Step1: We need to read a parameters file. If does not exist the FFDParameters() class creates a default prm file that you have to edit for your problem specifications.
Step2: The following is the parameters file for the case at hand. In particular, if you look at the Box info section, there is 3-by-2-by-2 lattice around an emisphere. Then we rotate it by 10 deg around y and 15 deg around z axis. In Parameters weights we just change the weights of the control point with indices 2, 0, 0.
[Box info]
# This section collects all the properties of the FFD bounding box.
# n control points indicates the number of control points in each direction (x, y, z).
# For example, to create a 2 x 3 x 2 grid, use the following
Step3: This is the original geometry
Step4: Finally we can actually perform the FFD with the freeform module. The inputs are the parameters and the points positions we extracted before.
Step5: Now we can write the modified stl file with the new points coordinates.
Step6: And here we have the final plot | Python Code:
%matplotlib inline
import pygem as pg
Explanation: PyGeM
Tutorial 1: Free Form Deformation on a sphere in stl file format
In this tutorial we will show the typical workflow. In particular we are going to parse the parameters file for the FFD, read an stl file of a sphere, perform the FFD and write the results on a new stl file.
First of all we just import pygem package and we set matplotlib for the notebook.
End of explanation
params = pg.params.FFDParameters()
params.read_parameters(filename='../tests/test_datasets/parameters_test_ffd_sphere.prm')
Explanation: We need to read a parameters file. If it does not exist, the FFDParameters() class creates a default prm file that you have to edit for your problem specifications.
End of explanation
stl_handler = pg.stlhandler.StlHandler()
mesh_points = stl_handler.parse('../tests/test_datasets/test_sphere.stl')
Explanation: The following is the parameters file for the case at hand. In particular, if you look at the Box info section, there is a 3-by-2-by-2 lattice around a hemisphere. Then we rotate it by 10 deg around the y axis and 15 deg around the z axis. In Parameters weights we just change the weights of the control point with indices 2, 0, 0.
[Box info]
# This section collects all the properties of the FFD bounding box.
# n control points indicates the number of control points in each direction (x, y, z).
# For example, to create a 2 x 3 x 2 grid, use the following: n control points: 2, 3, 2
n control points x: 3
n control points y: 2
n control points z: 2
# box lenght indicates the length of the FFD bounding box along the three canonical directions (x, y, z).
# It uses the local coordinate system.
# For example to create a 2 x 1.5 x 3 meters box use the following: lenght box: 2.0, 1.5, 3.0
box lenght x: 45.0
box lenght y: 90.0
box lenght z: 90.0
# box origin indicates the x, y, and z coordinates of the origin of the FFD bounding box. That is center of
# rotation of the bounding box. It corresponds to the point coordinates with position [0][0][0].
# See section "Parameters weights" for more details.
# For example, if the origin is equal to 0., 0., 0., use the following: origin box: 0., 0., 0.
box origin x: -20.0
box origin y: -55.0
box origin z: -45.0
# rotation angle indicates the rotation angle around the x, y, and z axis of the FFD bounding box in degrees.
# The rotation is done with respect to the box origin.
# For example, to rotate the box by 2 deg along the z direction, use the following: rotation angle: 0., 0., 2.
rotation angle x: 0
rotation angle y: 10
rotation angle z: 15
[Parameters weights]
# This section describes the weights of the FFD control points.
# We adopt the following convention:
# For example with a 2x2x2 grid of control points we have to fill a 2x2x2 matrix of weights.
# If a weight is equal to zero you can discard the line since the default is zero.
| x index | y index | z index | weight |
|---------|---------|---------|--------|
| 0 | 0 | 0 | 1.0 |
| 0 | 1 | 1 | 0.0 | --> you can erase this line without effects
| 0 | 1 | 0 | -2.1 |
| 0 | 0 | 1 | 3.4 |
# parameter x collects the displacements along x, normalized with the box lenght x.
parameter x: 2 0 0 0.8
# parameter y collects the displacements along y, normalized with the box lenght y.
parameter y: 2 0 0 -0.5
# parameter z collects the displacements along z, normalized with the box lenght z.
parameter z: 2 0 0 -1.0
Now we have to load the stl file on which we will perform the FFD. Since it is an stl file we use the StlHandler() class. The parse method extracts the point coordinates without touching the topology of the geometry.
End of explanation
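Before deforming anything it can help to sanity-check what was parsed. The snippet below is a minimal sketch and assumes, as the tutorial suggests, that mesh_points is an (n_points, 3) NumPy array of x, y, z coordinates.
print(mesh_points.shape)                                  # number of points and coordinates per point (assumed (n_points, 3))
print(mesh_points.min(axis=0), mesh_points.max(axis=0))   # rough bounding box of the sphere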
stl_handler.plot(plot_file='../tests/test_datasets/test_sphere.stl')
Explanation: This is the original geometry:
End of explanation
free_form = pg.freeform.FFD(params, mesh_points)
free_form.perform()
new_mesh_points = free_form.modified_mesh_points
Explanation: Finally we can actually perform the FFD with the freeform module. The inputs are the parameters and the point positions we extracted before.
End of explanation
stl_handler.write(new_mesh_points, 'test_sphere_mod.stl')
Explanation: Now we can write the modified stl file with the new point coordinates.
End of explanation
stl_handler.plot(plot_file='test_sphere_mod.stl')
Explanation: And here we have the final plot:
End of explanation |
11,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<br><p style="font-family
Step1: This is a really large dataset, at least in terms of the number of rows. But with 6 columns, what does this hold?
Step2: Looks like it has different indicators for different countries with the year and value of the indicator.
How many UNIQUE country names are there ?
Step3: Are there same number of country codes ?
Step4: Are there many indicators or few ?
Step5: How many years of data do we have ?
Step6: What's the range of years?
Step7: <p style="font-family
Step8: Let's see how emissions have changed over time using MatplotLib
Step9: Turns out emissions per capita have dropped a bit over time, but let's make this graphic a bit more appealing before we continue to explore it.
Step10: Using Histograms to explore the distribution of values
We could also visualize this data as a histogram to better explore the ranges of values in CO2 production per year.
Step11: So the USA has many years where it produced between 19-20 metric tons per capita with outliers on either side.
But how do the USA's numbers relate to those of other countries?
Step12: For how many countries do we have CO2 per capita emissions data in 2011
Step13: So the USA, at ~18 CO2 emissions (metric tons per capital) is quite high among all countries.
An interesting next step, which we'll save for you, would be to explore how this relates to other industrialized nations and to look at the outliers with those values in the 40s!
<p style="font-family
Step14: So although we've seen a decline in the CO2 emissions per capita, it does not seem to translate to a decline in GDP per capita
ScatterPlot for comparing GDP against CO2 emissions (per capita)
First, we'll need to make sure we're looking at the same time frames
Step15: We have 3 extra years of GDP data, so let's trim those off so the scatterplot has equal length arrays to compare (this is actually required by scatterplot)
Step16: This doesn't look like a strong relationship. We can test this by looking at correlation.
Step17: A correlation of 0.07 is pretty weak, but you'll learn more about correlation in the next course.
You could continue to explore this to see if other countries have a closer relationship between CO2 emissions and GDP. Perhaps it is stronger for developing countries?
Want more ?
Matplotlib Examples Library
http | Python Code:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
data = pd.read_csv(r'C:\Users\hrao\Documents\Personal\HK\Python\world-development-indicators\Indicators.csv')
data.shape
Explanation: <br><p style="font-family: Arial; font-size:3.75em;color:purple; font-style:bold">
Matplotlib: Exploring <br> <br> <br>Data Visualization</p><br><br>
<br><br><center><h1 style="font-size:2em;color:#2467C0">World Development Indicators</h1></center>
<br>
<table>
<col width="550">
<col width="450">
<tr>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/4/46/North_South_divide.svg" align="middle" style="width:550px;height:360px;"/></td>
<td>
This week, we will be using an open dataset from <a href="https://www.kaggle.com">Kaggle</a>. It is <a href="https://www.kaggle.com/worldbank/world-development-indicators">The World Development Indicators</a> dataset obtained from the World Bank containing over a thousand annual indicators of economic development from hundreds of countries around the world.
<br>
<br>
This is a slightly modified version of the original dataset from <a href="http://data.worldbank.org/data-catalog/world-development-indicators">The World Bank</a>
<br>
<br>
List of the <a href="https://www.kaggle.com/benhamner/d/worldbank/world-development-indicators/indicators-in-data">available indicators</a> and a <a href="https://www.kaggle.com/benhamner/d/worldbank/world-development-indicators/countries-in-the-wdi-data">list of the available countries</a>.
</td>
</tr>
</table>
Step 1: Initial exploration of the Dataset
End of explanation
data.head(10)
Explanation: This is a really large dataset, at least in terms of the number of rows. But with 6 columns, what does this hold?
End of explanation
countries = data['CountryName'].unique().tolist()
len(countries)
Explanation: Looks like it has different indicators for different countries with the year and value of the indicator.
How many UNIQUE country names are there ?
End of explanation
# How many unique country codes are there ? (should be the same #)
countryCodes = data['CountryCode'].unique().tolist()
len(countryCodes)
Explanation: Are there same number of country codes ?
End of explanation
# How many unique indicators are there ? (should be the same #)
indicators = data['IndicatorName'].unique().tolist()
len(indicators)
Explanation: Are there many indicators or few ?
End of explanation
# How many years of data do we have ?
years = data['Year'].unique().tolist()
len(years)
Explanation: How many years of data do we have ?
End of explanation
print(min(years)," to ",max(years))
Explanation: What's the range of years?
End of explanation
# select CO2 emissions for the United States
hist_indicator = 'CO2 emissions \(metric'
hist_country = 'USA'
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['CountryCode'].str.contains(hist_country)
# stage is just those indicators matching the USA for country code and CO2 emissions over time.
stage = data[mask1 & mask2]
stage.head()
Explanation: <p style="font-family: Arial; font-size:2.5em;color:blue; font-style:bold">
Matplotlib: Basic Plotting, Part 1</p>
<br>
Lets pick a country and an indicator to explore: CO2 Emissions per capita and the USA
End of explanation
# get the years
years = stage['Year'].values
# get the values
co2 = stage['Value'].values
# create
plt.bar(years,co2)
plt.show()
Explanation: Let's see how emissions have changed over time using MatplotLib
End of explanation
# switch to a line plot
plt.plot(stage['Year'].values, stage['Value'].values)
# Label the axes
plt.xlabel('Year')
plt.ylabel(stage['IndicatorName'].iloc[0])
#label the figure
plt.title('CO2 Emissions in USA')
# to make it more honest, start the y axis at 0
plt.axis([1959, 2011,0,25])
plt.show()
Explanation: Turns out emissions per capita have dropped a bit over time, but let's make this graphic a bit more appealing before we continue to explore it.
End of explanation
# If you want to just include those within one standard deviation fo the mean, you could do the following
# lower = stage['Value'].mean() - stage['Value'].std()
# upper = stage['Value'].mean() + stage['Value'].std()
# hist_data = [x for x in stage[:10000]['Value'] if x>lower and x<upper ]
# Otherwise, let's look at all the data
hist_data = stage['Value'].values
print(len(hist_data))
# the histogram of the data
plt.hist(hist_data, 10, normed=False, facecolor='green')
plt.xlabel(stage['IndicatorName'].iloc[0])
plt.ylabel('# of Years')
plt.title('Histogram Example')
plt.grid(True)
plt.show()
Explanation: Using Histograms to explore the distribution of values
We could also visualize this data as a histogram to better explore the ranges of values in CO2 production per year.
End of explanation
# select CO2 emissions for all countries in 2011
hist_indicator = 'CO2 emissions \(metric'
hist_year = 2011
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['Year'].isin([hist_year])
# apply our mask
co2_2011 = data[mask1 & mask2]
co2_2011.head()
Explanation: So the USA has many years where it produced between 19-20 metric tons per capita with outliers on either side.
But how do the USA's numbers relate to those of other countries?
End of explanation
print(len(co2_2011))
# let's plot a histogram of the emmissions per capita by country
# subplots returns a touple with the figure, axis attributes.
fig, ax = plt.subplots()
ax.annotate("USA",
xy=(18, 5), xycoords='data',
xytext=(18, 30), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
plt.hist(co2_2011['Value'], 10, normed=False, facecolor='green')
plt.xlabel(stage['IndicatorName'].iloc[0])
plt.ylabel('# of Countries')
plt.title('Histogram of CO2 Emissions Per Capita')
#plt.axis([10, 22, 0, 14])
plt.grid(True)
plt.show()
Explanation: For how many countries do we have CO2 per capita emissions data in 2011
End of explanation
# select GDP Per capita emissions for the United States
hist_indicator = 'GDP per capita \(constant 2005'
hist_country = 'USA'
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['CountryCode'].str.contains(hist_country)
# stage is just those indicators matching the USA for country code and CO2 emissions over time.
gdp_stage = data[mask1 & mask2]
#plot gdp_stage vs stage
gdp_stage.head(2)
stage.head(2)
# switch to a line plot
plt.plot(gdp_stage['Year'].values, gdp_stage['Value'].values)
# Label the axes
plt.xlabel('Year')
plt.ylabel(gdp_stage['IndicatorName'].iloc[0])
#label the figure
plt.title('GDP Per Capita USA')
# to make more honest, start they y axis at 0
#plt.axis([1959, 2011,0,25])
plt.show()
Explanation: So the USA, at ~18 CO2 emissions (metric tons per capita), is quite high among all countries.
An interesting next step, which we'll save for you, would be to explore how this relates to other industrialized nations and to look at the outliers with those values in the 40s!
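As a small, hedged sketch of that next step, reusing the co2_2011 DataFrame built above, the outlier countries in the 40s can be listed directly:
# Countries emitting more than 40 metric tons of CO2 per capita in 2011
outliers = co2_2011[co2_2011['Value'] > 40]
print(outliers[['CountryName', 'Value']].sort_values('Value', ascending=False))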
<p style="font-family: Arial; font-size:2.0em;color:blue; font-style:bold">
Matplotlib: Basic Plotting, Part 2</p>
Relationship between GDP and CO2 Emissions in USA
End of explanation
print("GDP Min Year = ", gdp_stage['Year'].min(), "max: ", gdp_stage['Year'].max())
print("CO2 Min Year = ", stage['Year'].min(), "max: ", stage['Year'].max())
Explanation: So although we've seen a decline in the CO2 emissions per capita, it does not seem to translate to a decline in GDP per capita
ScatterPlot for comparing GDP against CO2 emissions (per capita)
First, we'll need to make sure we're looking at the same time frames
End of explanation
gdp_stage_trunc = gdp_stage[gdp_stage['Year'] < 2012]
print(len(gdp_stage_trunc))
print(len(stage))
%matplotlib inline
import matplotlib.pyplot as plt
fig, axis = plt.subplots()
# Grid lines, Xticks, Xlabel, Ylabel
axis.yaxis.grid(True)
axis.set_title('CO2 Emissions vs. GDP (per capita)', fontsize=10)
axis.set_xlabel(gdp_stage_trunc['IndicatorName'].iloc[0],fontsize=10)
axis.set_ylabel(stage['IndicatorName'].iloc[0],fontsize=10)
X = gdp_stage_trunc['Value']
Y = stage['Value']
axis.scatter(X, Y)
plt.show()
Explanation: We have 3 extra years of GDP data, so let's trim those off so the scatterplot has equal length arrays to compare (this is actually required by scatterplot)
End of explanation
np.corrcoef(gdp_stage_trunc['Value'],stage['Value'])
Explanation: This doesn't look like a strong relationship. We can test this by looking at correlation.
End of explanation
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
Explanation: A correlation of 0.07 is pretty weak, but you'll learn more about correlation in the next course.
You could continue to explore this to see if other countries have a closer relationship between CO2 emissions and GDP. Perhaps it is stronger for developing countries?
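One way to follow up on that, sketched here with a hypothetical helper that reuses the same indicator masks as above (the country code 'CHN' is only an illustrative choice):
def gdp_co2_correlation(country_code):
    # Sketch: correlation between CO2 emissions per capita and GDP per capita for any country
    co2 = data[data['IndicatorName'].str.contains('CO2 emissions \(metric') &
               data['CountryCode'].str.contains(country_code)]
    gdp = data[data['IndicatorName'].str.contains('GDP per capita \(constant 2005') &
               data['CountryCode'].str.contains(country_code)]
    merged = co2.merge(gdp, on='Year', suffixes=('_co2', '_gdp'))  # keep only overlapping years
    return np.corrcoef(merged['Value_co2'], merged['Value_gdp'])[0, 1]
print(gdp_co2_correlation('CHN'))  # swap in any CountryCode of interest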
Want more ?
Matplotlib Examples Library
http://matplotlib.org/examples/index.html
End of explanation |
11,783 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm trying to create a 2-dimensional array in Scipy/Numpy where each value represents the Manhattan distance from the center. It's supposed to have the same shape as the first two dimensions of a 3-dimensional array (an image, created via scipy.misc.fromimage). | Problem:
import numpy as np
from scipy.spatial import distance
shape = (6, 6)
xs, ys = np.indices(shape)
xs = xs.reshape(shape[0] * shape[1], 1)
ys = ys.reshape(shape[0] * shape[1], 1)
X = np.hstack((xs, ys))
mid_x, mid_y = (shape[0]-1)/2.0, (shape[1]-1)/2.0
result = distance.cdist(X, np.atleast_2d([mid_x, mid_y]), 'minkowski', p=1).reshape(shape) |
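For reference, a minimal pure-NumPy alternative (no SciPy call) that should give the same Manhattan-distance grid under the same assumptions:
xs, ys = np.indices(shape)
result_np = np.abs(xs - mid_x) + np.abs(ys - mid_y)  # |dx| + |dy| from the center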
11,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CORDEX ESGF submission form
General Information
Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https
Step1: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
Step2: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission
Step3: Requested general information
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
Step4: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
Step5: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
Step6: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
Step7: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
Step8: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
Step9: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
Step10: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http
Step11: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http
Step12: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
Step13: Give the path where the data reside, for example
Step14: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
Step15: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
Step16: Variable list
list of variables submitted -- please remove the ones you do not provide
Step17: Check your submission form
Please evaluate the following cell to check your submission form.
In case of errors, please go up to the corresponding information cells and update your information accordingly.
Step18: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
Step19: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications | Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
Explanation: CORDEX ESGF submission form
General Information
Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https://verc.enes.org/data/projects/documents/cordex-archive-design)
Thus file names have to follow the pattern:<br />
VariableName_Domain_GCMModelName_CMIP5ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />
Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
The directory structure in which these files are stored follow the pattern:<br />
activity/product/Domain/Institution/
GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/
RCMModelName/RCMVersionID/Frequency/VariableName <br />
Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
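As an informal illustration of this naming pattern (plain string handling only, not part of the official submission tooling), the example file name can be split into its components:
fname = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
labels = ["VariableName", "Domain", "GCMModelName", "CMIP5ExperimentName",
          "CMIP5EnsembleMember", "RCMModelName", "RCMVersionID", "Frequency", "StartTime-EndTime"]
for label, part in zip(labels, fname[:-len(".nc")].split("_")):
    print(label + ": " + part)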
Notice: If your model is not yet registered, please contact contact [email protected]
specifying: Full institution name, Short institution name (acronym), Contact person and
e-mail, RCM Name (acronym), Terms of Use (unrestricted or non-commercial only) and the CORDEX domains in which you are interested.
At some CORDEX ESGF data centers a 'data submission form' is in use in order to improve initial information exchange between data providers and the data center. The form has to be filled before the publication process can be started. In case you have questions please contact the individual data centers:
o at DKRZ: [email protected]
o at SMHI: [email protected]
End of explanation
MY_LAST_NAME = "aa" # e.g. MY_LAST_NAME = "schulz"
#-------------------------------------------------
from dkrz_forms import form_handler, form_widgets, checks
form_info = form_widgets.check_pwd(MY_LAST_NAME)
sfg = form_handler.init_form(form_info)
sf = sfg.sub.entity_out.report
Explanation: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then pressing the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
End of explanation
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
Explanation: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
Explanation: Requested general information
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
End of explanation
sf.institute_id = "..." # example: sf.institute_id = "AWI"
Explanation: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
End of explanation
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
Explanation: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed of the 'institute_id' followed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
End of explanation
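An optional, minimal consistency check of that convention (a plain string assertion, not part of the official form handling):
# model_id must start with institute_id followed by a dash
assert sf.model_id.startswith(sf.institute_id + "-"), "model_id must begin with '<institute_id>-'"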
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
Explanation: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
End of explanation
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
Explanation: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
End of explanation
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
Explanation: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
End of explanation
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
End of explanation
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
Explanation: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk.
'QC2' refers to the quality checker developed at DKRZ.
If your answer is 'other' give some informations.
End of explanation
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
Explanation: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf
End of explanation
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
Explanation: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
End of explanation
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
Explanation: Give the path where the data reside, for example:
blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string
End of explanation
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
Explanation: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
Explanation: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
End of explanation
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
Explanation: Variable list
list of variables submitted -- please remove the ones you do not provide:
End of explanation
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub.valid_submission = res['valid_submission']
form_handler.DictTable(res)
Explanation: Check your submission form
Please evaluate the following cell to check your submission form.
In case of errors, please go up to the corresponding information cells and update your information accordingly.
End of explanation
form_handler.save_form(sf,"..my comment..") # edit my comment info
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
Explanation: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
Explanation: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications
End of explanation |
11,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Visualization
Step1: DataFrame.plot
Step2: We can select which plot we want to use by passing it into the 'kind' parameter.
Step3: You can also choose the plot kind by using the DataFrame.plot.kind methods instead of providing the kind keyword argument.
kind
Step4: Kernel density estimation plots are useful for deriving a smooth continuous function from a given sample.
Step5: pandas.tools.plotting
Iris flower data set
Step6: Seaborn | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
# see the pre-defined styles provided.
plt.style.available
# use the 'seaborn-colorblind' style
plt.style.use('seaborn-colorblind')
Explanation: Pandas Visualization
End of explanation
np.random.seed(123)
df = pd.DataFrame({'A': np.random.randn(365).cumsum(0),
'B': np.random.randn(365).cumsum(0) + 20,
'C': np.random.randn(365).cumsum(0) - 20},
index=pd.date_range('1/1/2017', periods=365))
df.head()
df.plot(); # add a semi-colon to the end of the plotting call to suppress unwanted output
Explanation: DataFrame.plot
End of explanation
df.plot('A','B', kind = 'scatter');
Explanation: We can select which plot we want to use by passing it into the 'kind' parameter.
End of explanation
# create a scatter plot of columns 'A' and 'C', with changing color (c) and size (s) based on column 'B'
df.plot.scatter('A', 'C', c='B', s=df['B'], colormap='viridis')
ax = df.plot.scatter('A', 'C', c='B', s=df['B'], colormap='viridis')
ax.set_aspect('equal')
df.plot.box();
df.plot.hist(alpha=0.7);
Explanation: You can also choose the plot kind by using the DataFrame.plot.kind methods instead of providing the kind keyword argument.
kind :
- 'line' : line plot (default)
- 'bar' : vertical bar plot
- 'barh' : horizontal bar plot
- 'hist' : histogram
- 'box' : boxplot
- 'kde' : Kernel Density Estimation plot
- 'density' : same as 'kde'
- 'area' : area plot
- 'pie' : pie plot
- 'scatter' : scatter plot
- 'hexbin' : hexbin plot
End of explanation
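As a quick illustration of the method-style API listed above, here are a couple of representative calls on the same DataFrame (a hedged sketch; any of the kinds listed can be used this way):
df.plot.area(stacked=False);               # area plot; stacked=False because the columns cross zero
df.plot.hexbin(x='A', y='B', gridsize=25);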
df.plot.kde();
Explanation: Kernel density estimation plots are useful for deriving a smooth continuous function from a given sample.
End of explanation
iris = pd.read_csv('iris.csv')
iris.head()
pd.tools.plotting.scatter_matrix(iris);
plt.figure()
pd.tools.plotting.parallel_coordinates(iris, 'Name');
Explanation: pandas.tools.plotting
Iris flower data set
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib notebook
np.random.seed(1234)
v1 = pd.Series(np.random.normal(0,10,1000), name='v1')
v2 = pd.Series(2*v1 + np.random.normal(60,15,1000), name='v2')
plt.figure()
plt.hist(v1, alpha=0.7, bins=np.arange(-50,150,5), label='v1');
plt.hist(v2, alpha=0.7, bins=np.arange(-50,150,5), label='v2');
plt.legend();
# plot a kernel density estimation over a stacked barchart
plt.figure()
plt.hist([v1, v2], histtype='barstacked', normed=True);
v3 = np.concatenate((v1,v2))
sns.kdeplot(v3);
plt.figure()
# we can pass keyword arguments for each individual component of the plot
sns.distplot(v3, hist_kws={'color': 'Teal'}, kde_kws={'color': 'Navy'});
sns.jointplot(v1, v2, alpha=0.4);
grid = sns.jointplot(v1, v2, alpha=0.4);
grid.ax_joint.set_aspect('equal')
sns.jointplot(v1, v2, kind='hex');
# set the seaborn style for all the following plots
sns.set_style('white')
sns.jointplot(v1, v2, kind='kde', space=0);
iris = pd.read_csv('iris.csv')
iris.head()
sns.pairplot(iris, hue='Name', diag_kind='kde', size=2);
plt.figure(figsize=(8,6))
plt.subplot(121)
sns.swarmplot('Name', 'PetalLength', data=iris);
plt.subplot(122)
sns.violinplot('Name', 'PetalLength', data=iris);
Explanation: Seaborn
End of explanation |
11,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating Customer Segments
In this project, you will analyze a dataset containing annual spending amounts for internal structure, to understand the variation in the different types of customers that a wholesale distributor interacts with.
Instructions
Step1: Feature Transformation
1) In this section you will be using PCA and ICA to start to understand the structure of the data. Before doing any computations, what do you think will show up in your computations? List one or two ideas for what might show up as the first PCA dimensions, or what type of vectors will show up as ICA dimensions.
Answer
Step2: 2) How quickly does the variance drop off by dimension? If you were to use PCA on this dataset, how many dimensions would you choose for your analysis? Why?
Step3: Answer
Step4: 4) For each vector in the ICA decomposition, write a sentence or two explaining what sort of object or property it corresponds to. What could these components be used for?
Answer
Step5: The customers in first cluster buy Groceries mostly followed by Milk and Detergents.
In second cluster, the customers buy some Fresh food and small quantities of everything else.
Third cluster consists predominantly of customers buying large quantities of Fresh food and very small quantities of everything else.
Fourth cluster customers buy huge quantity of Grocery followed by large quantity of Milk and Detergents. Fresh foods are also their important purchase. | Python Code:
import warnings
warnings.filterwarnings('ignore')
# Import libraries: NumPy, pandas, matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Tell iPython to include plots inline in the notebook
%matplotlib inline
# Read dataset
data = pd.read_csv("wholesale-customers.csv")
print "Dataset has {} rows, {} columns".format(*data.shape)
print data.head() # print the first 5 rows
print data.describe()
data.plot(kind="box")
Explanation: Creating Customer Segments
In this project you, will analyze a dataset containing annual spending amounts for internal structure, to understand the variation in the different types of customers that a wholesale distributor interacts with.
Instructions:
Run each code block below by pressing Shift+Enter, making sure to implement any steps marked with a TODO.
Answer each question in the space provided by editing the blocks labeled "Answer:".
When you are done, submit the completed notebook (.ipynb) with all code blocks executed, as well as a .pdf version (File > Download as).
End of explanation
# TODO: Apply PCA with the same number of dimensions as variables in the dataset
from sklearn.decomposition import PCA
pca = PCA (n_components=2, whiten=True)
# we don't need scaling here because the input data represents
# annual spending in monetary units and hence are on same scale.
Z = pd.DataFrame(pca.fit_transform(data), columns=["PC1", "PC2"])
# Print the components and the amount of variance in the data contained in each dimension
print pca.components_
print pca.explained_variance_ratio_
ax = Z.plot(kind='scatter', x="PC1", y="PC2", figsize=(10, 8), s=1)
loading = pca.components_
for i, (x,y) in enumerate(zip(loading[0], loading[1])):
ax.arrow(0, 0, x,y, width=0.001, fc='orange', ec='orange')
ax.annotate(data.columns[i], (x,y))
ax.set_xlim([-2.5, 1])
ax.set_ylim([-1, 2.5])
Explanation: Feature Transformation
1) In this section you will be using PCA and ICA to start to understand the structure of the data. Before doing any computations, what do you think will show up in your computations? List one or two ideas for what might show up as the first PCA dimensions, or what type of vectors will show up as ICA dimensions.
Answer:
PCA helps us reduce the dimensionality of the data by projecting the higher-dimensional features onto lower dimensions while keeping as much of the variance as possible. Looking at the box plot of the data above, PCA should return a first dimension corresponding to the category "Fresh" and a second dimension corresponding to the category "Grocery".
ICA helps to separate source signals from composite signals. In this problem, the amount of money spent on different product categories by 440 customers of a wholesale distributor is given. The types of vectors resulting from ICA will be indicators for different types of customers of the distributor.
PCA
End of explanation
pca = PCA (n_components=6, whiten=True)
# we don't need scaling here because the input data represents
# annual spending in monetary units and hence are on same scale.
Z = pd.DataFrame(pca.fit_transform(data))
# print the amount of variance explained by each component
variances = pca.explained_variance_ratio_
print variances
np.cumsum(variances)
pd.DataFrame({'pca_1': Z[:][0], 'fresh': data["Fresh"]}).plot(x='pca_1', y='fresh', kind='scatter')
pd.DataFrame({'pca_2': Z[:][1], 'grocery': data["Grocery"]}).plot(x='pca_2', y='grocery', kind='scatter')
Explanation: 2) How quickly does the variance drop off by dimension? If you were to use PCA on this dataset, how many dimensions would you choose for your analysis? Why?
End of explanation
# TODO: Fit an ICA model to the data
# Note: Adjust the data to have center at the origin first!
from sklearn.decomposition import FastICA
from sklearn import preprocessing
# scaling the data to align the mean to 0 and to have unit variance
scaler = preprocessing.StandardScaler()
data_std = pd.DataFrame(scaler.fit_transform(data))
ica = FastICA(n_components=6, random_state=42)
ica.fit(data_std)
# Print the independent components
print ica.components_
Explanation: Answer: Most of the variance is explained by 2 dimensions. For the third dimension, the variance dropped by almost 98% to 0.04. I will use 3 dimensions, because I would like my dimensions to capture atleast 90% of variance in the data.
3) What do the dimensions seem to represent? How can you use this information?
Answer: The dimensions seem to represent the product groups in the decreasing order of importance i.e. the product that is sold the most.
The first dimension output from the PCA algorithm is the loading vector of the first principal component, indicating that the largest share of the variance lies along the product category "Fresh". The component that follows is the loading vector of the second principal component, which indicates that the second-largest variance is along the feature "Grocery". It also has significant contributions from Milk and Detergents.
Also, there seems to be some correlation between Frozen foods and Delicatessen, i.e. customers buying Frozen food are likely to buy Delicatessen. Other product categories with correlation are Milk, Grocery and Detergents_Paper. The product category Fresh doesn't have any correlation with the other product groups. We can use this information to introduce discounted sales on the correlated product categories to maximize sales.
Looking at the biplot and the scatter plots above, we can see that the first principal component has a high negative correlation with Fresh foods (~-0.97), i.e. it captures the fact that there is high variance in Fresh foods and also that the volume of Fresh foods purchased decreases as the first principal component increases. It is therefore a measure of how Fresh foods are less favoured among high-volume customers.
Similarly, the second principal component shows a moderate level of positive correlation (~0.5) with the amount of Grocery bought from the distributor. Also, Grocery is a favourite item among the high-volume buyers, unlike Fresh foods.
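To back up the correlation statements above with numbers, a simple sketch is to look at the pairwise correlations of the original spending columns:
print(data.corr())  # pairwise correlations between product categories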
ICA
End of explanation
# Import clustering modules
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
# TODO: First we reduce the data to two dimensions using PCA to capture variation
from sklearn.decomposition import PCA
pca = PCA(n_components = 2, whiten=True)
reduced_data = pca.fit_transform(data)
print reduced_data[:10] # print upto 10 elements
# TODO: Implement your clustering algorithm here, and fit it to the reduced data for visualization
# The visualizer below assumes your clustering object is named 'clusters'
kmeans = KMeans(n_clusters=4)
clusters = kmeans.fit(reduced_data)
print clusters
# Plot the decision boundary by building a mesh grid to populate a graph.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
hx = (x_max-x_min)/1000.
hy = (y_max-y_min)/1000.
xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))
# Obtain labels for each point in mesh. Use last trained model.
Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()])
# TODO: Find the centroids for KMeans or the cluster means for GMM
centroids = kmeans.cluster_centers_
print centroids
pd.DataFrame(pca.inverse_transform(centroids)).plot(kind = 'bar')
Explanation: 4) For each vector in the ICA decomposition, write a sentence or two explaining what sort of object or property it corresponds to. What could these components be used for?
Answer:
The components can be used for separating out the source signals i.e. the type of customers of the wholesale distributor.
The first component seems to indicate that the customer predominantly sells Frozen foods followed by Grocery. It looks like a Deli store. They do very little or none of the other product categories.
The second component indicates a grocery store. They do predominantly Grocery followed by some Fresh foods and frozen items. They do very little Dairy and other stuff.
The third component indicates a Deli store. They mostly sell fine foods and some detergents and paper. They do little or nothing of the other product categories.
The fourth component indicates a Local store. They do a lot of Groceries and Dairy items. Other product categories are not very prevalent here.
The components output from ICA indicate how each customer weighs different product categories, in terms of, perhaps, the space allocated to them, the kind of marketing done and the general demand. One potential use of this idea for the distributor is to pre-allocate appropriate portions of different product categories and pack them so that they will be ready to be shipped on demand relatively quickly.
References:
http://whatwhy.in/featured/what-is-the-difference-between-a-hyper-market-super-market-departmental-store-and-a-general-store/457/
https://en.wikipedia.org/wiki/Appetizing_store
Note: Above conclusions were derived from looking for the feature with maximum absolute value and then comparing the remaining features relative to the magnitude and sign of the maximum value.
Clustering
In this section you will choose either K Means clustering or Gaussian Mixed Models clustering, which implements expectation-maximization. Then you will sample elements from the clusters to understand their significance.
Choose a Cluster Type
5) What are the advantages of using K Means clustering or Gaussian Mixture Models?
Answer:
K-Means clustering or GMM lets us segregate data points into subgroups within which the data points are mostly similar. In a sense, the advantage is "finding" these subgroups within the data, which helps in exploratory analysis. These subgroups can then be used to study the effect of other variables on them, helping us make effective decisions using the data.
K-means clustering uses a notion of distance between the data points in order to segregate them. GMM assumes that the observed data is a mixture of probability distributions where each distribution is Gaussian (Normal). It then tries to identify subgroups of data based on the probability that a particular data point belongs to a given distribution.
Comparing the KMeans and GMM algorithms, KMeans is scalable with even cluster sizes and works well with flat structures and a medium number of clusters. GMM, on the other hand, is not scalable; it also works with flat geometry and is good for density estimation purposes. GMM is generally fast to learn mixture models but struggles when the number of samples is really high. KMeans is generally good with large-scale data and so is going to be the model of choice here.
Looking at the results above, it seems that there are distinct groups of customers and we can go with the hard assignment of clusters using K-means. GMM does a soft assignment, where a particular data point belongs to different clusters with varying degrees of probability. I feel that k-means will work well in this case.
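For comparison, a minimal GMM sketch on the same 2-D PCA projection, using the GMM class already imported above (older scikit-learn API); this exposes the soft assignments discussed here:
gmm = GMM(n_components=4, random_state=42).fit(reduced_data)
print(gmm.means_)                             # cluster means in PCA space
print(gmm.predict_proba(reduced_data)[:5])    # soft memberships of the first few customers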
6) Below is some starter code to help you visualize some cluster data. The visualization is based on this demo from the sklearn documentation.
End of explanation
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('Clustering on the wholesale grocery dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
# using elbow chart to determine the #clusters
def fitAndScore(num_clusters):
kmeans = KMeans(n_clusters=num_clusters)
clusters = kmeans.fit(reduced_data)
clusters_ = [[] for _ in range(num_clusters)]  # one empty list of member points per cluster
for l,d in zip(clusters.predict(reduced_data), reduced_data):
clusters_[l].append(d)
mu = kmeans.cluster_centers_
Wk = sum([np.linalg.norm(mu[k]-c)**2/(2*len(c)) for k in range(num_clusters) for c in clusters_[k]])
return Wk
cluster_sizes = range(2,16,1)
# sum of squared errors
sse = [fitAndScore(x) for x in cluster_sizes]
df = pd.DataFrame({'#clusters': cluster_sizes, 'SSE': sse})
df.head()
df.plot(x='#clusters', y='SSE', title='Elbow chart')
# From the chart below, the optimal number of clusters is 4
Explanation: The customers in the first cluster buy mostly Groceries, followed by Milk and Detergents.
In the second cluster, the customers buy some Fresh food and small quantities of everything else.
The third cluster consists predominantly of customers buying large quantities of Fresh food and very small quantities of everything else.
The customers in the fourth cluster buy a huge quantity of Grocery, followed by large quantities of Milk and Detergents. Fresh foods are also an important purchase for them.
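To make those descriptions easier to read off the chart, the cluster centres can be re-plotted in the original feature space with labelled columns (a small sketch reusing data, pca and centroids from the cells above):
pd.DataFrame(pca.inverse_transform(centroids), columns=data.columns).plot(kind='bar')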
End of explanation |
11,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Cloud Dataflow </h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create ML dataset using <a href="https
Step1: After installing Apache Beam, restart your kernel by selecting "Kernel" from the menu and clicking "Restart kernel..."
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step6: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options
Step7: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. | Python Code:
%pip install apache-beam[gcp]==2.13.0
Explanation: <h1> Preprocessing using Cloud Dataflow </h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create ML dataset using <a href="https://cloud.google.com/dataflow/">Cloud Dataflow</a></li>
<li>Simulate a dataset where no ultrasound is performed (i.e. male or female unknown as a feature)</li>
<li>Launch the Cloud Dataflow job to preprocess the data</li>
</ol>
TODO: Complete the lab notebook #TODO sections. You can refer to the solutions/ notebook for reference.
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
End of explanation
import apache_beam as beam
print(beam.__version__)
Explanation: After installing Apache Beam, restart your kernel by selecting "Kernel" from the menu and clicking "Restart kernel..."
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
# Create SQL query using natality data after the year 2000
query = """
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
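A rough sanity check of the intended 75%/25% train/eval split can be done on the sampled rows (a sketch using the df from the LIMIT 100 query above; it mirrors the ABS(MOD(hashmonth, 4)) logic used later):
is_train = df['hashmonth'].abs() % 4 < 3
print(is_train.mean())   # fraction of sampled rows that would fall into training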
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
# Without an ultrasound we only know whether it is a single baby or a multiple
# birth (but not how many multiples), and we cannot know the sex.
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
    no_ultrasound['plurality'] = 'Multiple(2+)'
else:
    no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'max_num_workers': 6
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query = """
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE year > 2000
  AND weight_pounds > 0
  AND mother_age > 0
  AND plurality > 0
  AND gestation_weeks > 0
  AND month > 0
"""
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing happens in the cloud. Go to the Dataflow section of the GCP web console and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
End of explanation
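Before launching the full Dataflow job, you can sanity-check the to_csv logic by calling the generator on a single hand-made record; the field values below are invented purely for illustration.
example_row = {'weight_pounds': 7.5, 'is_male': True, 'mother_age': 28,
               'plurality': 2, 'gestation_weeks': 39}
for line in to_csv(example_row):
    print(line)  # expect one row without ultrasound info and one with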
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
End of explanation |
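Once the job has finished, a quick way to eyeball a few of the generated rows straight from Cloud Storage (adjust the path if you changed OUTPUT_DIR):
%%bash
gsutil cat gs://${BUCKET}/babyweight/preproc/train.csv-00000-of-* | head -5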
11,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the spectroscopic data
Step1: Hmm, the specobjid's are floats, but they should be integers to avoid possible rounding errors during comparison. Lets fix that
Step2: Find matches
First we see if we can match on spectroscopic ID's
Step3: Hmm... far too few specobjids. Probably whatever converted to a float above also led to some mis-casting of the ID's. So we'll fall back on doing matching based on sky coordinates.
Step4: This identifies all pairs if coordinates that are within 5" of each other
Step5: Because we matched scs to itself, this list also inclues "self-matches" where the pairs are an object and itself. But we want these to make sure we get the single-object "groups"
Are they really groups?
Step6: Lets have a quick look at the things that are between $.1"$ and $.2"$ to see if they are matches or shreds
Step7: Inspection of these reveals them to be the same objects. Most be difference between different DR's or similar. So we define matches as simply anything within $.4"$
Step8: Now convert the lists of pairs into groups
Step9: 79 seems like a lot... What's the deal with the big ones?
Step10: The above is just an example but the ones with the problem originally were IMACS targets...? So we have the option of ignoring all IMACS targets.
Marla fixed this after we realized it, though, so the remaining large groups are all MMT calibration stars
Step11: Final Function
The function below just combines all the steps above into one function | Python Code:
spec_data_raw = table.Table.read('SAGADropbox/data/saga_spectra_raw.fits.gz')
spec_data_raw
Explanation: Load the spectroscopic data
End of explanation
# Just setting the dtype does *not* do the conversion of the values. It instead tells numpy to
# re-interpret the same set of bits as thought they were ints.
# So we do the first and third line below to update the values - `astype` *does* do the conversion
spec_data = spec_data_raw.copy()
spec_data['specobjid'].dtype = int
spec_data['specobjid'] = spec_data_raw['specobjid'].astype(int)
spec_data
Explanation: Hmm, the specobjid's are floats, but they should be integers to avoid possible rounding errors during comparison. Lets fix that:
End of explanation
sid = spec_data['specobjid']
len(np.unique(sid)), len(sid)
Explanation: Find matches
First we see if we can match on spectroscopic ID's
End of explanation
scs = SkyCoord(spec_data['RA'], spec_data['DEC'], unit=u.deg)
Explanation: Hmm... far too few specobjids. Probably whatever converted to a float above also led to some mis-casting of the ID's. So we'll fall back on doing matching based on sky coordinates.
End of explanation
idx1, idx2, sep2d, _ = scs.search_around_sky(scs, 5*u.arcsec)
Explanation: This identifies all pairs of coordinates that are within 5" of each other:
End of explanation
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(sep2d.to(u.arcsec).value, bins=100, range=(0, 5), histtype='step', log=True)
plt.xlim(0, 5)
ax2.hist(sep2d.to(u.arcsec).value, bins=100, range=(0, .4), histtype='step', log=True)
plt.xlim(0, .4)
plt.tight_layout()
Explanation: Because we matched scs to itself, this list also includes "self-matches", where a pair is simply an object matched with itself. We keep these to make sure we also get the single-object "groups".
Are they really groups?
End of explanation
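As a quick check, the number of pure self-matches (an object paired with itself) can be counted directly from the index arrays returned above:
(idx1 == idx2).sum(), len(idx1)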
msk = (.1*u.arcsec<sep2d)&(sep2d<.2*u.arcsec)
random25 = np.random.permutation(np.sum(msk))[:25]
scs1 = scs[idx1[msk][random25]]
scs2 = scs[idx2[msk][random25]]
print(targeting.sampled_imagelist(scs1, None, n=len(scs1)))
print(targeting.sampled_imagelist(scs2, None, n=len(scs2)))
Explanation: Let's have a quick look at the things that are between $.1"$ and $.2"$ apart to see if they are matches or shreds.
End of explanation
grpdct = {}
grpi = 0
for i1, i2 in zip(idx1, idx2):
if i1 in grpdct:
if i2 in grpdct:
# combine the two groups by assigning grp2 items to grp1
# this block is by far the slowest part so if the data size grows it should be optimized
grp1 = grpdct[i1]
grp2 = grpdct[i2]
if grp1 != grp2:
to_set_to_1 = [i for i, grp in grpdct.iteritems() if grp==grp2]
for i in to_set_to_1:
grpdct[i] = grp1
else:
#add i2 to the group i1 is already in
grpdct[i2] = grpdct[i1]
else:
if i2 in grpdct:
            #add i1 to the group i2 is already in
            grpdct[i1] = grpdct[i2]
else:
# add them both to a new group
grpdct[i1] = grpdct[i2] = grpi
grpi += 1
len(idx1), len(idx2), len(np.unique(grpdct.keys())), len(np.unique(grpdct.values()))
Explanation: Inspection of these reveals them to be the same objects. There must be some difference between data releases (DRs) or similar. So we define matches as simply anything within $.4"$.
End of explanation
grpnum_to_group_members = collections.defaultdict(list)
for k, v in grpdct.iteritems():
grpnum_to_group_members[v].append(k)
# convert the members into arrays
grpnum_to_group_members = {k:np.array(v) for k, v in grpnum_to_group_members.iteritems()}
# this is the count *before* the IMACS fix:
{1: 88822,
2: 2460,
3: 293,
4: 79,
5: 19,
6: 6,
7: 5,
8: 1,
9: 2,
10: 3,
13: 1,
34: 1,
54: 1,
64: 1,
72: 1,
79: 1}
# this counts how many have a certain number of elements in their group
counts = np.bincount([len(arr) for arr in grpnum_to_group_members.values()])
{i:c for i, c in enumerate(counts) if c != 0}
Explanation: Now convert the lists of pairs into groups
End of explanation
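As an aside, the same grouping can be computed by treating the matched pairs as graph edges and taking connected components; this is only a sketch (assuming scipy is available) and is not used below.
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
nobj = len(scs)
adj = coo_matrix((np.ones(len(idx1)), (idx1, idx2)), shape=(nobj, nobj))
n_groups, group_labels = connected_components(adj, directed=False)
# group_labels[i] plays the same role as grpdct[i] in the loop above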
Groupnum = collections.namedtuple('Groupnum', ['nobjs', 'grpnum'])
big_grps = sorted([Groupnum(len(arr), idx) for idx, arr in grpnum_to_group_members.iteritems() if len(arr)>5])
big_grps
spec_data[grpnum_to_group_members[big_grps[-1].grpnum]]
Explanation: 79 seems like a lot... What's the deal with the big ones?
End of explanation
spec_data[grpnum_to_group_members[big_grps[-2].grpnum]]
idxs_to_keep = []
new_repeats = []
for grpnum, members in grpnum_to_group_members.iteritems():
notimacs_members = members[spec_data['TELNAME'][members]!='IMACS']
if len(notimacs_members)==0:
continue
idxs_to_keep.append(notimacs_members[np.argsort(spec_data['ZQUALITY'][notimacs_members])[0]])
new_repeats.append('+'.join(np.unique(spec_data['SPEC_REPEAT'][notimacs_members])))
unique_objs = spec_data[np.array(idxs_to_keep)]
del unique_objs['SPEC_REPEAT']
unique_objs['SPEC_REPEAT'] = new_repeats
unique_objs
# double check that you sometimes get a combination of them together
np.unique(unique_objs['SPEC_REPEAT'])
Explanation: The above is just an example but the ones with the problem originally were IMACS targets...? So we have the option of ignoring all IMACS targets.
Marla fixed this after we realized it, though, so the remaining large groups are all MMT calibration stars:
End of explanation
from __future__ import print_function, division
import collections
import numpy as np
from astropy import units as u
from astropy import table
from astropy.coordinates import SkyCoord
def find_uniques(infn='saga_spectra_raw.fits.gz', remove_imacs=True, nearenough_sep=5*u.arcsec):
spec_data_raw = table.Table.read(infn)
if spec_data_raw['specobjid'].dtype.kind == 'i':
spec_data = spec_data_raw
else:
#if the specobjid is not an int, convert the type of that column while retaining the value
spec_data = spec_data_raw.copy()
spec_data['specobjid'].dtype = int
spec_data['specobjid'] = spec_data_raw['specobjid'].astype(int)
scs = SkyCoord(spec_data['RA'], spec_data['DEC'], unit=u.deg)
idx1, idx2, sep2d, _ = scs.search_around_sky(scs, nearenough_sep)
    # now construct the groups from the pairs
grpdct = {}
grpi = 0
for i1, i2 in zip(idx1, idx2):
if i1 in grpdct:
if i2 in grpdct:
# combine the two groups by assigning grp2 items to grp1
# this block is by far the slowest part so if the data size grows it should be optimized
grp1 = grpdct[i1]
grp2 = grpdct[i2]
if grp1 != grp2:
to_set_to_1 = [i for i, grp in grpdct.iteritems() if grp==grp2]
for i in to_set_to_1:
grpdct[i] = grp1
else:
#add i2 to the group i1 is already in
grpdct[i2] = grpdct[i1]
else:
if i2 in grpdct:
                #add i1 to the group i2 is already in
                grpdct[i1] = grpdct[i2]
else:
# add them both to a new group
grpdct[i1] = grpdct[i2] = grpi
grpi += 1
grpnum_to_group_members = collections.defaultdict(list)
for k, v in grpdct.iteritems():
grpnum_to_group_members[v].append(k)
# convert the members into arrays
grpnum_to_group_members = {k:np.array(v) for k, v in grpnum_to_group_members.iteritems()}
# identify which is the "best" spectrum (meaning the first zq=4 spectrum)
idxs_to_keep = []
new_repeats = []
for grpnum, allmembers in grpnum_to_group_members.iteritems():
if remove_imacs:
members = allmembers[spec_data['TELNAME'][allmembers]!='IMACS']
if len(members)==0:
continue
else:
members = allmembers
idxs_to_keep.append(members[np.argsort(spec_data['ZQUALITY'][members])[0]])
new_repeats.append('+'.join(np.unique(spec_data['SPEC_REPEAT'][members])))
# now build the output table from the input
unique_objs = spec_data[np.array(idxs_to_keep)]
del unique_objs['SPEC_REPEAT']
unique_objs['SPEC_REPEAT'] = new_repeats
return unique_objs
uniq_objs = find_uniques('../saga_spectra_raw.fits.gz')
uniq_objs
Explanation: Final Function
The function below just combines all the steps above into one function
End of explanation |
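If you want to persist the de-duplicated catalog, the astropy table can be written straight to disk; the filename here is only an example.
uniq_objs.write('saga_spectra_unique.fits', overwrite=True)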
11,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Step1: Periodic Yield
The thread periodic_yeild is woken up at 30ms intervals where it calls sched_yield and relinquishes its time-slice.
The expectation is that the task will have a duty cycle < 1% and a period of 30ms.
There are two threads, and the rank=1 conveys that the condition is true for one of the threads with the name "periodic_yeild"
Step2: CPU Hog
The reservation of a CPU hogging task is set to 10ms for every 100ms. The assertion ensures a duty cycle of 10%
Step3: Changing Reservations
A CPU hogging task has reservations set in the increasing order starting from 10% followed by a 2s period of normal execution | Python Code:
from trappy.stats.Topology import Topology
from bart.sched.SchedMultiAssert import SchedMultiAssert
from bart.sched.SchedAssert import SchedAssert
import trappy
import os
import operator
import json
#Define a CPU Topology (for multi-cluster systems)
BIG = [1, 2]
LITTLE = [0, 3, 4, 5]
CLUSTERS = [BIG, LITTLE]
topology = Topology(clusters=CLUSTERS)
BASE_PATH = "/Users/kapileshwarsingh/AnalysisRawData/LPC/sched_deadline/"
THRESHOLD = 10.0
def between_threshold(a, b):
return abs(((a - b) * 100.0) / b) < THRESHOLD
Explanation: Setup
End of explanation
TRACE_FILE = os.path.join(BASE_PATH, "yield")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
# Assert Period
s = SchedMultiAssert(ftrace, topology, execnames="periodic_yield")
if s.assertPeriod(30, between_threshold, rank=1):
print "PASS: Period"
print json.dumps(s.getPeriod(), indent=3)
print ""
# Assert DutyCycle
if s.assertDutyCycle(1, operator.lt, window=(0,4), rank=2):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0,4)), indent=3)
Explanation: Periodic Yield
The thread periodic_yeild is woken up at 30ms intervals where it calls sched_yield and relinquishes its time-slice.
The expectation is that the task will have a duty cycle < 1% and a period of 30ms.
There are two threads, and the rank=1 conveys that the condition is true for one of the threads with the name "periodic_yeild"
End of explanation
TRACE_FILE = os.path.join(BASE_PATH, "cpuhog")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
s = SchedMultiAssert(ftrace, topology, execnames="cpuhog")
s.plot().view()
# Assert DutyCycle
if s.assertDutyCycle(10, between_threshold, window=(0, 5), rank=1):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0, 5)), indent=3)
Explanation: CPU Hog
The reservation of a CPU hogging task is set to 10ms for every 100ms. The assertion ensures a duty cycle of 10%
End of explanation
TRACE_FILE = os.path.join(BASE_PATH, "cancel_dl_timer")
ftrace = trappy.FTrace(TRACE_FILE, "cpuhog")
s = SchedAssert(ftrace, topology, execname="cpuhog")
s.plot().view()
NUM_PHASES = 10
PHASE_DURATION = 2
start = s.getStartTime()
DUTY_CYCLE_FACTOR = 10
for phase in range(NUM_PHASES + 1):
window = (start + (phase * PHASE_DURATION),
start + ((phase + 1) * PHASE_DURATION))
if phase % 2 == 0:
DUTY_CYCLE = (phase + 2) * DUTY_CYCLE_FACTOR / 2
else:
DUTY_CYCLE = 100
print "WINDOW -> [{:.2f}, {:.2f}]".format(window[0],
window[1])
if s.assertDutyCycle(DUTY_CYCLE, between_threshold, window=window):
print "PASS: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
else:
print "FAIL: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
print ""
Explanation: Changing Reservations
A CPU hogging task has reservations set in the increasing order starting from 10% followed by a 2s period of normal execution
End of explanation |
11,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
UN SDG Indicator 6.6.1
Step1: <a id="plat_prod"></a>Choose Platforms and Products ▴
List available products for each platform
Step2: Choose products
Step3: <a id="extents"></a>Get the Extents of the Cube ▴
Step4: Visualize the full area
Step5: <a id="single_period"></a>Analyze a Single Time Period ▴
<a id="define_extents_single"></a>Define the Extents of the Analysis ▴
Specify start and end dates
Step6: Specify an area to analyze
Step7: Visualize the selected area
Step8: <a id="retrieve_data_single"></a>Retrieve the Data from the Datacube ▴
Step9: <a id="water_cls_single"></a> Get Water Classifications Using the WOFS Algorithm ▴
Step10: <a id="prepare_for_vis"></a> Prepare for Visualization ▴
Step11: <a id="find_water_extents"></a>Find the Minimum and Maximum Water Extents ▴
We are finding the minimum and maximum water extents based on a max-composite of water data. We do this to reduce the appearance of missing data due to cloud cover and to avoid cluttering a plot of the mean of water over time later.
Step12: <a id="water_extent_image"></a>Create Water Extent Image ▴
Step13: <a id="time_series_water"></a>Create a Time Series Plot of the Water ▴
Step14: <a id="two_period"></a>Compare Two Time Periods - a Baseline and an Analysis ▴
<a id="define_extents_baseline_analysis"></a>Define the Extents of the Analysis ▴
Specify start and end dates
Step15: Visualize the selected area
Step16: <a id="retrieve_data_baseline_analysis"></a>Retrieve the Data from the Datacube ▴
Step17: <a id="water_cls_baseline_analysis"></a>Get Water Classifications Using the WOFS Algorithm ▴
Step18: <a id="show_water_extents"></a>Show Water Extents of the Baseline and Analysis Periods ▴
Show the median composite for each period.
<br>The results are displayed using a false color RGB where water is dark blue
Step19: Show which pixels are sometime water or always water for each period, with a gray-scale background.
<br>Warning
Step20: <a id="show_analyze_diff"></a>Show and Analyze the Differences Between the Two Time Periods ▴
Compare the water extents of the cloud-filtered mosaics of the baseline and analysis time periods
Step21: Change Statistics
Step22: Change Matrix
Step23: Calculate the area of each of the 9 transition classes in the change matrix.
Step24: <a id="classify_by_degree_change"></a>Classify the Area by Degree of Change ▴ | Python Code:
# Supress Warning
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import warnings
import matplotlib.pyplot as plt
# Allow importing of our utilities.
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
# Import the datacube and the API
import datacube
from utils.data_cube_utilities.data_access_api import DataAccessApi
import datetime as dt
import numpy as np
import xarray as xr
import pandas as pd
from matplotlib.colors import LinearSegmentedColormap
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
# Create an instance of the datacube and API.
api = DataAccessApi()
dc = api.dc
Explanation: <a id="top"></a>
UN SDG Indicator 6.6.1:<br> Change in the Extent of Water-related Ecosystems
<hr>
Notebook Summary
The United Nations have prescribed 17 "Sustainable Development Goals" (SDGs). This notebook attempts to monitor SDG Indicator 6.6.1 - change in the extent of water-related ecosystems. Indicator 6.6.1 has 4 sub-indicators:
i. The spatial extent of water-related ecosystems <br>
ii. The quantity of water contained within these ecosystems <br>
iii. The quality of water within these ecosystems <br>
iv. The health or state of these ecosystems <br>
This notebook primarily focuses on the first sub-indicator - spatial extents.
In the first section, one time period is analyzed. The mean of water classifications over time is visualized, the minimum and maximum water extents are visualized, and the frequency of water classification for each pixel is visualized.
In the second section, two time periods are analyzed - a baseline time period and an analysis time period. The change in pixels that are water at any time during their respective time periods is calculated, the change in water from the baseline time period to the analysis time period is visualized, and the area is classified by degree of change using classification labels provided by UN SDG 6.6.1.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platforms and Products
Get the Extents of the Cube
Analyze a Single Time Period
Define the Extents of the Analysis
Retrieve the Data from the Datacube
Get Water Classifications Using the WOFS Algorithm
Prepare for Visualization
Find the Minimum and Maximum Water Extents
Create Water Extent Image
Create a Time Series Plot of the Water
Compare Two Time Periods - a Baseline and an Analysis
Define the Extents of the Analysis
Retrieve the Data from the Datacube
Get Water Classifications Using the WOFS Algorithm
Show and Analyze the Differences Between the Two Time Periods
Classify the Area by Degree of Change
<a id="import"></a>Import Dependencies and Connect to the Data Cube ▴
End of explanation
# Get available products
products_info = dc.list_products()
# List LANDSAT 7 products
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
# List LANDSAT 8 products
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
Explanation: <a id="plat_prod"></a>Choose Platforms and Products ▴
List available products for each platform
End of explanation
# These are the platforms (satelltes) and products (datacube sets)
# used for this demonstration.
# Single time period analysis
platforms_single = ['LANDSAT_8']
products_single = ['ls8_usgs_sr_scene']
collections_single = ['c1']
levels_single = ['l2']
# Two time period analysis
platforms_two = ['LANDSAT_7', 'LANDSAT_8']
products_two = ['ls7_usgs_sr_scene', 'ls8_usgs_sr_scene']
collections_two = ['c1', 'c1']
levels_two = ['l2', 'l2']
Explanation: Choose products
End of explanation
from utils.data_cube_utilities.dc_load import get_overlapping_area
from utils.data_cube_utilities.dc_time import dt_to_str
full_lat, full_lon, min_max_dates = \
get_overlapping_area(api, platforms_two, products_two)
# Print the extents of each product.
str_min_max_dates = np.vectorize(dt_to_str)(min_max_dates)
for i, (platform, product) in enumerate(zip(platforms_two, products_two)):
print("For platform {} and product {}:".format(platform, product))
print("Time Extents:", str_min_max_dates[i])
print()
# Print the extents of the combined data.
min_start_date_mutual = np.max(min_max_dates[:,0])
max_end_date_mutual = np.min(min_max_dates[:,1])
print("Overlapping Extents:")
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", list(map(dt_to_str, (min_start_date_mutual, max_end_date_mutual))))
Explanation: <a id="extents"></a>Get the Extents of the Cube ▴
End of explanation
from utils.data_cube_utilities.dc_display_map import display_map
display_map(full_lat, full_lon)
Explanation: Visualize the full area
End of explanation
# Select a subset of the time available (year, month, day).
time_extents_single = [dt.datetime(2014,1,1), dt.datetime(2014,1,31)]
Explanation: <a id="single_period"></a>Analyze a Single Time Period ▴
<a id="define_extents_single"></a>Define the Extents of the Analysis ▴
Specify start and end dates
End of explanation
# Specify latitude and longitude bounds of an interesting area within the full extents.
# Ghana
# lat = (6.5016, 6.5221) # Lake Volta (very small)
# lon = (-0.1618, -0.100) # Lake Volta (very small)
# lat = (6.5016, 6.5821) # Lake Volta (small)
# lon = (-0.1618, -0.055) # Lake Volta (small)
# lat = (6.5138, 6.5292) # Lake Volta (medium)
# lon = (-0.1669, -0.1493) # Lake Volta (medium)
# lat = (6.2989, 7.9287) # Lake Volta (large)
# lon = (-0.4559, 0.2637) # Lake Volta (large)
# lat = (10.8600, 10.9350) # Tono Dam (old)
# lon = (-1.1934, -1.1423) # Tono Dam (old)
lat = (10.8622, 10.9143) # Tono Dam
lon = (-1.1822, -1.1440) # Tono Dam
# Lake Sulunga, Tanzania
# lat = (-6.3605, -5.8252) # Large
# lon = (34.9756, 35.4535) # Large
# lat = (-6.2593, -5.8701) # Small (close fit)
# lon = (34.9901, 35.3641) # Small (close fit)
# Lake Manyara, Tanzania
# lat = (-3.8505, -3.3886)
# lon = (35.7184, 35.9271)
# Vietnam
# lat = (10.037934, 10.237934)
# lon = (104.992264, 105.192264)
# lat = (11.0645, 11.2845) # Tri An Lake
# lon = (106.9567, 107.2967) # Tri An Lake
Explanation: Specify an area to analyze
End of explanation
display_map(lat, lon)
Explanation: Visualize the selected area
End of explanation
from utils.data_cube_utilities.dc_load import match_dim_sizes
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
from utils.data_cube_utilities.aggregate import xr_scale_res
from utils.data_cube_utilities.sort import xarray_sortby_coord
def load_for_time_range(platforms, products, collections, levels, time_extents):
measurements = ['red', 'blue', 'green', 'nir', 'swir1', 'swir2', 'pixel_qa']
matching_abs_res, same_dim_sizes = match_dim_sizes(dc, products, lon, lat)
datasets = {}
clean_masks = {}
for platform, product, collection, level in zip(platforms, products, collections, levels):
# Load the dataset.
prod_info = dc.list_products()
resolution = prod_info[prod_info.name==product].resolution.values[0]
dataset = dc.load(platform=platform, product=product, lat=lat, lon=lon,
time=time_extents, measurements=measurements,
group_by='solar_day',
output_crs='EPSG:4326',
resolution=resolution,
dask_chunks={'latitude':1000, 'longitude':1000, 'time':1}).persist()
if len(dataset.dims) == 0: # The dataset is empty.
continue
# Get the clean mask.
clean_mask = landsat_clean_mask_full(dc, dataset, product, platform, collection, level)
dataset = dataset.drop('pixel_qa')
# Discard acquisitions with no clean data.
acq_times_to_keep = dataset.time.isel(time=(clean_mask.mean(['latitude', 'longitude']) > 0.001))
dataset = dataset.sel(time=acq_times_to_keep)
clean_mask = clean_mask.sel(time=acq_times_to_keep)
# If needed, scale the datasets and clean masks to the same size in the x and y dimensions.
if not same_dim_sizes:
dataset = xr_scale_res(dataset, abs_res=matching_abs_res)
clean_mask = xr_scale_res(clean_mask.astype(np.uint8), abs_res=matching_abs_res).astype(np.bool)
# Clean the data.
dataset = dataset.astype(np.float16).where(clean_mask)
datasets[product], clean_masks[product] = dataset, clean_mask
# Combine everything.
if len(datasets) > 0:
dataset = xarray_sortby_coord(xr.concat(list(datasets.values()), dim='time'), coord='time')
clean_mask = xarray_sortby_coord(xr.concat(list(clean_masks.values()), dim='time'), coord='time')
else:
dataset = xr.Dataset()
clean_mask = xr.DataArray(np.empty((0,), dtype=np.bool))
return dataset, clean_mask
dataset, clean_mask = load_for_time_range(platforms_single, products_single,
collections_single, levels_single, time_extents_single)
dataset = dataset.persist()
clean_mask = clean_mask.persist()
Explanation: <a id="retrieve_data_single"></a>Retrieve the Data from the Datacube ▴
End of explanation
# Whether or not to denoise the WOFS classifications.
denoise = False
# Denoising parameters may need to be tuned depending on the region and resolution scaling.
# *min_size*: Minimum size of connected pixels
# that will not be filtered (minimum=2).
# *kernel_size*: Diameter of the modal kernal (minimum=3).
# Larger values will run slower as it calculates the mode
# of the neighborhood for each pixel inside this diameter.
# *connectivity*: Maximum distance between any two pixels.
# A value of 1 allows only contiguous regions of pixels.
# *max_num_filter_runs*: The filter is run until the output stops changing
# or this number of runs has been performed.
std_denoise_params = dict(min_size=40, kernel_size=7, connectivity=10)
max_num_filter_runs = 4
from utils.data_cube_utilities.dc_water_classifier import wofs_classify
from utils.data_cube_utilities.dc_utilities import ignore_warnings
from utils.data_cube_utilities.raster_filter import lone_object_filter
def get_water_classifications(dataset, clean_mask, max_num_filter_runs,
denoise=False, denoise_params=None):
# Get WOFS classifications.
water_da = ignore_warnings(wofs_classify, dataset).wofs
water_da = water_da.where(clean_mask)
# Denoise the classifications.
if denoise:
denoise_params = std_denoise_params if denoise_params is None else denoise_params
# Encode NaN as -1 for `lone_object_filter()`.
        water_da = water_da.where(water_da.notnull(), -1)
for time in water_da.time:
water_slice_prev = water_da.sel(time=time).values
water_slice_new = lone_object_filter(water_slice_prev, **std_denoise_params)
# While the filter output is still changing or until the filter has
# been run `max_num_filter_runs` times, keep rerunning it.
filter_run_count = 0
while (water_slice_new != water_slice_prev).any() and \
filter_run_count < max_num_filter_runs:
water_slice_prev = water_slice_new
water_slice_new = lone_object_filter(water_slice_prev, **std_denoise_params)
filter_run_count += 1
water_da.sel(time=time).values[:] = water_slice_new
water_da = water_da.where(water_da!=-1) # Reencode -1 as NaN.
return water_da
water_da = get_water_classifications(dataset, clean_mask, max_num_filter_runs,
denoise, std_denoise_params)
# Resample the data in time to avoid cluttering the x axes of plots
# with tick labels (dates) and also smooth the plots.
dataset = dataset.resample(time='2d').mean('time').dropna('time', how='all').persist()
clean_mask = (clean_mask.resample(time='2d').mean('time').dropna('time', how='all') >= 0.5).persist()
water_da = water_da.resample(time='2d').mean('time').dropna('time', how='all').persist()
Explanation: <a id="water_cls_single"></a> Get Water Classifications Using the WOFS Algorithm ▴
End of explanation
# Define a standard dpi partly to tune font sizes in figures.
# A higher dpi and a lower figure size will result in larger
# font sizes relative to the figure size.
std_dpi = 100
Explanation: <a id="prepare_for_vis"></a> Prepare for Visualization ▴
End of explanation
## Settings ##
# Formatting for full-width plots.
title_fontdict = dict(fontsize=16) # Title formatting
tick_label_fmt_dict = dict(axis='both', labelsize=12) # Tick label formatting
axis_label_fmt_dict = dict(fontsize=16) # Axis label formatting
legend_kwargs = dict(fontsize=12)
# Can be any of ['time-slice', 'per-pixel'].
# The value 'time-slice' obtains the minimum and maximum water extents on an time-slice basis.
# The value 'per-pixel' obtains the minumum and maximum water extents on a per-pixel basis.
water_extent_method = 'time-slice'
assert water_extent_method in ['time-slice', 'per-pixel'], \
"The setting `water_extent_method` must be one of ['time-slice', 'per-pixel']."
# This setting is only relevant if `water_extent_method` is set to 'time-slice'.
# Water classifiers can sometimes output erroneous classifications and sometimes excessive clouds
# make compositing an impractical solution to accounting for noisy classifications.
# And SAR data is sometimes unavailable for WASARD water classification over cloudy regions,
# so percentiles can be specified for the minimum and maximum time slices. These can be between
# 0 and 100. The minimum water time-slice selected will be the time slice with non-water extents
# at or below `percentile_min`. The maximum water time-slice selected will be the time slice
# with water extents at or below `percentile_max`.
percentile_min, percentile_max = 100, 100
## End Settings ##
# Option 1: Obtain the acquisitions with the minimum and maximum water extents.
if water_extent_method == 'time-slice':
count_non_water = water_da.where(water_da == 0).count(dim=['latitude', 'longitude'])
count_water = water_da.where(water_da == 1).count(dim=['latitude', 'longitude'])
# Find the acquisition with the least water (the one with the most non-water pixels).
percentile_value_min = np.percentile(count_non_water, percentile_min, interpolation='lower')
# Handle the case of multiple acquisitions with the same amount of non-water (notably 0).
acqs_with_value_min = count_non_water == percentile_value_min
if acqs_with_value_min.sum() > 1: # Find the acquisition with the least water (and most non-water).
min_extent_time_ind = np.argmin(count_water.isel(time=acqs_with_value_min).values)
else:
min_extent_time_ind = (count_non_water == percentile_value_min).argmax().values
min_extent_time = water_da.time.values[min_extent_time_ind]
min_water_extent = water_da.isel(time=min_extent_time_ind)
# Find the acquisition with the most water (the one with the most water pixels).
percentile_value_max = np.percentile(count_water, percentile_max, interpolation='lower')
# Handle the case of multiple acquisitions with the same amount of water (notably 0).
acqs_with_value_max = count_water == percentile_value_max
if acqs_with_value_max.sum() > 1: # Find the acquisition with the least non-water (and most water).
max_extent_time_ind = np.argmin(count_non_water.isel(time=acqs_with_value_max).values)
else:
max_extent_time_ind = (count_water == percentile_value_max).argmax().values
max_extent_time = water_da.time.values[max_extent_time_ind]
max_water_extent = water_da.isel(time=max_extent_time_ind)
dt64_to_date_str = lambda dt64 : pd.to_datetime(str(dt64)).strftime('%Y/%m/%d')
max_water_date_str = dt64_to_date_str(max_extent_time)
min_water_date_str = dt64_to_date_str(min_extent_time)
# Option 2: Obtain the minimum and maximum water extents on a per-pixel basis.
else:
min_water_extent = water_da.min('time')
max_water_extent = water_da.max('time')
max_water_date_str = "N/A"
min_water_date_str = "N/A"
from utils.data_cube_utilities.plotter_utils import create_discrete_color_map, xarray_imshow
from utils.data_cube_utilities.plotter_utils import figure_ratio
# Begin plotting the data
fig_width = 12 # Use this to change the size of this figure.
fig = plt.figure(figsize=figure_ratio(water_da, fixed_width=fig_width), dpi=std_dpi)
# Create common colormap.
cmap = create_discrete_color_map([0,1], ['black', 'cyan'])
imshow_kwargs = dict(vmin=0, vmax=1, cmap=cmap)
legend_labels = {0:'Not Water', 1:'Water'}
# Minimum water extent
ax = plt.subplot(1,2,1)
fig, ax, im, cbar = \
xarray_imshow(min_water_extent, fig=fig, ax=ax, use_colorbar=False, use_legend=True,
legend_labels=legend_labels, imshow_kwargs=imshow_kwargs,
x_label_kwargs=axis_label_fmt_dict, y_label_kwargs=axis_label_fmt_dict,
legend_kwargs=legend_kwargs)
ax.set_title("Minimum Water Extent \nTime (YYYY/MM/DD): {}".format(min_water_date_str), fontdict=title_fontdict)
ax.tick_params(**tick_label_fmt_dict)
# Maximum water extent
ax = plt.subplot(1,2,2)
fig, ax, im, cbar = \
xarray_imshow(max_water_extent, fig=fig, ax=ax, use_colorbar=False, use_legend=True,
legend_labels=legend_labels, imshow_kwargs=imshow_kwargs,
x_label_kwargs=axis_label_fmt_dict, y_label_kwargs=axis_label_fmt_dict,
legend_kwargs=legend_kwargs)
ax.set_title("Maximum Water Extent \nTime (YYYY/MM/DD): {}".format(max_water_date_str), fontdict=title_fontdict)
ax.tick_params(**tick_label_fmt_dict)
plt.tight_layout()
plt.show()
Explanation: <a id="find_water_extents"></a>Find the Minimum and Maximum Water Extents ▴
We are finding the minimum and maximum water extents based on a max-composite of water data. We do this to reduce the appearance of missing data due to cloud cover and to avoid cluttering a plot of the mean of water over time later.
End of explanation
from utils.data_cube_utilities.plotter_utils import binary_class_change_plot
fig_width = 6 # Use this to change the size of this figure.
fig = plt.figure(figsize=figure_ratio(water_da, fixed_width=fig_width), dpi=std_dpi)
# Shade 3 regions - never, sometimes, and always water.
(fig,ax), [never_sometimes_always_stats] = \
binary_class_change_plot(
[water_da==1], [clean_mask], colors=['black', 'yellow', 'cyan'],
class_legend_label='Water', fig=fig)
plt.show()
index = list(map(lambda str: str.format('Water'),
['Never {}', 'Sometimes {}', 'Always {}',
'Max {}', 'Min {}', 'No Data']))
num_table_rows = len(index)
stats_table_single = pd.DataFrame(data=np.zeros((num_table_rows, 4)),
index=index, columns=['Date (YYYY/MM/DD)', 'Number', 'Percent', 'Area (km^2)'])
## Date
# There are the dates for the rows.
dates = np.array(['', '', '', max_water_date_str, min_water_date_str, ''])
stats_table_single.loc[:,'Date (YYYY/MM/DD)'] = dates
## Number
class_sums = np.array([never_sometimes_always_stats.loc['Never Water'].Number,
never_sometimes_always_stats.loc['Sometimes Water'].Number,
never_sometimes_always_stats.loc['Always Water'].Number,
max_water_extent.sum().values,
min_water_extent.sum().values,
never_sometimes_always_stats.loc['Unknown'].Number])
stats_table_single.loc[:, 'Number'] = class_sums
## Area
# Show pixel changes and calculate area using pixel resolution
prod_info = dc.list_products()
deg_per_px = prod_info[prod_info['name'] == products_single[0]]['resolution'].values[0]
deg_per_px = np.abs(deg_per_px)
# Roughly 111km per degree of latitude and longitude.
km_per_px = 111 * deg_per_px
# Calculate the total area.
sq_km_per_px = np.prod(km_per_px)
stats_table_single.loc[:, 'Area (km^2)'] = stats_table_single.loc[:,'Number'] * sq_km_per_px
## Percent
stats_table_single.loc[:, 'Percent'] = \
[never_sometimes_always_stats.loc['Never Water'].Percent,
never_sometimes_always_stats.loc['Sometimes Water'].Percent,
never_sometimes_always_stats.loc['Always Water'].Percent,
max_water_extent.mean().values,
min_water_extent.mean().values,
never_sometimes_always_stats.loc['Unknown'].Percent]
stats_table_single_fmt = stats_table_single.copy()
stats_table_single_fmt.loc[:, 'Percent'] = [f"{pct:0.2%}" for pct in stats_table_single.loc[:, 'Percent']]
stats_table_single_fmt
Explanation: <a id="water_extent_image"></a>Create Water Extent Image ▴
End of explanation
from utils.data_cube_utilities.plotter_utils import xarray_time_series_plot
from utils.data_cube_utilities.plotter_utils import np_dt64_to_str
total_area = len(water_da.latitude) * len(water_da.longitude) * sq_km_per_px
figsize = (8, 4) # The width and height of the figure, respectively.
fig = plt.figure(figsize=figsize, dpi=std_dpi)
(water_da.sum(['latitude', 'longitude']) * sq_km_per_px).plot.line(marker='o')
plt.title('Water Area Over Time')
plt.ylabel('Area (km^2)')
plt.xlabel('Time')
plt.show()
Explanation: <a id="time_series_water"></a>Create a Time Series Plot of the Water ▴
End of explanation
# Select a subset of the time available.
time_extents_two = {'baseline': [dt.datetime(2002,1,1), dt.datetime(2002,12,31)],
'analysis': [dt.datetime(2014,1,1), dt.datetime(2014,12,31)]}
Explanation: <a id="two_period"></a>Compare Two Time Periods - a Baseline and an Analysis ▴
<a id="define_extents_baseline_analysis"></a>Define the Extents of the Analysis ▴
Specify start and end dates
End of explanation
display_map(lat, lon)
Explanation: Visualize the selected area
End of explanation
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
datasets = {} # Maps categories to cleaned datasets.
clean_masks = {} # Maps categories to clean masks.
datasets_median_composites = {}
for category, time_extents in list(time_extents_two.items()):
# Use LANDSAT_7 for the baseline because its SLC was broken afterward (2003).
ind = 0 if category == 'baseline' else 1
platform = platforms_two[ind]
product = products_two[ind]
collection = collections_two[ind]
level = levels_two[ind]
datasets[category], clean_masks[category] = \
load_for_time_range([platform], [product], [collection], [level], time_extents)
datasets[category] = datasets[category].persist()
clean_masks[category] = clean_masks[category].persist()
datasets_median_composites[category] = \
create_median_mosaic(datasets[category], clean_masks[category]).persist()
Explanation: <a id="retrieve_data_baseline_analysis"></a>Retrieve the Data from the Datacube ▴
End of explanation
# Get WOFS classifications.
water_das = {}
for category in datasets:
water_da = \
get_water_classifications(datasets[category], clean_masks[category],
max_num_filter_runs,
denoise, std_denoise_params)
water_das[category] = water_da
# Resample the data in time to avoid cluttering the x axes of plots
# with tick labels (dates) and also smooth the plots.
for category in datasets:
datasets[category] = datasets[category].resample(time='2d').mean('time').dropna('time', how='all').persist()
clean_masks[category] = (clean_masks[category].resample(time='2d').mean('time').dropna('time', how='all') >= 0.5).persist()
water_das[category] = water_das[category].resample(time='2d').mean('time').dropna('time', how='all').persist()
Explanation: <a id="water_cls_baseline_analysis"></a>Get Water Classifications Using the WOFS Algorithm ▴
End of explanation
from utils.data_cube_utilities.plotter_utils import figure_ratio
## Settings ##
# Formatting for full-width plots.
title_fontdict = dict(fontsize=16) # Title formatting
tick_label_fmt_dict = dict(axis='both', labelsize=12) # Tick label formatting
axis_label_fmt_dict = dict(fontsize=16) # Axis label formatting
legend_kwargs = dict(fontsize=12)
std_dpi = 200 # Standard image dpi.
def std_figsize(fig_width):
return figure_ratio(datasets['baseline'].isel(time=0), fixed_width=fig_width)
# This is the name of the band used as the
# graysale background when showing water extents.
background_band = 'swir1'
## End Settings ##
from utils.data_cube_utilities.dc_rgb import rgb
fig_width = 12 # Use this to change the size of this figure.
fig, ax = plt.subplots(1, 2, figsize=std_figsize(fig_width))
for i, category in enumerate(water_das):
rgb(datasets_median_composites[category], fig=fig, ax=ax[i],
bands=['swir2', 'nir', 'green'], imshow_kwargs=dict(vmin=0, vmax=3000))
ax[i].set_title(category.capitalize())
plt.tight_layout()
plt.show()
Explanation: <a id="show_water_extents"></a>Show Water Extents of the Baseline and Analysis Periods ▴
Show the median composite for each period.
<br>The results are displayed using a false color RGB where water is dark blue
End of explanation
from utils.data_cube_utilities.plotter_utils import binary_class_change_plot
fig_width = 12 # Use this to change the size of this figure.
fig, ax = plt.subplots(1, 2, figsize=std_figsize(fig_width))
# Collect stats tables for each period so we can later create a change matrix.
stats_tables = {}
for i, category in enumerate(water_das):
# Show water in blue over a grey-scale background.
rgb(datasets_median_composites[category], bands=[background_band]*3,
ax=ax[i])
water_composite = water_das[category].where(clean_masks[category], 0).mean('time')
[fig, ax[i]], [stats_tables[category]] = \
binary_class_change_plot(
[water_das[category]==1],
[clean_masks[category]],
colors=['gray', 'yellow', 'blue'],
neg_trans=True,
class_legend_label='Water', fig=fig, ax=ax[i],
title_kwargs=dict(label='Water ' + category.capitalize(), fontdict=title_fontdict),
x_label_kwargs=axis_label_fmt_dict, y_label_kwargs=axis_label_fmt_dict,
legend_kwargs=legend_kwargs,
denoise=denoise, denoise_params=std_denoise_params)
plt.tight_layout()
plt.show()
Explanation: Show which pixels are sometimes water or always water for each period, with a gray-scale background.
<br>Warning: These results can be impacted by cloud contamination. The identification of clouds in Landsat scenes is not perfect, so errors in cloud identification can impact the time series results.
End of explanation
fig_width = 10 # Use this to change the size of this figure.
fig, ax = plt.subplots(figsize=std_figsize(fig_width))
# Show water change over a grey-scale background.
rgb(datasets_median_composites['baseline'], bands=[background_band]*3,
ax=ax)
# Show regions based on changes between having zero or more than zero times
# in which they are classified as water between the two time periods.
[fig, ax], [stats_table_two, change_matrix] = \
binary_class_change_plot(
[water_das['baseline']==1, water_das['analysis']==1],
[clean_masks['baseline'], clean_masks['analysis']],
colors=['gray', 'green', 'red', 'blue'],
neg_trans=True,
class_legend_label='Water', fig=fig, ax=ax, title_kwargs=dict(fontdict=title_fontdict),
x_label_kwargs=axis_label_fmt_dict, y_label_kwargs=axis_label_fmt_dict,
legend_kwargs=legend_kwargs,
denoise=denoise, denoise_params=std_denoise_params)
ax.tick_params(**tick_label_fmt_dict)
plt.show()
Explanation: <a id="show_analyze_diff"></a>Show and Analyze the Differences Between the Two Time Periods ▴
Compare the water extents of the cloud-filtered mosaics of the baseline and analysis time periods
End of explanation
# Add area to the stats table.
# 1. Get resolution. In this case, we know the x and y coordinates are longitude and latitude
# and that the products all have the same resolution.
prod_info = dc.list_products()
deg_per_px = prod_info[prod_info['name'] == products_two[0]]['resolution'].values[0]
deg_per_px = np.abs(deg_per_px)
# Roughly 111km per degree of latitude and longitude.
km_per_px = 111 * deg_per_px
# 2. Calculate the total area
length_in_km = km_per_px * np.array([len(water_das['baseline'].latitude),
len(water_das['baseline'].longitude)])
total_area_in_km = np.prod(length_in_km)
# 3. Compute the area for each row of the stats table and display the table.
stats_table_two.loc[:, 'Area (km^2)'] = stats_table_two.loc[:,'Percent'] * total_area_in_km
# Format percent as strings.
stats_table_two_fmt = stats_table_two.copy()
stats_table_two_fmt.loc[:, 'Percent'] = [f"{pct:0.2%}" for pct in stats_table_two.loc[:, 'Percent']]
stats_table_two_fmt.loc[:, 'Area (km^2)'] = [f"{area:.3f}" for area in stats_table_two.loc[:, 'Area (km^2)']]
stats_table_two_fmt
Explanation: Change Statistics
End of explanation
from utils.data_cube_utilities.dc_utilities import reverse_array_dict
from utils.data_cube_utilities.plotter_utils import create_discrete_color_map
from utils.data_cube_utilities.plotter_utils import print_matrix
# Create a colormap for coloring the cells in the change matrix.
# The colors are, in order, gray, red, green, yellow, and blue. Many are manually specified
# as rgb lists because their matplotlib named colors are fairly bright and saturated.
change_matrix_cmap = \
create_discrete_color_map([0,4], colors=['gray', [225,16,16], [16,128,16], [235,235,0], [32,32,192]])
# This is a matrix specifying each class transition based on the color it should recieve.
# Original classes are indexed by row and final classes are indexed by column.
cng_vals = [0, 1, 2, 3, 4]
cng_mat = np.array([
# Always Sometimes Never
[ 4, 1, 1], # Always
[ 2, 3, 1], # Sometimes
[ 2, 2, 0], # Never
])
# Create mappings of transitions to values for coloring based on a colormap.
cls_trans_for_value = {cng_val:[] for cng_val in cng_vals}
for i, orig_class in enumerate(change_matrix.baseline.values):
for j, final_class in enumerate(change_matrix.analysis.values):
cng_val = cng_mat[i,j]
cls_trans_for_value[cng_val].append((orig_class, final_class))
value_for_cls_trans = reverse_array_dict(cls_trans_for_value)
# Create the cell value matrix (used to color cells).
cell_value_mtx = np.empty_like(change_matrix.Percent.values)
for i, cls_label1 in enumerate(change_matrix.baseline.values):
for j, cls_label2 in enumerate(change_matrix.analysis.values):
cell_value_mtx[i,j] = value_for_cls_trans[(cls_label1, cls_label2)]
cell_label_mtx = np.full_like(cell_value_mtx, '', dtype=object)
for i, baseline_category in enumerate(change_matrix['baseline']):
for j, analysis_category in enumerate(change_matrix['analysis']):
cell_label_mtx[i,j] = "{0:.2%}".format(change_matrix.Percent.values[i,j])
row_labels = [label.capitalize() for label in change_matrix.baseline.values]
col_labels = [label.capitalize() for label in change_matrix.analysis.values]
fig = plt.figure(figsize=(6,6))
fig, ax = print_matrix(cell_value_mtx, cell_val_fmt='s',
cell_label_mtx=cell_label_mtx, cmap=change_matrix_cmap,
row_labels=row_labels, col_labels=col_labels,
x_axis_ticks_position='top', x_axis_tick_kwargs={},
annot_kwargs=dict(size=14), fig=fig)
ax.yaxis.set_label_position('left')
plt.ylabel('Baseline Class', fontsize=18)
ax.xaxis.set_label_position('top')
plt.xlabel('Analysis Class', fontsize=18)
plt.show()
Explanation: Change Matrix
End of explanation
# 1. Get resolution. In this case, we know the x and y coordinates are longitude and latitude
# and that the products all have the same resolution.
prod_info = dc.list_products()
deg_per_px = prod_info[prod_info['name'] == products_two[0]]['resolution'].values[0]
deg_per_px = np.abs(deg_per_px)
# Roughly 111km per degree of latitude and longitude.
meters_per_px = 111000 * deg_per_px
change_matrix['Area (m^2)'] = change_matrix.Number * np.prod(meters_per_px)
cng_mtx_as_table = change_matrix.to_dataframe().reset_index()
baseline_analysis_cls_strs = cng_mtx_as_table[['baseline', 'analysis']].values
cng_mtx_as_table.drop(columns=['analysis', 'baseline'], inplace=True)
cng_mtx_as_table.index = ["{} to {}".format(base_cls, analysis_cls) for base_cls, analysis_cls
in baseline_analysis_cls_strs]
cng_mtx_as_table.style.format({'percent': "{:.2%}"})
Explanation: Calculate the area of each of the 9 transition classes in the change matrix.
End of explanation
# Calculate the net change of water from the baseline to the analysis time periods.
starting_water_num = stats_table_two.loc[['Water to No Water', 'Water to Water'], ['Number']].sum()[0]
net_change_water_num = (stats_table_two.loc['No Water to Water', 'Number'] - \
stats_table_two.loc['Water to No Water', 'Number'])
net_change_water_pct = net_change_water_num / starting_water_num
print("The percent of change in the water extent from the baseline time period " \
"to the analysis time period (relative change from the baseline) "\
"is {0:.2%}.\n".format(net_change_water_pct))
# Classify the area by degree of change.
# Transitions from never water to some/always water (and vice versa) are considered changes.
frac_change_cls = stats_table_two.loc[['No Water to Water', 'Water to No Water'], ['Percent']].sum()[0]
region_class = None
frac_change_cls = abs(frac_change_cls)
if frac_change_cls <= 0.1:
region_class = "Unmodified Natural (class A)"
if 0.1 < frac_change_cls <= 0.2:
region_class = "Largely Natural (class B)"
if 0.2 < frac_change_cls <= 0.4:
region_class = "Moderately Modified (class C)"
if 0.4 < frac_change_cls <= 0.6:
region_class = "Largely Modified (class D)"
if 0.6 < frac_change_cls:
region_class = "Seriously Modified (class E)"
print("The percent of pixels that are different water classes between the baseline and analysis time periods " \
"is {0:.2%}, so this region can be classified as {1}".format(frac_change_cls, region_class))
Explanation: <a id="classify_by_degree_change"></a>Classify the Area by Degree of Change ▴
End of explanation |
11,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explicit 2D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from
Step1: Generate the data with noise
Step2: Plot inline and save image
Step3: Output json and csv data
Step4: Output clean json and csv data | Python Code:
from pypge.benchmarks import explicit
import numpy as np
# visualization libraries
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import gridspec
# plot the visuals in ipython
%matplotlib inline
Explanation: Explicit 2D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from:
Nguyen_09
Nguyen_10
Nguyen_11
Nguyen_12
Imports
End of explanation
# Set your output directories
img_dir = "../img/benchmarks/explicit/"
data_dir = "../data/benchmarks/explicit/"
# choose your problem here
prob = explicit.Nguyen_12(noise=5.0, npts=2000)
# you can also specify the following params as keyword arguments
#
# params = {
# 'name': "Koza_01",
# 'xs_str': ["x"],
# 'eqn_str': "x**4 + x**3 + x**2 + x",
# 'xs_params': [ (-4.0,4.0) ],
# 'npts': 200,
# 'noise': 1.0
# }
# or make your own with the following
#
# explicit.Explicit_1D(params):
Explanation: Generate the data with noise
End of explanation
print prob['name'], prob['eqn']
print prob['xpts'].shape
xs = prob['xpts'][0]
ys = prob['xpts'][1]
zs = prob['ypure']
fig = plt.figure()
fig.set_size_inches(16, 12)
gs = gridspec.GridSpec(4, 2)
fig.suptitle(prob['name'] + " Clean", fontsize=36)
ax1 = fig.add_subplot(gs[0:2,:], projection='3d')
ax1.scatter(xs, ys, zs, c='b', marker='.')
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
ax1.set_zlabel('Z')
ax2 = fig.add_subplot(gs[2,:])
ax2.scatter(xs, zs, marker='.')
ax2.set_xlabel('X')
ax2.set_ylabel('Z')
ax3 = fig.add_subplot(gs[3,:])
ax3.scatter(ys, zs, marker='.')
ax3.set_xlabel('Y')
ax3.set_ylabel('Z')
plt.savefig(img_dir + prob['name'].lower() + "_clean.png", dpi=200)
plt.show()
zs = prob['ypts']
fig = plt.figure()
fig.set_size_inches(16, 12)
gs = gridspec.GridSpec(4, 2)
fig.suptitle(prob['name'] + " Noisy", fontsize=36)
ax = fig.add_subplot(gs[0:2,:], projection='3d')
ax.scatter(xs, ys, zs, c='b', marker='.')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax2 = fig.add_subplot(gs[2,:])
ax2.scatter(xs, zs, marker='.')
ax2.set_xlabel('X')
ax2.set_ylabel('Z')
ax3 = fig.add_subplot(gs[3,:])
ax3.scatter(ys, zs, marker='.')
ax3.set_xlabel('Y')
ax3.set_ylabel('Z')
plt.savefig(img_dir + prob['name'].lower() + "_noisy.png", dpi=200)
plt.show()
Explanation: Plot inline and save image
End of explanation
data = np.array([prob['xpts'][0], prob['xpts'][1], prob['ypts']]).T
print data.shape
cols = [['x', 'y', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + ".json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + ".csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
Explanation: Output json and csv data
End of explanation
data = np.array([prob['xpts'][0], prob['xpts'][1], prob['ypure']]).T
print data.shape
cols = [['x', 'y', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open(data_dir + prob['name'].lower() + "_clean.json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open(data_dir + prob['name'].lower() + "_clean.csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
Explanation: Output clean json and csv data
End of explanation |
11,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Improving the Search Index
(Inspired by and borrowed heavily from
Step5: Stemming
As we could see from the results of the last assignment, our simple index doesn't handle punctuation and the difference between singular and plural versions of the same word very well. A possible solution to those issues would be to apply better tokenization and stemming. Fortunately, Python's NLTK package provides implementations of these algorithms we can use. You have to install NLTK by following these instructions.
Step6: Ranking
Another way to improve our search results is to rank them. A possible way to do this is to calculate a score for each document based on the matching terms from the query. One such scoring method is tf-idf, as explained in the lecture slides.
In order to quickly calculate the scores for a term/document combination, we'll need quick access to a couple of things
Step7: Using these three helper functions, we can now easily calculate the tf-idf weights of a term in a document by implementing the weighting formula from the slides, which you will do in the assignments below.
Assignments
Your name
Step8: [Write your answer text here]
Create a function tfidf(t,d) that returns the tf.idf score of term t in document d by using the tf(t,d), df(t) and num_documents() functions we defined above. The tf-idf formula can be found on the lecture slides. Test your function with the examples shown below.
You can use our old index for this task and the tasks below
Step9: Create a function query(query_string), which accepts as input a single query string that could consist of one or more words, and returns or prints a list of (up to) 10 best matching documents, along with their score.
You should use tf-idf to calculate document scores based on the query, and the results should be ordered by score in descending order.
Hint
Step10: Come up with a few example queries to run, and include the output here. Do the results make sense? Why (not)? | Python Code:
import pickle, bz2, re
from collections import namedtuple, defaultdict, Counter
from IPython.display import display, HTML
from math import log10
Summaries_file = 'data/air__Summaries.pkl.bz2'
Abstracts_file = 'data/air__Abstracts.pkl.bz2'
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
def display_summary( id, extra_text='' ):
Function for printing a paper's summary through IPython's Rich Display System.
Trims long titles or author lists, and links to the paper's DOI (when available).
s = Summaries[ id ]
title = ( s.title if s.title[-1]!='.' else s.title[:-1] )
title = title[:150].rstrip() + ('' if len(title)<=150 else '...')
if s.doi!='':
title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)
authors = ', '.join( s.authors[:5] ) + ('' if len(s.authors)<=5 else ', ...')
lines = [
title,
authors,
str(s.year),
'<small>id: %d%s</small>' % (id, extra_text)
]
display( HTML( '<blockquote>%s</blockquote>' % '<br>'.join(lines) ) )
def display_abstract( id, highlights=[]):
    """Function for displaying an abstract. Includes optional (naive) highlighting."""
a = Abstracts[ id ]
for h in highlights:
a = re.sub(r'\b(%s)\b'%h,'<mark>\\1</mark>',a, flags=re.IGNORECASE)
    display( HTML( '<blockquote>%s</blockquote>' % a ) )
def tokenize(text):
    """Function that tokenizes a string in a rather naive way. Can be extended later."""
return text.split(' ')
def preprocess(tokens):
    """Perform linguistic preprocessing on a list of tokens. Can be extended later."""
result = []
for token in tokens:
result.append(token.lower())
return result
inverted_index = defaultdict(set)
for (id, abstract) in Abstracts.items():
for term in preprocess(tokenize(abstract)):
inverted_index[term].add(id)
Explanation: Improving the Search Index
(Inspired by and borrowed heavily from: Collective Intelligence - Luís F. Simões. IR version and assignments by J.E. Hoeksema, 2014-11-12. Converted to Python 3 and minor changes by Tobias Kuhn, 2015-11-10.)
This notebook's purpose is to improve the search index and query functions built in the previous notebooks and assignments.
Loading the Data, Defining some functions
This section is copied from the previous notebook.
End of explanation
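# A quick, illustrative sanity check of the index we just built (not part of
# the original notebook). The term 'air' is only an example; any lowercase,
# preprocessed token works as a key of the inverted index.
air_papers = inverted_index['air']
print(len(air_papers), 'abstracts contain the token "air"')
for example_id in list(air_papers)[:3]:
    display_summary(example_id)
    display_abstract(example_id, highlights=['air'])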
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import EnglishStemmer
import nltk
nltk.download('punkt')
stemmer = EnglishStemmer()
s = '''Good muffins cost $3.88\nin New York. Please buy me two of them.\n\nThanks.'''
print(tokenize(s))
print(word_tokenize(s))
print(stemmer.stem("processes"))
Explanation: Stemming
As we could see from the results of the last assignment, our simple index doesn't handle punctuation and the difference between singular and plural versions of the same word very well. A possible solution to those issues would be to apply better tokenization and stemming. Fortunately, Python's NLTK package provides implementations of these algorithms we can use. You have to install NLTK by following these instructions.
End of explanation
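# Illustrative sketch (not part of the original notebook): chaining NLTK's
# word_tokenize with the Snowball stemmer on the example sentence above shows
# how punctuation gets split off and how inflected forms collapse to one stem.
print([stemmer.stem(token) for token in word_tokenize(s)])
print(stemmer.stem('sample'), stemmer.stem('samples'))  # both reduce to the same stem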
tf_matrix = defaultdict(Counter)
for (id, abstract) in Abstracts.items():
tf_matrix[id] = Counter(preprocess(tokenize(abstract)))
def tf(t,d):
return float(tf_matrix[d][t])
def df(t):
return float(len(inverted_index[t]))
def num_documents():
return float(len(Abstracts))
print(tf('air',16820458))
print(df('air'))
print(num_documents())
Explanation: Ranking
Another way to improve our search results is to rank them. A possible way to do this is to calculate a score for each document based on the matching terms from the query. One such scoring method is tf-idf, as explained in the lecture slides.
In order to quickly calculate the scores for a term/document combination, we'll need quick access to a couple of things:
* tf(t,d) - How often does a term occur in a document
* df(t) - In how many documents does a term occur
* N - The number of documents in our index
End of explanation
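# Illustrative sketch only (this is NOT the assignment solution, and the exact
# formula expected is the one on the lecture slides, which may differ): one
# common tf-idf variant, shown just to see how tf, df and num_documents fit together.
def example_tfidf(t, d):
    if tf(t, d) == 0 or df(t) == 0:
        return 0.0
    return tf(t, d) * log10(num_documents() / df(t))
print(example_tfidf('air', 16820458))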
# Change this code according to the task above:
from functools import reduce
def smarter_tokenize(text):
# Change this
return text.split(' ')
def smarter_preprocess(tokens):
result = []
for token in tokens:
# Change this
result.append(token.lower())
return result
def smarter_and_query(query): # Regular and_query using smarter_tokenize and smarter_preprocess
return reduce(lambda a, e: a.intersection(e), [smarter_index[term] for term in smarter_preprocess(smarter_tokenize(query))])
smarter_index = defaultdict(set)
# The code below creates an inverted index based on a subset of the documents
subset = set(Abstracts.keys()).intersection(set(range(26400000,26500000)))
for (id, abstract) in ((k, Abstracts[k]) for k in subset):
for term in smarter_preprocess(smarter_tokenize(abstract)):
smarter_index[term].add(id)
Explanation: Using these three helper functions, we can now easily calculate the tf-idf weights of a term in a document by implementing the weighting formula from the slides, which you will do in the assignments below.
Assignments
Your name: ...
Change (in the code cell below) the smarter_tokenize function to use NLTK's word_tokenize function for tokenization, and the smarter_preprocess function to perform stemming in addition to case normalization. Does smarter_and_query("air sample") return the paper 26488732? Why (not)?
Note: We are generating this index on a subset of the data, as generating an index with stemming on the entire set would take up to half an hour.
End of explanation
# Add your code here
#print(tfidf('air', 26488732))
#print(tfidf('samples', 26488732))
#print(tfidf('monkey', 26488732))
Explanation: [Write your answer text here]
Create a function tfidf(t,d) that returns the tf.idf score of term t in document d by using the tf(t,d), df(t) and num_documents() functions we defined above. The tf-idf formula can be found on the lecture slides. Test your function with the examples shown below.
You can use our old index for this task and the tasks below: You do not need to include the results from above with the smarter tokenization and preprocessing functions.
You can use the log10(n) function to calculate log<sub>10</sub>(n).
End of explanation
# Add your code here
Explanation: Create a function query(query_string), which accepts as input a single query string that could consist of one or more words, and returns or prints a list of (up to) 10 best matching documents, along with their score.
You should use tf-idf to calculate document scores based on the query, and the results should be ordered by score in descending order.
Hint: Start by copying your or_query function from mini-assignment 2, then expand that to rank the results, making use of the tfidf(t,d) function you created earlier.
Use the provided display_summary(id,extra_text) function to make the output a bit more 'search engine'-like.
End of explanation
# Add your code here
Explanation: Come up with a few example queries to run, and include the output here. Do the results make sense? Why (not)?
End of explanation |
11,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence
Step1: Review
CSPs are a special kind of search problems. Here we don't treat the space as a black box but the state has a particular form and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP State is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
Step2: The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys specify the variables and whose values specify the domains. If variables are passed as an empty list, they are extracted from the keys of the domain dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value a list of the variables it is constrained together with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of map coloring problem is that the adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes we use a custom dict defined by the UniversalDict class. The UniversalDict Class takes in a parameter which it returns as value for all the keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
Step3: For our CSP we also need to define a constraint function f(A, a, B, b). In this what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
Step4: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows to take input in the form of strings and return a Dict of the form compatible with the CSP Class.
Step5: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict, and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
Step6: NQueens
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications to the methods to suit this particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
Step7: The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill-climbs on the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs and ups, which help in recording conflicts, are explained in the docstring.
Step8: The _ init _ method takes only one parameter n the size of the problem. To create an instance we just pass the required n into the constructor.
Step9: Helper Functions
We will now implement a few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing Classes and Functions for additional book keeping. To begin we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assignment_history. We call this new class InstruCSP. This will allow us to see how the assignment evolves over time.
Step10: Next, we define make_instru which takes an instance of CSP and returns a InstruCSP instance.
Step11: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
Step12: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
Step13: Backtracking Search
For solving a CSP the main issue with Naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
Step14: Let us also check the number of assignments made.
Step15: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
Step16: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out to methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by a number of legal values left in its domain. This function, in turn, calls the nconflicts method of the CSP to return such values.
Step17: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Combining mrv and lcv makes sense because every variable has to be assigned eventually, so we face the hard variables first, while for values we only need one that works, so we try the most promising ones first.
Step18: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
Step19: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
Step20: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which return such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. This will be more clear with the example below. If this sounds confusing do not worry this is not the part of the core material and our only goal is to help you visualize how the process works.
Step21: Finally let us plot our problem. We first use the function above to obtain a step function.
Step22: Next we set the canvas size.
Step23: Finally our plot using ipywidget slider and matplotib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds upto one second for each time step.
Step24: NQueens Visualization
Just like the Graph Coloring Problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which marks a 3 in the grid at every position where there is a conflict.
Step25: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
Step26: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to a value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step.
Step27: Now let us finally repeat the above steps for min_conflicts solution.
Step28: The visualization has same features as the above. But here it also highlights the conflicts by labeling the conflicted queens with a red background. | Python Code:
from csp import *
Explanation: Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in csp.py module. Even though this notebook includes a brief summary of the main topics familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.
End of explanation
%psource CSP
Explanation: Review
CSPs are a special kind of search problems. Here we don't treat the space as a black box but the state has a particular form and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP State is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
End of explanation
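# A minimal illustrative sketch (not in the original notebook): instantiating
# CSP directly for two variables that must take different values, using the
# positional parameter order described above.
def not_equal_constraint(A, a, B, b):
    return a != b
tiny_csp = CSP(['A', 'B'],                   # variables
               {'A': [1, 2], 'B': [1, 2]},   # domains
               {'A': ['B'], 'B': ['A']},     # neighbors
               not_equal_constraint)         # constraints
print(tiny_csp.nconflicts('A', 1, {'B': 1}))  # 1 conflict: both variables equal 1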
s = UniversalDict(['R','G','B'])
s[5]
Explanation: The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys specify the variables and whose values specify the domains. If variables are passed as an empty list, they are extracted from the keys of the domain dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value a list of the variables it is constrained together with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of map coloring problem is that the adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes we use a custom dict defined by the UniversalDict class. The UniversalDict Class takes in a parameter which it returns as value for all the keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
End of explanation
%psource different_values_constraint
Explanation: For our CSP we also need to define a constraint function f(A, a, B, b). In this what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
End of explanation
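# Quick illustrative check: the constraint holds only when the two neighboring
# variables take different values.
print(different_values_constraint('WA', 'R', 'NT', 'G'))  # expected True  - different colors
print(different_values_constraint('WA', 'R', 'NT', 'R'))  # expected False - same color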
%pdoc parse_neighbors
Explanation: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows to take input in the form of strings and return a Dict of the form compatible with the CSP Class.
End of explanation
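# Illustrative example (input format assumed from the csp module): neighbors
# are given as 'Var: Neighbor1 Neighbor2; ...' and the links are made symmetric.
print(parse_neighbors('X: Y Z; Y: Z'))
# e.g. {'X': ['Y', 'Z'], 'Y': ['X', 'Z'], 'Z': ['X', 'Y']} (as a defaultdict)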
%psource MapColoringCSP
australia, usa, france
Explanation: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict, and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
End of explanation
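# Roughly how such a map-coloring CSP is built (illustrative sketch; the exact
# definition of australia in csp.py may differ slightly):
australia_example = MapColoringCSP(list('RGB'),
                                   'SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: ')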
%psource queen_constraint
Explanation: NQueens
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications to the methods to suit this particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
End of explanation
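# The check described above, written out as a sketch (queen_constraint in
# csp.py is essentially this test, with A, B as columns and a, b as rows):
def queen_constraint_sketch(A, a, B, b):
    return A == B or (a != b and A + a != B + b and A - a != B - b)
print(queen_constraint_sketch(0, 0, 1, 2))  # True: different row and diagonals
print(queen_constraint_sketch(0, 0, 1, 1))  # False: the two queens share a diagonal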
%psource NQueensCSP
Explanation: The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill-climbs on the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs and ups, which help in recording conflicts, are explained in the docstring.
End of explanation
eight_queens = NQueensCSP(8)
Explanation: The _ init _ method takes only one parameter n the size of the problem. To create an instance we just pass the required n into the constructor.
End of explanation
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assignment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assignment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assignment_history.append(copy.deepcopy(assignment))
Explanation: Helper Functions
We will now implement a few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing Classes and Functions for additional book keeping. To begin we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assignment_history. We call this new class InstruCSP. This will allow us to see how the assignment evolves over time.
End of explanation
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors,
csp.constraints)
Explanation: Next, we define make_instru which takes an instance of CSP and returns a InstruCSP instance.
End of explanation
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
Explanation: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
End of explanation
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
Explanation: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
End of explanation
result = backtracking_search(coloring_problem1)
result # A dictionary of assignments.
Explanation: Backtracking Search
For solving a CSP the main issue with Naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
End of explanation
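# Optional sanity check (assumes the CSP class exposes goal_test, as in the
# AIMA csp module): the returned assignment should be complete and consistent.
print(coloring_problem1.goal_test(result))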
coloring_problem1.nassigns
Explanation: Let us also check the number of assignments made.
End of explanation
len(coloring_problem1.assignment_history)
Explanation: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
End of explanation
%psource mrv
%psource num_legal_values
%psource CSP.nconflicts
Explanation: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out to methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by a number of legal values left in its domain. This function, in turn, calls the nconflicts method of the CSP to return such values.
End of explanation
%psource lcv
Explanation: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Combining mrv and lcv makes sense because every variable has to be assigned eventually, so we face the hard variables first, while for values we only need one that works, so we try the most promising ones first.
End of explanation
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac )
solve_simple.nassigns
solve_parameters.nassigns
Explanation: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
End of explanation
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
Explanation: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows, as it is just the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested, you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
End of explanation
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assignment_history we want to visualize.
current = instru_csp.assignment_history[iteration]
# We convert the particular assignment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.node.keys()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label:label for label in G.node}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
Explanation: The ipython widgets we will be using require the plots in the form of a step function, such that there is a graph corresponding to each value. We define make_update_step_function, which returns such a function. It takes as inputs the neighbors/graph along with an instance of the InstruCSP. This will become clearer with the example below. If this sounds confusing, do not worry; this is not part of the core material, and our only goal is to help you visualize how the process works.
End of explanation
step_func = make_update_step_function(neighbors, coloring_problem1)
Explanation: Finally let us plot our problem. We first use the function above to obtain a step function.
End of explanation
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
Explanation: Next we set the canvas size.
End of explanation
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assignment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Finally, our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to a value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step.
End of explanation
def label_queen_conflicts(assignment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assignment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row+temp_col == row+col and temp_col != col}
down_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive function supports
single parameter as input. This function
creates and return such a function by taking
in input other parameters.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assignment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step
Explanation: NQueens Visualization
Just like the Graph Coloring Problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which marks a 3 in the grid at every position where there is a conflict.
End of explanation
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
Explanation: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
End of explanation
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assignment_history)-1, step=0, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to a value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step.
End of explanation
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
Explanation: Now let us finally repeat the above steps for min_conflicts solution.
End of explanation
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assignment_history)-1, step=0, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: The visualization has the same features as the one above, but here it also highlights the conflicts by labeling the conflicted queens with a red background.
End of explanation |
11,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Template Scan Analysis
This notebook is the 2nd part of the template scan analysis. Its inputs are the output files of the template_scan_analysis.sh shell script, which is simply a little script calling the digicam-template command on a defined set of raw-data files for you, so everybody can be sure to get the same results.
Let us quickly check if the 3 files are really here
Step1: Okay now we import all the libs we are going to need.
Step2: Now we read the histograms into memory, so we have quick and easy access, also we define a few global variables like x_bin_center, y_bin_center and extent, for plotting and analysis of the histograms. We store the histograms in the H dict using the file names as keys.
Step3: To give you a feeling what we are working with in the next few cells, let us plot just one example histogram
Step4: The next function analyse_2d_histo_for_pixel takes one of these histograms we have just seen, calculates the profile (think TProfile, if you are a ROOT user) and fits a cubic spline to the profile (where we think we know it well enough).
Developer remark
Step5: The interactive function below was useful in the beginning to explore the datasets and see what the problems might be.
It grew over time, and you see it is quite long. It does however not perform any analysis task. It is just plotting results, so you can ignore it
Step6: The cell below tries to find the "best" spline for every pixel. You can see above that depending on the DAC setting, the pixel can saturate, which is visible here as a longer but flatter curve.
Other pixels look into LEDs which are comparatively dim, i.e. at low DAC settings these pixels might see no light at all, while at the highest DAC setting they see enough light to produce a nicely defined template curve.
In order to find the "best" (non-saturating) template I say
Step7: The cell below simply plots the splines for all 1296 pixels into one plot, to understand if we really need one template per pixel
Step8: And in the cell below, we can see how the pulse_SST-1M_pixel_0.dat looks in comparison to the average template we got from 1296 different pixels. I find it remarkably similar. | Python Code:
ls template_scan_dac_*.h5
Explanation: Template Scan Analysis
This notebook is the 2nd part of the template scan analysis. Its inputs are the output files of the template_scan_analysis.sh shell script, which is simply a little script calling the digicam-template command on a defined set of raw-data files for you, so everybody can be sure to get the same results.
Let us quickly check if the 3 files are really here:
End of explanation
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact
from matplotlib import colors
from glob import glob
from scipy import interp, interpolate
from scipy.interpolate import BSpline, CubicSpline
from tqdm import tqdm, trange
Explanation: Okay now we import all the libs we are going to need.
End of explanation
paths = sorted(glob('template_scan_dac_*.h5'))
H = {}
for path in paths:
with h5py.File(path) as f:
dset = f['adc_count_histo']
H[path] = dset[...]
extent = dset.attrs['extent']
x_bin_edges = np.linspace(*extent[:2], dset.shape[1]+1)
y_bin_edges = np.linspace(*extent[2:], dset.shape[2]+1)
x_bin_center = (x_bin_edges[1:] + x_bin_edges[:-1]) / 2
y_bin_center = (y_bin_edges[1:] + y_bin_edges[:-1]) / 2
Explanation: Now we read the histograms into memory, so we have quick and easy access, also we define a few global variables like x_bin_center, y_bin_center and extent, for plotting and analysis of the histograms. We store the histograms in the H dict using the file names as keys.
End of explanation
pixel_id = 133
for h in H.values():
plt.imshow(
h[pixel_id].T,
origin='bottom',
extent=extent,
norm=colors.LogNorm()
)
plt.colorbar()
plt.gca().set_aspect('auto')
plt.xlabel('time around 50% max height [ns]')
plt.ylabel('normalized amplitude')
plt.title('example 2d histogram from pixel {}'.format(pixel_id))
break
None
Explanation: To give you a feeling what we are working with in the next few cells, let us plot just one example histogram:
End of explanation
def analyse_2d_histo_for_pixel(histogram_2d):
_h = histogram_2d
N = _h.sum(axis=-1)
mode = y_bin_center[_h.argmax(axis=-1)]
mean = (_h * y_bin_center[None, :]).sum(axis=-1) / N
squared_sum = (y_bin_center[None, :] - mean[:, None])**2
std = np.sqrt((_h * squared_sum).sum(axis=-1) / (N-1))
average_std = np.nanmean(std)
# For the spline we only use those bins, where we have "enough"
# statistics. I define here "enough" as 100 entries
G = N >= 100
_x = x_bin_center[G]
_y = mean[G]
spl = CubicSpline(_x, _y)
return {
'mode': mode,
'mean': mean,
'std': std,
'N': N,
'spline': spl,
'enough_entries': G,
}
Explanation: The next function analyse_2d_histo_for_pixel takes one of these histograms we have just seen, calculates the profile (think TProfile, if you are a ROOT user) and fits a cubic spline to the profile (where we think we know it well enough).
Developer remark:
This function clearly does more than one thing, hence the general name "analyse".
I think, "mode", "mean", "std" could also be methods of a Histogram2D class, then this function basically boils down to calculating the spline, which will look much cleaner.
Also, you see this function again looks into the globals y_bin_center and x_bin_center; this is also bad, as you will see below when I analyze the combined histogram of all 1296 pixels.
End of explanation
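# A quick usage sketch (not in the original notebook): run the analysis on the
# example pixel of the first file and evaluate the fitted spline at t = 0 ns.
example_result = analyse_2d_histo_for_pixel(H[paths[0]][pixel_id])
print(int(example_result['N'].sum()), 'entries in this pixel histogram')
print('normalized amplitude at t=0 ns:', example_result['spline'](0.0))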
@interact
def plot(pid=(0, 1295)):
N = len(H)
fig, ax = plt.subplots(N+1, figsize=(12, 12), sharex=True)
splines = []
for ax_id, (path, h) in enumerate(H.items()):
result = analyse_2d_histo_for_pixel(h[pid])
splines.append((
result['spline'],
np.nanmean(result['std'])
))
G = result['enough_entries']
img = ax[ax_id].imshow(
h[pid].T,
origin='bottom',
extent=extent,
norm=colors.LogNorm()
)
plt.colorbar(img, ax=ax[ax_id])
ax[ax_id].errorbar(
x=x_bin_center[G],
y=result['mean'][G],
yerr=result['std'][G],
fmt='.',
color='red'
)
__x = np.linspace(x_bin_center.min(), x_bin_center.max(), 1000)
ax[ax_id].plot(__x, result['spline'](__x), '-', color='magenta', lw=1)
ax[ax_id].set_aspect('auto')
ax[ax_id].set_ylabel('normalized amplitude')
ax[ax_id].set_title('Path:{}'.format(path))
ax[ax_id].grid()
for spl, average_std in splines:
__x = np.linspace(x_bin_center.min(), x_bin_center.max(), 1000)
ax[-1].plot(__x, spl(__x), '-', label='avg std: {:.2f}'.format(average_std))
ax[-1].grid()
ax[-1].legend()
plt.suptitle('Pixel: {}'.format(pid))
plt.xlabel('time around 50% max height [ns]')
plt.tight_layout()
None
Explanation: The interactive function below was useful in the beginning to explore the datasets and see what the problems might be.
It grew over time, and you see it is quite long. It does however not perform any analysis task. It is just plotting results, so you can ignore it
End of explanation
splines = []
for pid in trange(1296):
sub_splines = {}
for path, h in H.items():
result = analyse_2d_histo_for_pixel(h[pid])
max_amplitude = result['spline'](np.linspace(0, 20, 50)).max()
sub_splines[(max_amplitude, np.nanmean(result['std']))] = result['spline']
keys = list(sub_splines.keys())
average_stds = np.array([k[-1] for k in keys])
max_amplitudes = np.array([k[0] for k in keys])
if (average_stds < 0.05).all():
splines.append(sub_splines[keys[np.argmax(max_amplitudes)]])
else:
splines.append(sub_splines[keys[np.argmin(average_stds)]])
Explanation: The cell below tries to find the "best" spline for every pixel. You can see above that depending on the DAC setting, the pixel can saturate, which is visible here as a longer but flatter curve.
Other pixels look into LEDs which are comparatively dim, i.e. at low DAC settings these pixels might see no light at all, while at the highest DAC setting they see enough light to produce a nicely defined template curve.
In order to find the "best" (non-saturating) template I say:
* if all profiles have very low std deviations, then take the highest template.
* if not all profiles have low std deviations, then take the one with the smallest errors.
I think this method is not perfect, but at the moment, I have no better idea.
End of explanation
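# Optional inspection (illustrative sketch): peak amplitude of each chosen
# template, evaluated on the same coarse grid used in the selection loop above.
peak_amplitudes = np.array([spl(np.linspace(0, 20, 50)).max() for spl in splines])
print('median peak amplitude of the chosen templates:', np.median(peak_amplitudes))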
x = []
y = []
_x = np.linspace(x_bin_center.min(), x_bin_center.max(), 1000)
for spl in splines:
x.append(_x)
y.append(spl(_x))
x = np.concatenate(x)
y = np.concatenate(y)
plt.figure(figsize=(18, 12))
histogram_2d, xe, ye, _ = plt.hist2d(
x,
y,
bins=(501, 501),
range=[extent[:2], extent[2:]],
norm=colors.LogNorm()
)
plt.grid()
plt.colorbar()
None
_h = histogram_2d
xc = (xe[1:] + xe[:-1]) / 2
yc = (ye[1:] + ye[:-1]) / 2
N = _h.sum(axis=-1)
mode = yc[_h.argmax(axis=-1)]
mean = (_h * yc[None, :]).sum(axis=-1) / N
squared_sum = (yc[None, :] - mean[:, None])**2
std = np.sqrt((_h * squared_sum).sum(axis=-1) / (N-1))
average_std = np.nanmean(std)
# For the spline we only use those bins, where we have "enough"
# statistics. I define here "enough" as 100 entries
G = N >= 100
_x = xc[G]
_y = mean[G]
spl = CubicSpline(_x, _y)
plt.errorbar(
x=xc,
y=mean,
yerr=std / np.sqrt(1296),
fmt='.',
color='red'
)
Explanation: The cell below simply plots the splines for all 1296 pixels into one plot, to understand if we really need one template per pixel
End of explanation
from digicampipe.utils.utils import get_pulse_shape
plt.figure(figsize=(14, 8))
plt.plot(xc, mean, label='mean of 1296 templates')
plt.plot(
xc,
get_pulse_shape(xc, -7.5, np.nanmax(mean), 0),
label='pulse_SST-1M_pixel_0.dat'
)
plt.xlabel('time around 50% max height [ns]')
plt.ylabel('normalized amplitude')
plt.legend(loc='upper right');
Explanation: And in the cell below, we can see how the pulse_SST-1M_pixel_0.dat looks in comparison to the average template we got from 1296 different pixels. I find it remarkably similar.
End of explanation |
11,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exports nodes and edges from tweets (Retweets, Mentions, or Replies) [CSV]
Exports nodes and edges from tweets (either from retweets or mentions) in json format that can be exported from SFM, and saves it in a file format compatible with various social network graph tools such as Gephi, Cytoscape, Kumu, etc. These are for directed graphs.
Step1: 1. Export edges from Retweets, Mentions, or Replies
Run one of three blocks of codes below for your purpose.
Step2: 2. Leave only the tweets whose strength level >= user specified level (directed)
Step3: 3. Export nodes
Step4: 4. Export nodes and edges to csv files | Python Code:
import sys
import json
import re
import numpy as np
from datetime import datetime
import pandas as pd
tweetfile = '/home/soominpark/sfmproject/Work/Network Graphs/food_security.csv'
tweets = pd.read_csv(tweetfile)
Explanation: Exports nodes and edges from tweets (Retweets, Mentions, or Replies) [CSV]
Exports nodes and edges from tweets (either from retweets or mentions) in json format that can be exported from SFM, and saves it in a file format compatible with various social network graph tools such as Gephi, Cytoscape, Kumu, etc. These are for directed graphs.
End of explanation
# 1. Export edges from Retweets
retweets = tweets[tweets['is_retweet'] == 'Yes'].copy()  # .copy() avoids pandas' SettingWithCopyWarning on the next line
retweets['original_twitter'] = retweets['text'].str.extract(r'RT @([a-zA-Z0-9]\w{0,}):', expand=True)
edges = retweets[['screen_name', 'original_twitter','created_at']]
edges.columns = ['Source', 'Target', 'Strength']
# 2. Export edges from Mentions
mentions = tweets[tweets['mentions'].notnull()]
edges = pd.DataFrame(columns=('Source','Target','Strength'))
for index, row in mentions.iterrows():
mention_list = row['mentions'].split(", ")
for mention in mention_list:
edges = edges.append(pd.DataFrame([[row['screen_name'],
mention,
row['created_at']]]
, columns=('Source','Target','Strength')), ignore_index=True)
# 3. Export edges from Replies
replies = tweets[tweets['in_reply_to_screen_name'].notnull()]
edges = replies[['screen_name', 'in_reply_to_screen_name','created_at']]
edges.columns = ['Source', 'Target', 'Strength']
Explanation: 1. Export edges from Retweets, Mentions, or Replies
Run one of three blocks of codes below for your purpose.
End of explanation
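# Optional convenience wrapper (illustrative sketch, not part of the original
# notebook): build the edge list for one relationship type instead of running
# one of the blocks above by hand. Column names are taken from the code above;
# the 'mention' case still needs the iterrows loop shown earlier.
def build_edges(tweets, kind='retweet'):
    if kind == 'retweet':
        rt = tweets[tweets['is_retweet'] == 'Yes'].copy()
        rt['original_twitter'] = rt['text'].str.extract(r'RT @([a-zA-Z0-9]\w{0,}):', expand=True)
        out = rt[['screen_name', 'original_twitter', 'created_at']].copy()
    elif kind == 'reply':
        out = tweets[tweets['in_reply_to_screen_name'].notnull()][['screen_name', 'in_reply_to_screen_name', 'created_at']].copy()
    else:
        raise ValueError("kind must be 'retweet' or 'reply'")
    out.columns = ['Source', 'Target', 'Strength']
    return out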
strengthLevel = 3 # Network connection strength level: the number of times in total each of the tweeters responded to or mentioned the other.
# If you have 1 as the level, then all tweeters who mentioned or replied to another at least once will be displayed. But if you have 5, only those who have mentioned or responded to a particular tweeter at least 5 times will be displayed, which means that only the strongest bonds are shown.
edges2 = edges.groupby(['Source','Target'])['Strength'].count()
edges2 = edges2.reset_index()
edges2 = edges2[edges2['Strength'] >= strengthLevel]
Explanation: 2. Leave only the tweets whose strength level >= user specified level (directed)
End of explanation
# Export nodes from the edges and add node attributes for both Sources and Targets.
users = tweets[['screen_name','followers_count','friends_count']]
users = users.sort_values(['screen_name','followers_count'], ascending=[True, False])
users = users.drop_duplicates(['screen_name'], keep='first')
ids = edges2['Source'].append(edges2['Target']).to_frame()
ids['Label'] = ids
ids.columns = ['screen_name', 'Label']
ids = ids.drop_duplicates(['screen_name'], keep='first')
nodes = pd.merge(ids, users, on='screen_name', how='left')
print(nodes.shape)
print(edges2.shape)
Explanation: 3. Export nodes
End of explanation
# change column names for Kumu import (Run this when using Kumu)
edges2.columns = ['From','To','Strength']
# Print nodes to check
nodes.head(3)
# Print edges to check
edges2.head(3)
# Export nodes and edges to csv files
nodes.to_csv('nodes.csv', encoding='utf-8', index=False)
edges2.to_csv('edges.csv', encoding='utf-8', index=False)
Explanation: 4. Export nodes and edges to csv files
End of explanation |
11,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural Language Processing in a Kaggle Competition
Step1: Now set up our function. This will clean all of the reviews for us.
Step2: Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the notebook yourself, make sure you obtain the data from Kaggle. You will need a Kaggle account in order to access it.
Step3: Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative.
Step4: Now we need to clean both the train and test data to get it ready for the next part of our program.
Step5: TF-IDF Vectorization
The next thing we are going to do is make TF-IDF (term frequency-interdocument frequency) vectors of our reviews. In case you are not familiar with what this is doing, essentially we are going to evaluate how often a certain term occurs in a review, but normalize this somewhat by how many reviews a certain term also occurs in. Wikipedia has an explanation that is sufficient if you want further information.
This can be a great technique for helping to determine which words (or ngrams of words) will make good features to classify a review as positive or negative.
To do this, we are going to use the TFIDF vectorizer from scikit-learn. Then, decide what settings to use. The documentation for the TFIDF class is available here.
In the case of the example code on Kaggle, they decided to remove all stop words, along with ngrams up to a size of two (you could use more but this will require a LOT of memory, so be careful which settings you use!)
Step6: Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer!
Step7: Making Our Classifiers
Because we are working with text data, and we just made feature vectors of every word (that isn't a stop word of course) in all of the reviews, we are going to have sparse matrices to deal with that are quite large in size. Just to show you what I mean, let's examine the shape of our training set.
Step8: That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can't work with sparse matrices anyway yet in scikit-learn). That means we need something lightweight and fast that scales to many dimensions well. Some possible candidates are
Step9: You can investigate which parameters did the best and what scores they received by looking at the model_LR object.
Step10: Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC_AUC score. Otherwise, let's move on to the next classifier, Naive Bayes.
Unlike Logistic Regression, Naive Bayes doesn't have a regularization parameter to tune. You just have to choose which "flavor" of Naive Bayes to use.
According to the documentation on Naive Bayes from scikit-learn, Multinomial is our best version to use, since we no longer have just a 1 or 0 for a word feature
Step11: Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption but with the ngrams we included that probably isn't always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples.
Why don't we see how Naive Bayes does (at least in a 20 fold CV comparison) so we have a rough idea of how well it performs compared to our Logistic Regression classifier?
You could use GridSearch again, but that seems like overkill. There is a simpler method we can import from scikit-learn for this task.
Step12: Well, it wasn't quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do!
One last classifier to try is the SGD classifier, which comes in handy when you need speed on a really large number of training examples/features.
Which machine learning algorithm it ends up using depends on what you set for the loss function. If we chose loss = 'log', it would essentially be identical to our previous logistic regression model. We want to try something different, but we also want a loss option that includes probabilities. We need those probabilities if we are going to be able to calculate the area under a ROC curve. Looking at the documentation, it seems a 'modified_huber' loss would do the trick! This will be a Support Vector Machine that uses a linear kernel.
Step13: Again, similar to the Logistic Regression model, we can see which parameter did the best.
Step14: Looks like this beat our previous Logistic Regression model by a very small amount. Now that we have our three models, we can work on submitting our final scores in the proper format. It was found that submitting predicted probabilities of each score instead of the final predicted score worked better for evaluation from the contest participants, so we want to output this instead.
First, do our Logistic Regression submission.
Step15: Repeat this with the other two. | Python Code:
import re
from bs4 import BeautifulSoup
Explanation: Natural Language Processing in a Kaggle Competition: Movie Reviews
<img src='Movie_thtr.jpg', width = 800, height = 600>
Source
I decided to try playing around with a Kaggle competition. In this case, I entered the "When bag of words meets bags of popcorn" contest. This contest isn't for money; it is just a way to learn about various machine learning approaches.
The competition was trying to showcase Google's Word2Vec. This essentially uses deep learning to find features in text that can be used to help in classification tasks. Specifically, in the case of this contest, the goal involves labeling the sentiment of a movie review from IMDB. Ratings were on a 10 point scale, and any review of 7 or greater was considered a positive movie review.
Originally, I was going to try out Word2Vec and train it on unlabeled reviews, but then one of the competitors pointed out that you could simply use a less complicated classifier to do this and still get a good result.
I decided to take this basic inspiration and try a few various classifiers to see what I could come up with. The highest my score received was 6th place back in December of 2014, but then people started using ensemble methods to combine various models together and get a perfect score after a lot of fine tuning with the parameters of the ensemble weights.
Hopefully, this notebook will help you understand some basic NLP (Natural Language Processing) techniques, along with some tips on using scikit-learn to make your classification models.
Cleaning the Reviews
The first thing we need to do is create a simple function that will clean the reviews into a format we can use. We just want the raw text, not all of the other associated HTML, symbols, or other junk.
We will need a couple of very nice libraries for this task: BeautifulSoup for taking care of anything HTML related and re for regular expressions.
End of explanation
def review_to_wordlist(review):
'''
Meant for converting each of the IMDB reviews into a list of words.
'''
# First remove the HTML.
review_text = BeautifulSoup(review).get_text()
# Use regular expressions to only include words.
review_text = re.sub("[^a-zA-Z]"," ", review_text)
# Convert words to lower case and split them into separate words.
words = review_text.lower().split()
# Return a list of words
return(words)
Explanation: Now set up our function. This will clean all of the reviews for us.
End of explanation
import pandas as pd
train = pd.read_csv('labeledTrainData.tsv', header=0,
delimiter="\t", quoting=3)
test = pd.read_csv('testData.tsv', header=0, delimiter="\t",
quoting=3 )
# Import both the training and test data.
Explanation: Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the notebook yourself, make sure you obtain the data from Kaggle. You will need a Kaggle account in order to access it.
End of explanation
y_train = train['sentiment']
Explanation: Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative.
End of explanation
traindata = []
for i in xrange(0,len(train['review'])):
traindata.append(" ".join(review_to_wordlist(train['review'][i])))
testdata = []
for i in xrange(0,len(test['review'])):
testdata.append(" ".join(review_to_wordlist(test['review'][i])))
Explanation: Now we need to clean both the train and test data to get it ready for the next part of our program.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer as TFIV
tfv = TFIV(min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1,
stop_words = 'english')
Explanation: TF-IDF Vectorization
The next thing we are going to do is make TF-IDF (term frequency-interdocument frequency) vectors of our reviews. In case you are not familiar with what this is doing, essentially we are going to evaluate how often a certain term occurs in a review, but normalize this somewhat by how many reviews a certain term also occurs in. Wikipedia has an explanation that is sufficient if you want further information.
This can be a great technique for helping to determine which words (or ngrams of words) will make good features to classify a review as positive or negative.
To do this, we are going to use the TFIDF vectorizer from scikit-learn. Then, decide what settings to use. The documentation for the TFIDF class is available here.
In the case of the example code on Kaggle, they decided to remove all stop words, along with ngrams up to a size of two (you could use more but this will require a LOT of memory, so be careful which settings you use!)
End of explanation
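To make the idea concrete, here is a small illustrative sketch on a made-up three-review corpus (the toy documents and variable names below are invented for this example and are not part of the original contest code):
from sklearn.feature_extraction.text import TfidfVectorizer
# Three tiny fake "reviews" just to show the shape of the output.
toy_docs = ["good movie", "bad movie", "good acting bad plot"]
toy_tfv = TfidfVectorizer()
toy_matrix = toy_tfv.fit_transform(toy_docs)  # sparse matrix: 3 documents x vocabulary size
print(toy_tfv.vocabulary_)                    # mapping from each term to its column index
print(toy_matrix.toarray().round(2))          # rarer terms receive higher weights than common ones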
X_all = traindata + testdata # Combine both to fit the TFIDF vectorization.
lentrain = len(traindata)
tfv.fit(X_all) # This is the slow part!
X_all = tfv.transform(X_all)
X = X_all[:lentrain] # Separate back into training and test sets.
X_test = X_all[lentrain:]
Explanation: Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer!
End of explanation
X.shape
Explanation: Making Our Classifiers
Because we are working with text data, and we just made feature vectors of every word (that isn't a stop word of course) in all of the reviews, we are going to have sparse matrices to deal with that are quite large in size. Just to show you what I mean, let's examine the shape of our training set.
End of explanation
from sklearn.linear_model import LogisticRegression as LR
from sklearn.grid_search import GridSearchCV
grid_values = {'C':[30]} # Decide which settings you want for the grid search.
model_LR = GridSearchCV(LR(penalty = 'L2', dual = True, random_state = 0),
grid_values, scoring = 'roc_auc', cv = 20)
# Try to set the scoring on what the contest is asking for.
# The contest says scoring is for area under the ROC curve, so use this.
model_LR.fit(X,y_train) # Fit the model.
Explanation: That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can't work with sparse matrices anyway yet in scikit-learn). That means we need something lightweight and fast that scales to many dimensions well. Some possible candidates are:
Naive Bayes
Logistic Regression
SGD Classifier (utilizes Stochastic Gradient Descent for much faster runtime)
Let's just try all three as submissions to Kaggle and see how they perform.
First up: Logistic Regression (see the scikit-learn documentation here).
While in theory L1 regularization should work well because p>>n (many more features than training examples), I actually found through a lot of testing that L2 regularization got better results. You could set up your own trials using scikit-learn's built-in GridSearch class, which makes things a lot easier to try. I found through my testing that using a parameter C of 30 got the best results.
End of explanation
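If you want to run a broader search yourself, the same pattern works with a longer list of C values. The settings below are illustrative only; they are not the values used for the submission above:
wide_grid_values = {'C': [0.1, 1, 3, 10, 30, 100]}  # hypothetical, wider search space
model_LR_wide = GridSearchCV(LR(penalty = 'L2', dual = True, random_state = 0),
                             wide_grid_values, scoring = 'roc_auc', cv = 20)
model_LR_wide.fit(X, y_train)  # slower, since every C value is cross-validated 20 times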
model_LR.grid_scores_
model_LR.best_estimator_
Explanation: You can investigate which parameters did the best and what scores they received by looking at the model_LR object.
End of explanation
from sklearn.naive_bayes import MultinomialNB as MNB
model_NB = MNB()
model_NB.fit(X, y_train)
Explanation: Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC_AUC score. Otherwise, let's move on to the next classifier, Naive Bayes.
Unlike Logistic Regression, Naive Bayes doesn't have a regularization parameter to tune. You just have to choose which "flavor" of Naive Bayes to use.
According to the documentation on Naive Bayes from scikit-learn, Multinomial is our best version to use, since we no longer have just a 1 or 0 for a word feature: it has been normalized by TF-IDF, so our values will be BETWEEN 0 and 1 (most of the time, although having a few TF-IDF scores exceed 1 is technically possible). If we were just looking at word occurrence vectors (with no counting), Bernoulli would have been a better fit since it is based on binary values.
Let's make our Multinomial Naive Bayes object, and train it.
End of explanation
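For contrast, the Bernoulli flavor mentioned above expects binary word-occurrence features. A minimal sketch (not part of the original post) is shown here; BernoulliNB can binarize the TF-IDF matrix itself through its binarize threshold:
from sklearn.naive_bayes import BernoulliNB
model_BNB = BernoulliNB(binarize=0.0)  # any nonzero TF-IDF weight is treated as "word present"
model_BNB.fit(X, y_train)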
from sklearn.cross_validation import cross_val_score
import numpy as np
print "20 Fold CV Score for Multinomial Naive Bayes: ", np.mean(cross_val_score
(model_NB, X, y_train, cv=20, scoring='roc_auc'))
# This will give us a 20-fold cross validation score that looks at ROC_AUC so we can compare with Logistic Regression.
Explanation: Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption but with the ngrams we included that probably isn't always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples.
Why don't we see how Naive Bayes does (at least in a 20 fold CV comparison) so we have a rough idea of how well it performs compared to our Logistic Regression classifier?
You could use GridSearch again, but that seems like overkill. There is a simpler method we can import from scikit-learn for this task.
End of explanation
from sklearn.linear_model import SGDClassifier as SGD
sgd_params = {'alpha': [0.00006, 0.00007, 0.00008, 0.0001, 0.0005]} # Regularization parameter
model_SGD = GridSearchCV(SGD(random_state = 0, shuffle = True, loss = 'modified_huber'),
sgd_params, scoring = 'roc_auc', cv = 20) # Find out which regularization parameter works the best.
model_SGD.fit(X, y_train) # Fit the model.
Explanation: Well, it wasn't quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do!
One last classifier to try is the SGD classifier, which comes in handy when you need speed on a really large number of training examples/features.
Which machine learning algorithm it ends up using depends on what you set for the loss function. If we chose loss = 'log', it would essentially be identical to our previous logistic regression model. We want to try something different, but we also want a loss option that includes probabilities. We need those probabilities if we are going to be able to calculate the area under a ROC curve. Looking at the documentation, it seems a 'modified_huber' loss would do the trick! This will be a Support Vector Machine that uses a linear kernel.
End of explanation
model_SGD.grid_scores_
Explanation: Again, similar to the Logistic Regression model, we can see which parameter did the best.
End of explanation
LR_result = model_LR.predict_proba(X_test)[:,1] # We only need the probabilities that the movie review was a 7 or greater.
LR_output = pd.DataFrame(data={"id":test["id"], "sentiment":LR_result}) # Create our dataframe that will be written.
LR_output.to_csv('Logistic_Reg_Proj2.csv', index=False, quoting=3) # Get the .csv file we will submit to Kaggle.
Explanation: Looks like this beat our previous Logistic Regression model by a very small amount. Now that we have our three models, we can work on submitting our final scores in the proper format. It was found that submitting predicted probabilities of each score instead of the final predicted score worked better for evaluation from the contest participants, so we want to output this instead.
First, do our Logistic Regression submission.
End of explanation
# Repeat this for Multinomial Naive Bayes
MNB_result = model_NB.predict_proba(X_test)[:,1]
MNB_output = pd.DataFrame(data={"id":test["id"], "sentiment":MNB_result})
MNB_output.to_csv('MNB_Proj2.csv', index = False, quoting = 3)
# Last, do the Stochastic Gradient Descent model with modified Huber loss.
SGD_result = model_SGD.predict_proba(X_test)[:,1]
SGD_output = pd.DataFrame(data={"id":test["id"], "sentiment":SGD_result})
SGD_output.to_csv('SGD_Proj2.csv', index = False, quoting = 3)
Explanation: Repeat this with the other two.
End of explanation |
11,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Why Objects?
Provide modularity and reuse through hierarchical structures
Object oriented programming is a different way of thinking.
Programming With Objects
Step1: Initial concepts
An object is a container of data (attributes) and code (methods)
A class is a template for creating objects
Reuse is provided by
Step2: Attributes
Step3: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator.
Step4: EXERCISE
Step7: A class diagram provides a more compact representation of a class. There are three sections.
- Class name
- Attributes
- Methods
Instance methods
- functions associated with the objects constructed for a class
- provide a way to transform data in objects
- use instance attributes (references to variables beginning with "self.")
Step9: EXERCISE
Step10: Exercise
Step11: Subclasses can have their own methods.
Exercise
Step12: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object.
Interaction Diagram for the ATM System
An interaction diagram specifies how components interact to achieve a use case.
Interactions are from one object to another object, indicating that the first object calls a method in the second object.
Rules for drawing lines in an interaction diagram
Step13: Look at Objects/ATMDiagrams.pdf for a solution.
What Else in Design?
Other diagrams | Python Code:
from IPython.display import Image
Image(filename='Classes_vs_Objects.png')
Explanation: Why Objects?
Provide modularity and reuse through hierarchical structures
Object oriented programming is a different way of thinking.
Programming With Objects
End of explanation
# Defining a Car class
class Car(object):
pass
car = Car()
Explanation: Initial concepts
- An object is a container of data (attributes) and code (methods)
- A class is a template for creating objects
Reuse is provided by:
- reusing the same class to create many objects
- "inheriting" data and code from other classes
End of explanation
from IPython.display import Image
Image(filename='ClassAttributes.png')
Explanation: Attributes
End of explanation
class Car(object):
# The following method is called when the class
    # is created or "constructed". The variable "self.x" refers
# to the variable "x" in a created object.
def __init__(self, color, car_type, speed):
self.color = color
self.car_type = car_type
self.speed = speed
car1 = Car("blue", "sedan", "very slow")
car2 = Car("red", "sedan", "not so slow")
print(car1.speed, car2.speed)
class Car(object):
# The following method is called when the class
    # is created or "constructed". The variable "self.x" refers
# to the variable "x" in a created object.
def __init__(self, color, car_type, speed, sunroof=True):
self.color = color
self.car_type = car_type
if isinstance(speed, int):
self.speed = speed
else:
raise ValueError("Bad speed value.")
self.sunroof = sunroof
car = Car("blue", "sedan", 100)
# Creating an object for a class with arguments in the __init__ method
car = Car("Blue", "HatchBack", 100)
car.color
# Creating an object for a class with arguments in the __init__ method
joe_car = Car("Blue", "Sedan", 100)
dave_car = Car("Red", "Sports", 150)
print ("Type of joe_car is %s. Type of dave_car is %s"% (type(joe_car), type(dave_car)))
# Accessed instance attributes
joe_car = Car("Blue", "Sedan", 100)
print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed)))
Explanation: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator.
End of explanation
from IPython.display import Image
Image(filename='InstanceMethods.png')
#Class diagram
from IPython.display import Image
Image(filename='SingleClassDiagram.png', width=200, height=200)
Explanation: EXERCISE: Change the constructor for Car to include the attribute "doors".
Instance Methods
End of explanation
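One possible sketch for the exercise above (added for illustration; the notebook itself leaves the solution to the reader):
class Car(object):
    def __init__(self, color, car_type, speed, doors=4):
        self.color = color
        self.car_type = car_type
        self.speed = speed
        self.doors = doors  # the new attribute asked for in the exercise

car = Car("Blue", "Sedan", 100, doors=2)
print(car.doors)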
class Car(object):
def __init__(self, color, car_type, speed):
        """
        :param str color:
        :param str car_type:
        :param int speed:
        """
self.color = color
self.car_type = car_type
self.speed = speed
def start(self):
print ("%s %s started!" % (self.color, self.car_type))
def stop(self):
pass
def turn(self, direction):
        """
        :param str direction: left or right
        """
pass
car = Car("Blue", "Sedan", 100)
car.start()
Explanation: A class diagram provides a more compact representation of a class. There are three sections.
- Class name
- Attributes
- Methods
Instance methods
- functions associated with the objects constructed for a class
- provide a way to transform data in objects
- use instance attributes (references to variables beginning with "self.")
End of explanation
from IPython.display import Image
Image(filename='SimpleClassHierarchy.png', width=400, height=400)
# Code for inheritance
class Sedan(Car):
# Sedan inherits from car
def __init__(self, color, speed):
        """
        :param str color:
        :param int speed:
        """
super().__init__(color, "Sedan", speed)
def play_cd(self):
print ("Playing cd in %s sedan" % self.color)
sedan = Sedan("Yellow", 1e6)
sedan.color
sedan.car_type
sedan.car_type
joe_car = Sedan("Blue", 100)
print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed)))
Explanation: EXERCISE: Implement the stop and turn methods. Run the methods.
Inheritance
Inheritance is a common way that classes reuse data and code from other classes. A child class or derived class gets attributes and methods from its parent class.
Programmatically:
- Specify inheritance in the class statement
- Constructors for a derived class (the class that inherits) have access to the constructor of the parent class.
Inheritance is represented in diagrams as an arrow from the child class to its parent class.
End of explanation
from IPython.display import Image
Image(filename='ClassInheritance.png', width=400, height=400)
Explanation: Exercise: Implement SportsCar and create dave_car from SportsCar. Print attributes of dave_car.
End of explanation
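One possible sketch for this exercise (illustrative only, not the notebook's official solution):
class SportsCar(Car):
    # SportsCar inherits from Car, just like Sedan does
    def __init__(self, color, speed):
        super().__init__(color, "Sports", speed)
    def play_bluetooth(self):
        print ("Playing bluetooth in %s sports car" % self.color)

dave_car = SportsCar("Red", 150)
print ("Type of dave_car has (color, type, speed)=%s." % str((dave_car.color, dave_car.car_type, dave_car.speed)))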
from IPython.display import Image
Image(filename='ATMClassDiagram.png', width=400, height=400)
Explanation: Subclasses can have their own methods.
Exercise: Add the play_cd() to Sedan and play_bluetooth() method to SportsCar. Construct a test to run these methods.
What Else?
Class attributes
Class methods
Object Oriented Design
A design methodology must specify:
- Components: What they do and how to build them
- Interactions: How the components interact to implement use cases
Object oriented design:
- Components are specified by class diagrams.
- Interactions are specified by interaction diagrams.
Class diagram for the ATM system
End of explanation
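Before moving on to the ATM design, here is a brief sketch (added for illustration; the Fleet class is invented and not part of the notebook) of the "class attributes" and "class methods" mentioned under "What Else?":
class Fleet(object):
    wheels_per_car = 4            # class attribute: shared by every Fleet instance
    def __init__(self, n_cars):
        self.n_cars = n_cars      # instance attribute: specific to one object
    @classmethod
    def empty(cls):
        # class method: receives the class itself and builds a new instance
        return cls(0)

print(Fleet(3).wheels_per_car)
print(Fleet.empty().n_cars)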
from IPython.display import Image
Image(filename='ATMAuthentication.png', width=800, height=800)
Explanation: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object.
Interaction Diagram for the ATM System
An interaction diagram specifies how components interact to achieve a use case.
Interactions are from one object to another object, indicating that the first object calls a method in the second object.
Rules for drawing lines in an interaction diagram:
- The calling object must know about the called object.
- The called object must have the method invoked by the calling object.
End of explanation
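A minimal sketch of the "has-a" relationship and calling rules described above (the class and method bodies here are invented for illustration and are not the notebook's ATM implementation):
class ATMInput(object):
    def get_pin(self):
        return "1234"                 # placeholder value for the sketch

class Controller(object):
    def __init__(self):
        self.atm_input = ATMInput()   # Controller has-a ATMInput
    def authenticate(self, pin):
        # The calling object (Controller) knows about atm_input, and it
        # invokes a method that ATMInput actually defines.
        return pin == self.atm_input.get_pin()

print(Controller().authenticate("1234"))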
from IPython.display import Image
Image(filename='SciSheetsCoreClasses.png', width=300, height=30)
Explanation: Look at Objects/ATMDiagrams.pdf for a solution.
What Else in Design?
Other diagrams: state diagrams, package diagrams, ...
Object oriented design patterns
Complex Example of Class Hierarchy
End of explanation |
11,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scikit-Learn 朴素贝叶斯分类
朴素贝叶斯分类是机器学习中常见的分类方法之一,本节将带大家深入学习一下 Scikit-Learn 中朴素贝叶斯分类工具包的相关使用方法。
首先,我们导入相关的库,并构建第一个示例数据集:
Step1: Scikit-Learn 中的 make_blobs 方法常被用来生成聚类算法的测试数据,直观地说,make_blobs 会根据用户指定的特征数量、中心点数量、范围等来生成几类数据,这些数据可用于测试聚类算法的效果。接下来,我们导入朴素贝叶斯分类函数 GaussianNB ,构建并拟合模型:
Step2: 为了更清晰直观地展示模型效果,我们构建了另一组随机样本,用于观察朴素贝叶斯分类模型在二维空间中的分类边界,在这里,我们可以调用 predict 函数,来输出模型对样本点的判断结果:
Step3: 我们在分类中看到一个略微弯曲的边界 - 一般来说,高斯朴素贝叶斯的边界是二次的。
这个贝叶斯形式的一个好处是,它自然支持概率分类,我们可以使用predict_proba方法计算:
Step4: 这些列分别提供了第一个和第二个标签的后验概率。 如果您正在寻找分类中的不确定性估计,这样的贝叶斯方法可能是有用的方法。
当然,最终的分类只会与产生它的模型假设一样好,这就是高斯朴素的贝叶斯经常不会产生优秀结果的原因。 然而,在许多情况下,特别是随着特征数量的增加,这一假设并不足以阻止高斯朴素贝叶斯成为有用的方法。
多项式朴素贝叶斯
刚刚描述的高斯假设,绝不是唯一的简单假设,可以用于指定每个标签的生成分布。 另一个有用的例子是多项式朴素贝叶斯,其中假设特征从简单的多项式分布生成。 多项式分布描述了在多个类别中观察计数的概率,因此多项朴素贝叶斯最适合于表示计数或计数率的特征。
这个想法与以前一样,除了我们不用最佳拟合高斯模型,而用最合适的多项式分布,对数据分布建模。
示例:文本分类
经常使用多项朴素贝叶斯的一个地方是文本分类,其特征与要分类的文档中的字数或频率有关。 我们在特征工程中讨论了这些特征的提取; 在这里,我们将使用 20 个新闻组语料库中的稀疏字数特征,来展示我们如何将这些简短文档分类。
让我们下载数据并查看目标名称:
Step5: 为了简化,我们选择这些分类中的一些,之后下载测试集。
Step6: 数据样本呈如下形式,包含了一篇网络消息的来源、机构等诸多信息:
Step7: 为了将这些数据用于机器学习,我们需要将每个字符串的内容转换为数值向量。 为此,我们将使用 TF-IDF 向量化器(在特征工程中讨论),并创建一个流水线,将其附加到多项朴素贝叶斯分类器:
Step8: 使用这个流水线,我们可以将模型用于训练数据,并测试数据的预测标签。
Step9: 在我们已经预测了测试数据的标签,我们可以评估它们来了解估计器的性能。 例如,这里是测试数据的真实和预测标签之间的混淆矩阵:
Step10: 显然,即使这个非常简单的分类器,也可以成功将空间话题与电脑话题分开,但是在宗教话题和基督教话题之间会混淆。 这是预期的混乱。
这里非常酷,我们现在有了工具,用于确定任何字符串的分类,使用这个流水线的predict()方法。 这是一个简便的工具函数,它返回单个字符串的预测:
Step11: 让我们试试看: | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs
X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');
Explanation: Naive Bayes Classification with Scikit-Learn
Naive Bayes classification is one of the most common classification methods in machine learning. In this section we take a closer look at how to use the Naive Bayes classification tools that Scikit-Learn provides.
First, we import the relevant libraries and build our first example dataset:
End of explanation
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X, y);
Explanation: The make_blobs function in Scikit-Learn is commonly used to generate test data for clustering algorithms. Intuitively, make_blobs produces several clusters of points according to the number of features, number of centers, and value range specified by the user, and the resulting data can be used to check how well a clustering algorithm performs. Next, we import the Gaussian Naive Bayes classifier GaussianNB, then build and fit the model:
End of explanation
rng = np.random.RandomState(0)
Xnew = [-6, -14] + [14, 18] * rng.rand(2000, 2)
ynew = model.predict(Xnew)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu')
lim = plt.axis()
plt.scatter(Xnew[:, 0], Xnew[:, 1], c=ynew, s=20, cmap='RdBu', alpha=0.1)
plt.axis(lim);
Explanation: To show the model's behaviour more clearly and intuitively, we generate another set of random samples and use them to observe the decision boundary of the Naive Bayes classifier in two-dimensional space. Here we can call the predict method to output the model's judgement for each sample point:
End of explanation
yprob = model.predict_proba(Xnew)
yprob[-8:].round(2)
Explanation: We see a slightly curved boundary in the classifications - in general, the boundary produced by Gaussian naive Bayes is quadratic.
A nice piece of this Bayesian formalism is that it naturally supports probabilistic classification, which we can compute using the predict_proba method:
End of explanation
from sklearn.datasets import fetch_20newsgroups
data = fetch_20newsgroups()
data.target_names
Explanation: The columns give the posterior probabilities of the first and second label, respectively. If you are looking for estimates of uncertainty in your classification, Bayesian approaches like this can be a useful method.
Of course, the final classification will only be as good as the model assumptions that lead to it, which is why Gaussian naive Bayes often does not produce outstanding results. Still, in many cases - especially as the number of features becomes large - this assumption is not detrimental enough to prevent Gaussian naive Bayes from being a useful method.
Multinomial Naive Bayes
The Gaussian assumption just described is by no means the only simple assumption that could be used to specify the generative distribution for each label. Another useful example is multinomial naive Bayes, where the features are assumed to be generated from a simple multinomial distribution. The multinomial distribution describes the probability of observing counts among a number of categories, so multinomial naive Bayes is most appropriate for features that represent counts or count rates.
The idea is precisely the same as before, except that instead of modeling the data distribution with the best-fit Gaussian, we model it with the best-fit multinomial distribution.
Example: Classifying Text
One place where multinomial naive Bayes is often used is in text classification, where the features are related to word counts or frequencies within the documents to be classified. We discussed the extraction of such features in the feature engineering section; here we will use the sparse word-count features from the 20 Newsgroups corpus to show how we might classify these short documents.
Let's download the data and take a look at the target names:
End of explanation
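To make the count-based idea concrete, here is a tiny illustrative sketch (the count matrix below is invented, not taken from the text) of MultinomialNB fit directly on word counts:
import numpy as np
from sklearn.naive_bayes import MultinomialNB
word_counts = np.array([[2, 1, 0],   # hand-made counts of three words in four short documents
                        [3, 0, 0],
                        [0, 2, 3],
                        [0, 1, 4]])
doc_labels = np.array([0, 0, 1, 1])
counts_model = MultinomialNB().fit(word_counts, doc_labels)
print(counts_model.predict([[1, 0, 2]]))  # predicted class for a new count vector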
categories = ['talk.religion.misc', 'soc.religion.christian',
'sci.space', 'comp.graphics']
train = fetch_20newsgroups(subset='train', categories=categories)
test = fetch_20newsgroups(subset='test', categories=categories)
Explanation: For simplicity, we select just a few of these categories, and then download the training and test sets.
End of explanation
print(train.data[5])
Explanation: A data sample has the following form; it contains the source of the newsgroup message, the organisation, and quite a bit of other information:
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
Explanation: In order to use this data for machine learning, we need to convert the content of each string into a vector of numbers. For this we will use the TF-IDF vectorizer (discussed in the feature engineering section), and create a pipeline that attaches it to a multinomial naive Bayes classifier:
End of explanation
model.fit(train.data, train.target)
labels = model.predict(test.data)
Explanation: With this pipeline, we can apply the model to the training data and predict labels for the test data.
End of explanation
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(test.target, labels)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=train.target_names, yticklabels=train.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
Explanation: Now that we have predicted the labels for the test data, we can evaluate them to learn about the performance of the estimator. For example, here is the confusion matrix between the true and predicted labels for the test data:
End of explanation
def predict_category(s, train=train, model=model):
pred = model.predict([s])
return train.target_names[pred[0]]
Explanation: Evidently, even this very simple classifier can successfully separate space talk from computer talk, but it gets confused between talk about religion and talk about Christianity. This is perhaps an expected area of confusion.
The very cool thing here is that we now have the tools to determine the category for any string, using the predict() method of this pipeline. Here is a quick utility function that returns the prediction for a single string:
End of explanation
predict_category('sending a payload to the ISS')
predict_category('discussing islam vs atheism')
predict_category('determining the screen resolution')
Explanation: Let's try it out:
End of explanation |
11,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vectorization and JIT compilation in Python
Examples for the QuantEcon 2017 PhD workshops
John Stachurski
Step1: Problem 1
Step2: Here's a typical time series
Step3: Here's a function that simulates for n periods, starting from x0, and returns only the final value
Step4: Let's see how fast this runs
Step5: Now let's try this in FORTRAN.
Note --- this step is intended to be a demo and will only execute if
you have the file fastquad.f90 in your pwd
you have a FORTRAN compiler installed and modify the compilation code below appropriately
Step6: Now let's do the same thing in Python using Numba's JIT compilation
Step7: After JIT compilation, function execution speed is about the same as FORTRAN.
But remember, JIT compilation for Python is still limited --- see here
If these limitations frustrate you, then try Julia.
Problem 2
Step8: Vectorized code
Step9: JITTed code
A jitted version
Step10: Numba for vectorization with automatic parallelization - even faster | Python Code:
import numpy as np
from numba import vectorize, jit, float64
from quantecon.util import tic, toc
import matplotlib.pyplot as plt
Explanation: Vectorization and JIT compilation in Python
Examples for the QuantEcon 2017 PhD workshops
John Stachurski
End of explanation
α = 4
Explanation: Problem 1: A Time Series Model
Consider the time series model
$$ x_{t+1} = \alpha x_t (1 - x_t) $$
Let's set $\alpha = 4$
End of explanation
n = 200
x = np.empty(n)
x[0] = 0.2
for t in range(n-1):
x[t+1] = α * x[t] * (1 - x[t])
plt.plot(x)
plt.show()
Explanation: Here's a typical time series:
End of explanation
def quad(x0, n):
x = x0
for i in range(1, n):
x = α * x * (1 - x)
return x
Explanation: Here's a function that simulates for n periods, starting from x0, and returns only the final value:
End of explanation
n = 10_000_000
tic()
x = quad(0.2, n)
toc()
Explanation: Let's see how fast this runs:
End of explanation
!cat fastquad.f90
!gfortran -O3 fastquad.f90
!./a.out
Explanation: Now let's try this in FORTRAN.
Note --- this step is intended to be a demo and will only execute if
you have the file fastquad.f90 in your pwd
you have a FORTRAN compiler installed and modify the compilation code below appropriately
End of explanation
quad_jitted = jit(quad)
tic()
x = quad_jitted(0.2, n)
toc()
tic()
x = quad_jitted(0.2, n)
toc()
Explanation: Now let's do the same thing in Python using Numba's JIT compilation:
End of explanation
def f(x, y):
return np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import cm
gridsize = 50
gmin, gmax = -3, 3
xgrid = np.linspace(gmin, gmax, gridsize)
ygrid = xgrid
x, y = np.meshgrid(xgrid, ygrid)
# === plot value function === #
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x,
y,
f(x, y),
rstride=2, cstride=2,
cmap=cm.jet,
alpha=0.4,
linewidth=0.05)
ax.scatter(x, y, c='k', s=0.6)
ax.scatter(x, y, f(x, y), c='k', s=0.6)
ax.view_init(25, -57)
ax.set_zlim(-0, 2.0)
ax.set_xlim(gmin, gmax)
ax.set_ylim(gmin, gmax)
plt.show()
Explanation: After JIT compilation, function execution speed is about the same as FORTRAN.
But remember, JIT compilation for Python is still limited --- see here
If these limitations frustrate you, then try Julia.
Problem 2: Brute Force Optimization
The problem is to maximize the function
$$ f(x, y) = \frac{\cos \left(x^2 + y^2 \right)}{1 + x^2 + y^2} + 1$$
using brute force --- searching over a grid of $(x, y)$ pairs.
End of explanation
grid = np.linspace(-3, 3, 10000)
x, y = np.meshgrid(grid, grid)
tic()
np.max(f(x, y))
toc()
Explanation: Vectorized code
End of explanation
@jit
def compute_max():
m = -np.inf
for x in grid:
for y in grid:
z = np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1
if z > m:
m = z
return m
compute_max()
tic()
compute_max()
toc()
Explanation: JITTed code
A jitted version
End of explanation
@vectorize('float64(float64, float64)', target='parallel')
def f_par(x, y):
return np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1
x, y = np.meshgrid(grid, grid)
np.max(f_par(x, y))
tic()
np.max(f_par(x, y))
toc()
Explanation: Numba for vectorization with automatic parallelization - even faster:
End of explanation |
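A related approach worth knowing (added here as an illustrative sketch; it is not part of the original workshop notebook) is Numba's explicit loop parallelism with prange:
from numba import njit, prange

@njit(parallel=True)
def compute_max_prange(grid):
    n = len(grid)
    row_max = np.empty(n)
    for i in prange(n):        # iterations over i run in parallel
        m = -np.inf
        for j in range(n):
            z = np.cos(grid[i]**2 + grid[j]**2) / (1 + grid[i]**2 + grid[j]**2) + 1
            if z > m:
                m = z
        row_max[i] = m         # each i writes its own slot, so there is no race
    return row_max.max()

compute_max_prange(grid)       # first call triggers compilation
tic()
compute_max_prange(grid)
toc()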