Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
10,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planar Point Patterns in PySAL
Author
Step1: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows
Step2: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
Step3: From numpy arrays
Step4: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
Step5: Attributes of PySAL Point Patterns
Step6: Intensity Estimates
The intensity of a point process at point $s_i$ can be defined as
Step7: Intensity based on convex hull | Python Code:
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern
Explanation: Planar Point Patterns in PySAL
Author: Serge Rey sjsrey@gmail.com and Wei Kang weikang9009@gmail.com
Introduction
This notebook introduces the basic PointPattern class in PySAL and covers the following:
What is a point pattern?
Creating Point Patterns
Attributes of Point Patterns
Intensity Estimates
Next steps
What is a point pattern?
We introduce basic terminology here and point the interested reader to more detailed references on the underlying theory of the statistical analysis of point patterns.
Points and Event Points
To start we consider a series of point locations, $(s_1, s_2, \ldots, s_n)$ in a study region $\Re$. We limit our focus here to a two-dimensional space so that $s_j = (x_j, y_j)$ is the spatial coordinate pair for point location $j$.
We will be interested in two different types of points.
Event Points
Event Points are locations where something of interest has occurred. The term event is very general here and could be used to represent a wide variety of phenomena. Some examples include:
locations of individual plants of a certain species
archeological sites
addresses of disease cases
locations of crimes
the distribution of neurons
among many others.
It is important to recognize that in the statistical analysis of point patterns the interest extends beyond the observed point pattern at hand.
The observed patterns are viewed as realizations from some underlying spatial stochastic process.
Arbitrary Points
The second type of point we consider is a location where the phenomenon of interest has not been observed. These go by various names, such as "empty space" or "regular" points, and at first glance might seem less interesting to a spatial analyst. However, they play a central role in a class of point pattern methods that we explore below.
Point Pattern Analysis
The analysis of event points focuses on a number of different characteristics of the collective spatial pattern that is observed. Often the pattern is judged against the hypothesis of complete spatial randomness (CSR). That is, one assumes that the point events arise independently of one another and with constant probability across $\Re$, loosely speaking.
Of course, many of the empirical point patterns we encounter do not appear to be generated from such a simple stochastic process. The departures from CSR can be due to two types of effects.
First order effects
For a point process, the first-order properties pertain to the intensity of the process across space. Whether and how the intensity of the point pattern varies within our study region are questions that assume center stage. Such variation in the intensity of the pattern of, say, addresses of individuals with a certain type of non-infectious disease may reflect the underlying population density. In other words, although the point pattern of disease cases may display variation in intensity in our study region, and thus violate the constant-probability condition, that spatial drift in the pattern intensity could be driven by an underlying covariate.
Second order effects
The second channel by which departures from CSR can arise is through interaction and dependence between events in space. The canonical example is contagious disease, whereby the presence of an infected individual increases the probability of subsequent cases nearby.
When a pattern departs from expectation under CSR, this is suggestive that the underlying process may have some spatial structure that merits further investigation. Thus methods for detection of deviations from CSR and testing for alternative processes have given rise to a large literature in point pattern statistics.
Methods of Point Pattern Analysis in PySAL
The points module in PySAL implements basic methods of point pattern analysis organized into the following groups:
Point Processing
Centrography and Visualization
Quadrat Based Methods
Distance Based Methods
In the remainder of this notebook we shall focus on point processing.
End of explanation
points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21],
[9.47, 31.02], [30.78, 60.10], [75.21, 58.93],
[79.26, 7.68], [8.23, 39.93], [98.73, 77.17],
[89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
p1 = PointPattern(points)
p1.mbb
Explanation: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows:
End of explanation
p1.summary()
type(p1.points)
np.asarray(p1.points)
p1.mbb
Explanation: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
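The indexing can be checked directly on the array created above (a quick sketch, not part of the original notebook):
checked = np.asarray(p1.points)
print(checked[0], checked[11])   # the coordinate pairs for s_0 and s_11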
End of explanation
points = np.asarray(points)
points
p1_np = PointPattern(points)
p1_np.summary()
Explanation: From numpy arrays
End of explanation
f = ps.examples.get_path('vautm17n_points.shp')
fo = ps.io.open(f)
pp_va = PointPattern(np.asarray([pnt for pnt in fo]))
fo.close()
pp_va.summary()
Explanation: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
End of explanation
pp_va.summary()
pp_va.points
pp_va.head()
pp_va.tail()
Explanation: Attributes of PySAL Point Patterns
End of explanation
pp_va.lambda_mbb
Explanation: Intensity Estimates
The intensity of a point process at point $s_j$ can be defined as:
$$\lambda(s_j) = \lim \limits_{|\mathbf{A}s_j| \to 0} \left\{ \frac{E(Y(\mathbf{A}s_j))}{|\mathbf{A}s_j|} \right\} $$
where $\mathbf{A}s_j$ is a small region surrounding location $s_j$ with area $|\mathbf{A}s_j|$, and $E(Y(\mathbf{A}s_j))$ is the expected number of event points in $\mathbf{A}s_j$.
The intensity is the mean number of event points per unit of area at point $s_j$.
Recall that one of the implications of CSR is that the intensity of the point process is constant in our study area $\Re$. In other words $\lambda(s_j) = \lambda(s_{j+1}) = \ldots = \lambda(s_n) = \lambda \ \forall s_j \in \Re$. Thus, if the area of $\Re$ = $|\Re|$ the expected number of event points in the study region is: $E(Y(\Re)) = \lambda |\Re|.$
In PySAL, the intensity is estimated by using a geometric object to encode the study region. We refer to this as the window, $W$. The reason for distinguishing between $\Re$ and $W$ is that the latter permits alternative definitions of the bounding object.
Intensity estimates are based on the following:
$$\hat{\lambda} = \frac{n}{|W|}$$
where $n$ is the number of points in the window $W$, and $|W|$ is the area of $W$.
Intensity based on minimum bounding box:
$$\hat{\lambda}_{mbb} = \frac{n}{|W_{mbb}|}$$
where $W_{mbb}$ is the minimum bounding box for the point pattern.
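As a sanity check, the minimum-bounding-box estimate can be reproduced directly from the coordinates with numpy (a sketch that assumes an axis-aligned bounding rectangle; compare the result with the lambda_mbb attribute used on pp_va above):
pts = np.asarray(points)                              # the 12-point toy pattern from earlier
mins, maxs = pts.min(axis=0), pts.max(axis=0)
mbb_area = (maxs[0] - mins[0]) * (maxs[1] - mins[1])  # |W_mbb|
print(len(pts) / mbb_area)                            # hand-computed estimate of lambda_mbb for p1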
End of explanation
pp_va.lambda_hull
Explanation: Intensity based on convex hull:
$$\hat{\lambda}_{hull} = \frac{n}{|W_{hull}|}$$
where $W_{hull}$ is the convex hull for the point pattern.
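The hull-based estimate can be checked the same way (a sketch using scipy.spatial.ConvexHull, which is not part of the PySAL example above; for 2-D input its volume attribute is the enclosed area):
from scipy.spatial import ConvexHull
pts = np.asarray(points)
hull_area = ConvexHull(pts).volume   # area of the convex hull, |W_hull|
print(len(pts) / hull_area)          # hand-computed estimate of lambda_hull for p1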
End of explanation |
10,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Data Science, CS 5963 / Math 3900
Lecture 3
Step1: Bernoulli Distribution
The Bernoulli distribution, named after Jacob Bernoulli, is the probability distribution of a random variable which takes the value 1 (success) with probability $p$ and the value 0 (failure) with probability $q=1-p$.
The Bernoulli distribution with $p=0.5$ (implying $q=0.5$) describes a 'fair' coin toss where 1 and 0 represent "heads" and "tails", respectively. If the coin is unfair, then we would have that $p\neq 0.5$.
Step2: How many heads did we get? We just count the number of 1's.
Step3: What if we flip the coin more times?
Step4: Some facts about Bernoulli variables
Step5: Some facts about the binomial distribution
Step6: Observe that the probability mass function looks very much like the histogram plot! (not a coincidence)
Concept check
Step7: A normal random variable is an example of a continuous random variable. A normal random variable can take any real value, but some numbers are more likely than others. More formally, we say that the probability density function (PDF) for the normal (Gaussian) distribution is
$$
f(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
$$
where $\mu$ is the mean and $\sigma$ is the standard deviation. What this means is that the probability that a normal random variable will take values in the interval $[a,b]$ is given by
$$
\int_a^b f(x) dx.
$$
This is just the area under the curve for this interval. For $a=\mu-\sigma$ and $b = \mu+\sigma$, we plot this below.
Step8: This integral can be computed using the cumulative distribution function (CDF)
$$
F(x) = \int_{-\infty}^x f(x) dx.
$$
We have that
$$
\int_a^b f(x) dx = F(b) - F(a)
$$
Step9: This means that 68% of the time, this normal random variable will have values between $\mu-\sigma$ and $\mu+\sigma$.
You used to have to look these values up in a table!
Let's see what it looks like if we sample 1,000,000 normal random variables and then plot a histogram.
Step10: The histogram of the sampled variables looks just like the probability distribution function!
Central Limit Theorem
One of the reasons that the normal distribution is so important is the following theorem.
Central Limit Theorem. Under "some assumptions", the sum of a "large number" $n$ of (independent) random variables, each with a finite mean $\mu$ and variance $\sigma^2$, will be approximately normally distributed with mean $n\mu$ and variance $n\sigma^2$.
How can we use the central limit theorem (CLT)?
The CLT tells us that if $n$ is large, binomial random variables will be distributed in a certain way. That is, if we flip a coin many times, the number of heads that we're likely to see is described by a normal distribution. This will allow us to ask questions like
Step11: Hypothesis testing
So what is the likelihood of flipping a coin 1000 times and seeing less than 545 heads?
The CLT tells us that this is approximately
$$
\int_{-\infty}^{545} p(x) dx = F(545).
$$
This is something that we can easily evaluate using the cumulative distribution function (CDF).
Step12: So $99.8\%$ of the time, we would see fewer than 545 heads. So seeing 545 heads is very unlikely! It happens only $0.2\%$ of the time. This is so unlikely that we might declare that the coin is not fair!
This is precisely what hypothesis testing is.
In hypothesis testing, we make a null hypothesis, denoted $H_0$. In this case, the null hypothesis is
$$
H_0
Step13: Thus, $99.6\%$ of the time we see a value less extreme than 545. In other words, we would see either more than 545 heads or fewer than 455 heads only 0.4% of the time. This is called the P-value. Since the P-value is smaller than the chosen significance level, we reject the null hypothesis and declare the coin to be unfair.
Some comments about the p-value | Python Code:
import scipy as sc
from scipy.stats import bernoulli
from scipy.stats import binom
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
Explanation: Introduction to Data Science, CS 5963 / Math 3900
Lecture 3: Hypothesis Testing I
In this lecture, we'll have a brief glimpse at hypothesis testing. To get started, we'll introduce a few concepts from probability.
Required reading:
Grus, Ch.7 link
Further reading:
Jay L. Devore, Probability and Statistics for Engineering and the Sciences, 9th ed. Cengage Learning (2016) Ch. 8 and 9.
For a more complete treatment, take Math 3070 (Applied Statistics I).
End of explanation
n = 1000;
coin_flips = bernoulli.rvs(p=0.5, size=n)
print(coin_flips)
Explanation: Bernoulli Distribution
The Bernoulli distribution, named after Jacob Bernoulli, is the probability distribution of a random variable which takes the value 1 (success) with probability $p$ and the value 0 (failure) with probability $q=1-p$.
The Bernoulli distribution with $p=0.5$ (implying $q=0.5$) describes a 'fair' coin toss where 1 and 0 represent "heads" and "tails", respectively. If the coin is unfair, then we would have that $p\neq 0.5$.
End of explanation
print(sum(coin_flips))
print(sum(coin_flips)/n)
Explanation: How many heads did we get? We just count the number of 1's.
End of explanation
n = 1000000
coin_flips = bernoulli.rvs(p=0.5, size=n)
print(sum(coin_flips)/n)
Explanation: What if we flip the coin more times?
End of explanation
p = 0.5
n = 10
bin_vars = binom.rvs(n=n,p=p,size=1000000)
print(bin_vars[:100])
bins=sc.arange(12)-.5
plt.hist(bin_vars, bins=bins,normed=True)
plt.title("A histogram of binomial random variables")
plt.xlim([-.5,10.5])
plt.show()
Explanation: Some facts about Bernoulli variables:
* mean is p
* variance is p(1-p)
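Both facts can be checked against the coin_flips sample drawn above (a quick numerical sketch, not in the original notebook):
print(coin_flips.mean())   # sample mean, close to p = 0.5
print(coin_flips.var())    # sample variance, close to p*(1-p) = 0.25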
Binomial distribution
The binomial distribution, with parameters $n$ and $p$, is a discrete probability distribution ``summarizing'' the outcome of $n$ Bernoulli random variables. For simplicity, take $p=0.5$ so that the Bernoulli distribution describes the outcome of a coin. For each flip, the probability of heads is $p$ (so the probability of tails is $q=1-p$). But we don't keep track of the individual flips. We only keep track of how many heads/tails there were in total. So, the binomial distribution can be thought of as summarizing a bunch of (independent) Bernoulli random variables.
The following code is equivalent to flipping a fair (p=0.5) coin n=10 times and counting the number of heads and then repeating this process 1,000,000 times.
End of explanation
f = lambda k: binom.pmf(k, n=n,p=p)
x = sc.arange(n+1);
plt.plot(x, f(x),'*-')
plt.title("The probability mass function for a Binomial random variable")
plt.xlim([0,n])
plt.show()
Explanation: Some facts about the binomial distribution:
* The mean is $np$
* The variance is $np(1-p)$
Mathematical aside: Binomial (and Bernoulli) random variables are examples of discrete random variables since they can take only discrete values. A Bernoulli random variable can take values $0$ or $1$. A binomial random variable can only take values
$$
0,1,\ldots, n.
$$
One can compute the probability that the variable takes each value. This is called the probability mass function.
For a Bernoulli random variable, the probability mass function is given by
$$
f(k) = \begin{cases} p & k=1 \\ 1-p & k = 0 \end{cases}
$$
For a binomial random variable, the probability mass function is given by
$$
f(k) = \binom{n}{k} p^k (1-p)^{n-k}.
$$
Here, $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the number of ways to arrange the
$k$ heads among the $n$ flips. For a fair coin, we have $p=0.5$ and $f(k) = \binom{n}{k} \frac{1}{2^n}$. This is the number of ways to arrange $k$ heads among $n$ outcomes divided by the total number of outcomes.
The probability mass function can be plotted using the scipy library as follows.
End of explanation
mu = 0 # mean
sigma = 1 # standard deviation
x = sc.arange(mu-4*sigma,mu+4*sigma,0.001);
pdf = norm.pdf(x,loc=mu, scale=sigma)
# Here, I could have also written
# pdf = 1/(sigma * sc.sqrt(2 * sc.pi)) * sc.exp( - (x - mu)**2 / (2 * sigma**2))
plt.plot(x, pdf, linewidth=2, color='k')
plt.show()
Explanation: Observe that the probability mass function looks very much like the histogram plot! (not a coincidence)
Concept check: what is a random variable?
A random variable is an abstraction of a coin. It can take on any of a set of possible values, each with a preassigned probability. A Bernoulli r.v. takes value $1$ with probability $p$ and $0$ with probability $1-p$. A binomial r.v. takes values $0,1,\ldots,n$, with a given probability. The probabilities are given by the probability mass function. This function looks just like a histogram if you were to sample a large number of random variables.
Quiz: what is the random variable that describes a fair die? The sum of two fair dice?
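One way to explore the quiz empirically is to simulate the dice (a sketch using scipy.stats.randint, which is not used elsewhere in this lecture):
from scipy.stats import randint
die_rolls = randint.rvs(low=1, high=7, size=1000000)   # a fair die: uniform on 1,...,6
two_dice = randint.rvs(1, 7, size=1000000) + randint.rvs(1, 7, size=1000000)
print(die_rolls.mean())   # close to 3.5
print(two_dice.mean())    # close to 7; the values 2,...,12 are not equally likely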
Normal (Gaussian) distribution
Roughly speaking, normal random variables are described by a "bell curve". The curve is centered at the mean, $\mu$, and has width given by the standard deviation, $\sigma$.
End of explanation
plt.plot(x, pdf, linewidth=2, color='k')
x2 = sc.arange(mu-sigma,mu+sigma,0.001)
plt.fill_between(x2, y1= norm.pdf(x2,loc=mu, scale=sigma), facecolor='red', alpha=0.5)
plt.show()
Explanation: A normal random variable is an example of a continuous random variable. A normal random variable can take any real value, but some numbers are more likely than others. More formally, we say that the probability density function (PDF) for the normal (Gaussian) distribution is
$$
f(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }}
e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },
$$
where $\mu$ is the mean and $\sigma$ is the standard deviation. What this means is that the probability that a normal random variable will take values in the interval $[a,b]$ is given by
$$
\int_a^b f(x) dx.
$$
This is just the area under the curve for this interval. For $a=\mu-\sigma$ and $b = \mu+\sigma$, we plot this below.
End of explanation
norm.cdf(mu+sigma, loc=mu, scale=sigma) - norm.cdf(mu-sigma, loc=mu, scale=sigma)
Explanation: This integral can be computed using the cumulative distribution function (CDF)
$$
F(x) = \int_{-\infty}^x f(x) dx.
$$
We have that
$$
\int_a^b f(x) dx = F(b) - F(a)
$$
End of explanation
norm_vars = norm.rvs(loc=mu,scale=sigma,size=1000000)
print(norm_vars[:100])
plt.hist(norm_vars, bins=100,normed=True)
plt.plot(x, pdf, linewidth=2, color='k')
plt.title("A histogram of normal random variables")
plt.show()
Explanation: This means that 68% of the time, this normal random variable will have values between $\mu-\sigma$ and $\mu+\sigma$.
You used to have to look these values up in a table!
Let's see what it looks like if we sample 1,000,000 normal random variables and then plot a histogram.
End of explanation
n = 1000
p = 0.5
bin_vars = binom.rvs(n=n,p=p,size=10000)
plt.hist(bin_vars, bins='auto',normed=True)
mu = n*p
sigma = sc.sqrt(n*p*(1-p))
x = sc.arange(mu-4*sigma,mu+4*sigma,0.1);
pdf = norm.pdf(x, loc=mu, scale=sigma)
# Here, I could also write
# pdf = 1/(sigma * sc.sqrt(2 * sc.pi)) * sc.exp( - (x - mu)**2 / (2 * sigma**2) )
plt.plot(x, pdf, linewidth=2, color='k')
plt.title("A comparison between the histogram of binomial random \n variables and the normal distribution predicted by the CLT")
plt.show()
Explanation: The histogram of the sampled variables looks just like the probability distribution function!
Central Limit Theorem
One of the reasons that the normal distribution is so important is the following theorem.
Central Limit Theorem. Under "some assumptions", the sum of a "large number" $n$ of (independent) random variables, each with a finite mean $\mu$ and variance $\sigma^2$, will be approximately normally distributed with mean $n\mu$ and variance $n\sigma^2$.
How can we use the central limit theorem (CLT)?
The CLT tells us that if $n$ is large, binomial random variables will be distributed in a certain way. That is, if we flip a coin many times, the number of heads that we're likely to see is described by a normal distribution. This will allow us to ask questions like: How unusual is it to flip a fair coin 1000 times and see 545 heads?
Suppose we flip a fair ($p=0.5$) coin 1000 times.
Question: How many heads do we expect to see?
The CLT says that the number of heads (= sum of Bernoulli r.v. = binomial r.v.) is approximately normally distributed with mean
$$
n\mu = np = 1000 \times 0.5 = 500
$$
and variance
$$
n \sigma^2 = np(1-p) = 1000 \times 0.5 \times 0.5 = 250.
$$
Let's do some experiments.
We call flipping a fair coin n=1,000 times and counting the number of heads a "simulation". Recall that the outcome is precisely a binomial random variable with n=1,000 and p = 0.5. We'll do 10,000 simulations and then compare the histogram of the binomial random variables and the normal distribution predicted by the CLT.
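Beyond the visual comparison, the sample moments of bin_vars can be compared with the CLT prediction numerically (a small sketch, not in the original notebook):
print(bin_vars.mean(), n*p)                  # empirical mean vs n*p = 500
print(bin_vars.std(), (n*p*(1-p))**0.5)      # empirical standard deviation vs sqrt(250), about 15.8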
End of explanation
n = 1000
p = 0.5
mu = n*p
sigma = sc.sqrt(n*p*(1-p))
print(norm.cdf(545, loc=mu, scale=sigma))
# a plot illustrating the integral
x = sc.arange(mu-4*sigma,mu+4*sigma,0.001);
plt.plot(x, norm.pdf(x, loc=mu, scale=sigma), linewidth=2, color='k')
x2 = sc.arange(mu-4*sigma,545,0.001)
plt.fill_between(x2, y1= norm.pdf(x2,loc=mu, scale=sigma), facecolor='red', alpha=0.5)
plt.xlim([mu-4*sigma,mu+4*sigma])
plt.show()
Explanation: Hypothesis testing
So what is the likelihood of flipping a coin 1000 times and seeing less than 545 heads?
The CLT tells us that this is approximately
$$
\int_{-\infty}^{545} p(x) dx = F(545).
$$
This is something that we can easily evaluate using the cumulative distribution function (CDF).
End of explanation
val_integral = norm.cdf(545, loc=mu, scale=sigma) - norm.cdf(455, loc=mu, scale=sigma)
print(val_integral)
print(1-val_integral)
Explanation: So $99.8\%$ of the time, we would see fewer than 545 heads. So seeing 545 heads is very unlikely! It happens only $0.2\%$ of the time. This is so unlikely that we might declare that the coin is not fair!
This is precisely what hypothesis testing is.
In hypothesis testing, we make a null hypothesis, denoted $H_0$. In this case, the null hypothesis is
$$
H_0: \text{the coin is fair, i.e., $p=0.5$}.
$$
The alternative hypothesis, $H_a$, is typically the hypothesis that the researcher wants to validate. In this case, that the coin is unfair, i.e., $p\neq 0.5$.
We also choose a significance level for the test, $\alpha$, traditionally $1\%$ or $5\%$.
In this case, let's choose a significance level of $\alpha = 1\%$. We then perform an experiment. In this case, we flip the coin 1000 times and count the number of heads (in this case 545).
Finally, assuming the null hypothesis is true, we compute how how likely it is to see a number that is at least as far from the expected value as the number obtained. To do this, we compute the integral
$$
\int_{455}^{545} p(x) dx = F(545) - F(455)
$$
Question: why this lower bound?
End of explanation
mu = 15
sigma = sc.sqrt(5.72**2/137)
print(2*norm.cdf(2.42, loc=mu, scale=sigma))
Explanation: Thus, $99.6\%$ of the time we see a value less extreme than 545. In other words, we would see either more than 545 heads or fewer than 455 heads only 0.4% of the time. This is called the P-value. Since the P-value is smaller than the chosen significance level, we reject the null hypothesis and declare the coin to be unfair.
Some comments about the p-value:
1. A p-value is a probability calculated assuming that $H_0$ is true.
+ The smaller the p-value, the stronger the evidence against $H_0$.
+ A p-value is not the probability that the null hypothesis is true or false. It is the probability that an erroneous conclusion is reached. (More on this next lecture)
Example: "Freshman 15", Fact or Fiction
This example was taken from Devore, pp.314-315.
"A common belief among the lay public is that body weight increases after entry into college, and the phrase 'freshman 15' has been coined to describe the 15 puunds that students presumably gain over their freshman year."
Let $\mu$ denote the true average weight gain in the first year of college. We take the null hypothesis to be
$$
H_0: \mu = 15
$$
We suppose a random sample of $n$ students is selected, their weights (before and after the first year of college) are measured, and the sample mean $\bar{x}$ and sample standard deviation $s$ are computed. An article in the journal Obesity (2006) reports that for a sample of $n=137$ students, the sample mean weight gain was $\bar{x}=2.42$ lb with a sample standard deviation of $s=5.72$ lb. Assuming $H_0$ to be true, how unlikely is it that we would observe such a small value?
We take a normal distribution with mean given by the null value ($\mu = 15$) and variance given by $s^2/n = (5.72)^2/137=0.2388$.
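Equivalently, the result can be expressed through a standardized z statistic (a sketch; the variable names below are illustrative, not from the lecture):
xbar, mu0, s, n_obs = 2.42, 15, 5.72, 137
z = (xbar - mu0) / (s / n_obs**0.5)
print(z)                          # roughly -25.7: the observed mean is about 26 standard errors below 15
print(2 * norm.cdf(-abs(z)))      # two-sided p-value, effectively zero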
End of explanation |
10,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that the snow albedo depends on*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
10,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Printing color
https
Step1: Insults package
Step2: Identifying quotation marks
https | Python Code:
print("\x1b[31m\"red\"\x1b[0m")
print('\x1b[1;31m'+'Hello world'+'\x1b[0m')
import sys
from termcolor import colored, cprint
text = colored('Hello, World!', 'red', attrs=['reverse', 'blink'])
print(text)
cprint('Hello, World!', 'green', 'on_red')
print_red_on_cyan = lambda x: cprint(x, 'red', 'on_cyan')
print_red_on_cyan('Hello, World!')
print_red_on_cyan('Hello, Universe!')
for i in range(10):
cprint(i, 'magenta', end=' ')
cprint("Attention!", 'red', attrs=['bold'], file=sys.stderr)
Explanation: Printing color
https://pypi.python.org/pypi/termcolor
End of explanation
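For reference, the raw escape sequences used in the first two print calls follow the ANSI SGR pattern ESC [ codes m, where 31 selects a red foreground, 1 selects bold and 0 resets all attributes; a small sketch of the same idea with named constants:
RED = "\x1b[31m"
BOLD_RED = "\x1b[1;31m"
RESET = "\x1b[0m"
print(RED + "red text" + RESET)       # plain red
print(BOLD_RED + "bold red text" + RESET)  # bold + red, then reset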
from insults import Insults
comment = "You are a disgusting maggot of a person."
Insults.load_model()
print(Insults.rate_comment(comment))
comments = ["You called me a \"dickhead\", so I'll say you're a cunt.", "These shitakes taste like shit."]
print(Insults.foul_language(comments, context=False))
Explanation: Insults package
End of explanation
import regex as re
print(re.findall(r'"(.*?)"', comments[0]))
Explanation: Identifying quotation marks
https://regex101.com/r/cB0kB8/1
https://stackoverflow.com/questions/39713487/extracting-quotations-citations-with-nltk-not-regex
End of explanation |
10,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mathematical method to keep Poppy vertical
When you want to move the motors of the leg, you cannot do whatever you want, because Poppy can fall if it is not balanced.
So a very simple way to move the leg without any external perturbation (no wind, a flat floor, no movement of the upper body) is to keep the upper body on the vertical of the ankle.
To do that, a simple calculation of the ankle, knee and hip angles does the job.
Basically, the shin and the thigh form a triangle with the knee angle between them. So once you choose the knee angle, you only have to calculate the ankle and hip angles that bring Poppy back to the vertical position.
To calculate the missing angles, we can use the law of sines and the Al-Kashi theorem (law of cosines).
More information here.
Step1: Now, you need the robot and the V-REP time.
Step2: It is now possible to define a mobility in percentage, according to the angle limit of ankle.
Step3: Finally, a primitive can set the high and the foot gap of poppy.
Step4: It is now possible to set the high and the foot gap using the leg_primitive. | Python Code:
%pylab inline
from math import *
class leg_angle:
def __init__(self,knee=0):
# different length of poppy in cm
self.upper_body = 40.0
self.shin = 18.0
self.thigh = 18.0
# the angle of the knee
self.knee = radians(knee)
gamma = radians(180 - knee)
# Al-Kashi theorem to calcul the c side and the missing angle
c = sqrt(self.shin**2+self.thigh**2-2*self.shin*self.thigh*cos(gamma))
self.c = c
self.hip = -acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))
self.ankle = -acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))
# The high of the leg and the foot gap
self.high = c
self.foot_gap = 0.0
def update_knee(self,knee):
self.knee = radians(knee)
gamma = radians(180 - knee)
# Al-Kashi theorem to calcul the c side
c = sqrt(self.shin**2+self.thigh**2-2*self.shin*self.thigh*cos(gamma))
self.c = c
self.hip = -acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))
self.ankle = -acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))
self.high = sqrt(c**2-self.foot_gap**2)
def update_foot_gap(self,foot_gap):
if foot_gap >= 0 :
s = 1
else :
s=-1
self.foot_gap = foot_gap
# move the foot but let the high constant
c = sqrt(foot_gap**2+self.high**2)
self.c = c
alpha = acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))
beta = acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))
gamma = acos((self.shin**2+self.thigh**2-self.c**2)/(2*self.shin*self.thigh))
self.knee = pi - gamma
self.hip = -(alpha + s*acos(self.high/c))
self.ankle = -(beta - s*acos(self.high/c))
def update_high(self,high):
if self.foot_gap >= 0 :
s = 1
else :
s=-1
self.high = high
c = sqrt(self.foot_gap**2+self.high**2)
self.c = c
alpha = acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))
beta = acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))
gamma = acos((self.shin**2+self.thigh**2-self.c**2)/(2*self.shin*self.thigh))
self.knee = pi - gamma
self.hip = -(alpha + s*acos(self.high/c))
self.ankle = -(beta - s*acos(self.high/c))
def gravity_center_front(self,d_thigh):
c = sqrt(self.foot_gap**2+self.high**2)
self.c = c
alpha = acos(((self.thigh+d_thigh)**2+c**2-self.shin**2)/(2*(self.thigh+d_thigh)*c))
beta = acos((self.shin**2+c**2-(self.thigh+d_thigh)**2)/(2*self.shin*c))
gamma = acos((self.shin**2+(self.thigh+d_thigh)**2-self.c**2)/(2*self.shin*(self.thigh+d_thigh)))
self.knee = pi - gamma
self.hip = -(alpha + acos(self.high/c))
self.ankle = -(beta - acos(self.high/c))
gamma = pi+self.hip
self.hip = -(pi-gamma-asin(((d_thigh*sin(gamma)))/self.upper_body))
Explanation: Mathematical method to keep Poppy vertical
When you want to move the motors of the leg, you cannot do whatever you want, because Poppy can fall if it is not balanced.
So a very simple way to move the leg without any external perturbation (no wind, a flat floor, no movement of the upper body) is to keep the upper body on the vertical of the ankle.
To do that, a simple calculation of the ankle, knee and hip angles does the job.
Basically, the shin and the thigh form a triangle with the knee angle between them. So once you choose the knee angle, you only have to calculate the ankle and hip angles that bring Poppy back to the vertical position.
To calculate the missing angles, we can use the law of sines and the Al-Kashi theorem (law of cosines).
More information here.
End of explanation
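For reference, the Al-Kashi theorem (law of cosines) used throughout leg_angle can be written in LaTeX as
c^2 = \text{shin}^2 + \text{thigh}^2 - 2\,\text{shin}\,\text{thigh}\,\cos(\gamma), \qquad \gamma = \pi - \text{knee}
and the two missing angles follow from the same theorem applied to the other corners of the triangle, matching the expressions in the code above:
\text{hip} = -\arccos\!\left(\frac{\text{thigh}^2 + c^2 - \text{shin}^2}{2\,\text{thigh}\,c}\right), \qquad \text{ankle} = -\arccos\!\left(\frac{\text{shin}^2 + c^2 - \text{thigh}^2}{2\,\text{shin}\,c}\right)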
from poppy.creatures import PoppyHumanoid
poppy = PoppyHumanoid(simulator='vrep')
import time as real_time
class time:
def __init__(self,robot):
self.robot=robot
def time(self):
t_simu = self.robot.current_simulation_time
return t_simu
def sleep(self,t):
t0 = self.robot.current_simulation_time
while (self.robot.current_simulation_time - t0) < t-0.01:
real_time.sleep(0.001)
time = time(poppy)
print time.time()
time.sleep(0.025) #0.025 is the minimum step according to the V-REP defined dt
print time.time()
Explanation: Now, you need the robot and the V-REP time.
End of explanation
class leg_move(leg_angle):
def __init__(self,motor_limit,knee=0):
self.ankle_limit_front=radians(motor_limit.angle_limit[1])
self.ankle_limit_back=radians(motor_limit.angle_limit[0])
leg_angle.__init__(self,knee)
def update_foot_gap_percent(self,foot_gap_percent):
#calcul of foot_gap_max to convert foot_gap_percent into value
if foot_gap_percent>=0:# si le foot_gap est positif
if acos(self.high/(self.shin+self.thigh)) > self.ankle_limit_front:
# construction 1 knee!=0
gap1 = sin(self.ankle_limit_front)*self.shin
high1 = cos(self.ankle_limit_front)*self.shin
high2 = self.high - high1
gap2 = sqrt(self.thigh**2-high2**2)
foot_gap_max = gap1 + gap2
foot_gap = foot_gap_percent * foot_gap_max / 100
self.update_foot_gap(foot_gap)
else:
#construction 2 knee=0
foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)
foot_gap = foot_gap_percent * foot_gap_max / 100
self.update_foot_gap(foot_gap)
if foot_gap_percent<0:
if -acos((self.high-self.thigh)/self.shin )< self.ankle_limit_back:
#construction 1 knee!=0
print degrees(self.ankle_limit_back)
print degrees(-acos((self.high-self.thigh)/self.shin ))
gap1 = sin(self.ankle_limit_back)*self.shin
high1 = cos(self.ankle_limit_back)*self.shin
high2 = self.high - high1
print gap1,high1,high2
gap2 = sqrt(self.thigh**2-high2**2)
print gap1,gap2,high1,high2
foot_gap_max = gap1 + gap2
foot_gap = -foot_gap_percent * foot_gap_max / 100
self.update_foot_gap(foot_gap)
else:
#constrution 2 knee=0
foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)
foot_gap = foot_gap_percent * foot_gap_max / 100
self.update_foot_gap(foot_gap)
def update_high_percent(self,high_percent,high_min,high_max):
high_var = high_max-high_min
high = (high_percent*high_var/100)+high_min
self.update_high(high)
def high_limit(self):
high_max = sqrt((self.shin+self.thigh)**2-self.foot_gap**2)
high1_min = cos(self.ankle_limit_back)*self.shin
gap2 = self.foot_gap-sin(self.ankle_limit_back)*self.shin
# si gap2 est supérieur a thigh alors ce n'est plus la flexion de la cheville qui est limitante
# dans ce cas on met la hauteur a zero
if gap2 <= self.thigh:
high2_min = sqrt(self.thigh**2-gap2**2)
high_min = high1_min + high2_min
else:
high_min = 0
return [high_min,high_max]
Explanation: It is now possible to define a mobility in percentage, according to the angle limit of ankle.
End of explanation
from pypot.primitive import Primitive
class leg_primitive(Primitive):
def __init__(self,robot,speed,knee=0):
self.right = leg_move(robot.l_ankle_y,knee)# il faudrait mettre r_ankle_y mais les angles limites semblent faux, c'est l'opposé
self.left = leg_move(robot.l_ankle_y,knee)
self.robot = robot
Primitive.__init__(self, robot)
self.high_percent = 100
self.r_foot_gap_percent = 0
self.l_foot_gap_percent = 0
self.speed = speed
def run(self):
if self.high_percent !=-1:
high_limit=(max([self.right.high_limit()[0],self.left.high_limit()[0]]),min([self.right.high_limit()[1],self.left.high_limit()[1]]))
self.right.update_high_percent(self.high_percent,high_limit[0],high_limit[1])
self.left.update_high_percent(self.high_percent,high_limit[0],high_limit[1])
if self.r_foot_gap_percent !=-1:
self.right.update_foot_gap_percent(self.r_foot_gap_percent)
if self.l_foot_gap_percent !=-1:
self.left.update_foot_gap_percent(self.l_foot_gap_percent)
print "left - ankle" ,degrees(self.left.ankle),'knee', degrees(self.left.knee),'hip', degrees(self.left.hip), 'high', self.left.high,'foot_gap',self.left.foot_gap
print "right - ankle" ,degrees(self.right.ankle),'knee', degrees(self.right.knee),'hip', degrees(self.right.hip), 'high', self.right.high,'foot_gap',self.right.foot_gap
self.robot.l_ankle_y.goto_position(degrees(self.left.ankle),self.speed)
self.robot.r_ankle_y.goto_position(degrees(self.right.ankle),self.speed)
self.robot.l_knee_y.goto_position(degrees(self.left.knee),self.speed)
self.robot.r_knee_y.goto_position(degrees(self.right.knee),self.speed)
self.robot.l_hip_y.goto_position(degrees(self.left.hip),self.speed)
self.robot.r_hip_y.goto_position(degrees(self.right.hip),self.speed,wait=True)
Explanation: Finaly, a primitive can set the high and the foot gap of poppy.
End of explanation
leg=leg_primitive(poppy,speed=3)
leg.start()
time.sleep(1)
time.sleep(1)
leg.speed=3
leg.high_percent=50
leg.r_foot_gap_percent=20
leg.l_foot_gap_percent=-20
leg.start()
time.sleep(3)
leg.high_percent=100
leg.r_foot_gap_percent=-1
leg.l_foot_gap_percent=-1
leg.start()
time.sleep(3)
leg.high_percent=0
leg.start()
time.sleep(3)
leg.high_percent=80
leg.r_foot_gap_percent=-20
leg.l_foot_gap_percent=20
leg.start()
time.sleep(3)
leg.r_foot_gap_percent=-1
leg.l_foot_gap_percent=-1
leg.high_percent=0
leg.start()
time.sleep(3)
leg.high_percent=100
leg.r_foot_gap_percent=0
leg.l_foot_gap_percent=0
leg.start()
time.sleep(3)
Explanation: It is now possible to set the high and the foot gap using the leg_primitive.
End of explanation |
10,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boosting
The core of boosting is no longer voting but iteration: the accuracy is improved by optimizing the sample weights round after round.
Although boosting also looks, on the surface, like a perturbation of the training data (resampling / weight adjustment), the theory of boosting guarantees that it is essentially an optimization algorithm. The ensemble classifier as a whole has a single optimization objective, and the boosting training process can eventually make the ensemble classifier converge to the optimal Bayes decision, which lowers the bias (i.e. improves accuracy); bagging does not have this property.
The most common boosting algorithms are
AdaBoost
GradientBoosting
AdaBoost
The core idea of AdaBoost is to train a sequence of weak learners on repeatedly re-weighted data, and to combine the predictions of these weak learners by weighted voting (or weighted summation) into the final prediction. In each so-called boosting iteration, the data modification consists of new weights $w_1$, $w_2$, ..., $w_N$ applied to every training sample (i.e. the weight with which each training sample enters the next round of learning is modified). At initialization all the sample weights are set to $w_i = 1/N$, so the first iteration simply trains a weak learner on the original data. In the following iterations the sample weights are modified one by one and the learning algorithm is re-applied with these modified weights. In a given iteration, the weights of the samples that were predicted incorrectly in the previous round are increased, while the weights of the samples that were predicted correctly are decreased. As the number of iterations grows, the influence of the hard-to-predict samples becomes larger and larger, and every subsequent weak learner is forced to focus more on the samples that were misclassified before.
The AdaBoost interface in sklearn
sklearn provides two AdaBoost-related interfaces
Step1: Data preprocessing
Step2: Splitting the dataset
Step3: Training the model
Step4: Gradient Boosting
Gradient boosting is similar to AdaBoost, but there are a few differences | Python Code:
import requests
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn.ensemble import AdaBoostClassifier
Explanation: Boosting
The core of boosting is no longer voting but iteration: the accuracy is improved by optimizing the sample weights round after round.
Although boosting also looks, on the surface, like a perturbation of the training data (resampling / weight adjustment), the theory of boosting guarantees that it is essentially an optimization algorithm. The ensemble classifier as a whole has a single optimization objective, and the boosting training process can eventually make the ensemble classifier converge to the optimal Bayes decision, which lowers the bias (i.e. improves accuracy); bagging does not have this property.
The most common boosting algorithms are
AdaBoost
GradientBoosting
AdaBoost
The core idea of AdaBoost is to train a sequence of weak learners on repeatedly re-weighted data, and to combine the predictions of these weak learners by weighted voting (or weighted summation) into the final prediction. In each so-called boosting iteration, the data modification consists of new weights $w_1$, $w_2$, ..., $w_N$ applied to every training sample (i.e. the weight with which each training sample enters the next round of learning is modified). At initialization all the sample weights are set to $w_i = 1/N$, so the first iteration simply trains a weak learner on the original data. In the following iterations the sample weights are modified one by one and the learning algorithm is re-applied with these modified weights. In a given iteration, the weights of the samples that were predicted incorrectly in the previous round are increased, while the weights of the samples that were predicted correctly are decreased. As the number of iterations grows, the influence of the hard-to-predict samples becomes larger and larger, and every subsequent weak learner is forced to focus more on the samples that were misclassified before.
The AdaBoost interface in sklearn
sklearn provides two AdaBoost-related interfaces:
ensemble.AdaBoostClassifier([…]) AdaBoost classifier
ensemble.AdaBoostRegressor([base_estimator, …]) AdaBoost regressor
Regarding the parameters:
base_estimator specifies the weak learner, which defaults to a CART tree,
n_estimators is the number of weak learners
learning_rate controls how much each weak learner contributes to the final result (the rate at which the weights are modified)
The main parameters to tune for a good prediction are n_estimators and the complexity of the base_estimator (for example, when the weak learner is a decision tree, the tree depth max_depth or the minimum number of samples per leaf min_samples_leaf are the parameters that control the tree complexity).
End of explanation
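As a minimal sketch of the tuning advice above (the parameter values below are illustrative assumptions, not recommended settings), the complexity of the weak learner can be controlled through base_estimator while learning_rate scales each learner's contribution:
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
# a shallow tree (a decision stump) keeps each weak learner simple
stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=5)
ab_tuned = AdaBoostClassifier(base_estimator=stump, n_estimators=200, learning_rate=0.5)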
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text
row_name = ['sepal_length','sepal_width','petal_length','petal_width','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
encs = {}
encs["feature"] = StandardScaler()
encs["feature"].fit(dataset[row_name[:-1]])
table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1])
encs["label"]=LabelEncoder()
encs["label"].fit(dataset["label"])
table["label"] = encs["label"].transform(dataset["label"])
table[:10]
Explanation: Data preprocessing
End of explanation
train_set,validation_set = train_test_split(table)
Explanation: Splitting the dataset
End of explanation
ab= AdaBoostClassifier(n_estimators=100)
ab.fit(train_set[row_name[:-1]], train_set["label"])
pre = ab.predict(validation_set[row_name[:-1]])
print(classification_report(validation_set["label"],pre))
Explanation: Training the model
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
gb= GradientBoostingClassifier(n_estimators=100)
gb.fit(train_set[row_name[:-1]], train_set["label"])
pre = gb.predict(validation_set[row_name[:-1]])
print(classification_report(validation_set["label"],pre))
Explanation: Gradient Boosting
Gradient boosting is similar to AdaBoost, but there are a few differences:
AdaBoost keeps a weight for every sample; the larger a sample's prediction error, the larger its weight
gradient boosting instead fits the residuals directly with gradients, without any notion of sample weights
The Gradient Boosting interface in sklearn
sklearn provides two gradient-boosting-related interfaces:
ensemble.GradientBoostingClassifier([loss, …]) gradient boosting classifier
ensemble.GradientBoostingRegressor([loss, …]) gradient boosting regressor
The parameters are used in much the same way as in the AdaBoost interface, except for an extra loss parameter that selects the loss function; there are two choices:
deviance uses the logistic loss and is the default
exponential makes gradient boosting behave just like AdaBoost
End of explanation |
10,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
N = len(input_feature)
# compute the sum of input_feature and output
sum_input = input_feature.sum()
sum_output = output.sum()
# compute the product of the output and the input_feature and its sum
product = input_feature * output
product_sum = product.sum()
# compute the squared value of the input_feature and its sum
input_square = input_feature * input_feature
sum_input_square = input_square.sum()
# use the formula for the slope
slope = (product_sum - (sum_output * sum_input) * 1.0/N) / (sum_input_square - sum_input*sum_input*1.0/N)
# use the formula for the intercept
intercept = (sum_output - slope * sum_input) / N
return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
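For reference, the closed-form least-squares estimates implemented in the code above are
$$\text{slope} = \frac{\sum_i x_i y_i - \frac{1}{N}\big(\sum_i x_i\big)\big(\sum_i y_i\big)}{\sum_i x_i^2 - \frac{1}{N}\big(\sum_i x_i\big)^2}, \qquad \text{intercept} = \bar{y} - \text{slope}\cdot\bar{x},$$
where $x$ is the input feature, $y$ the output, and $N$ the number of observations.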
End of explanation
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = output - predictions
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
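In symbols, if $\hat{y}_i$ denotes the prediction for observation $i$, then $$RSS = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2.$$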
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept) * 1.0 / slope
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the square feet given the price? Since we have an equation y = a + b*x we can solve the function for x. So if we have the intercept (a), the slope (b) and the price (y), we can solve for the estimated square feet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
br_intercept, br_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_bedrooms_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], br_intercept, br_slope)
print 'RSS on TEST data using bedrooms: ' + str(rss_bedrooms_test)
# Compute RSS when using squarefeet on TEST data:
rss_sqft_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'RSS on TEST data using square feet: ' + str(rss_sqft_test)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
10,607 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Importing the necessary libraries required for Matrix Factorization using ALS
| Python Code::
import numpy as np
from pyspark.ml.recommendation import ALS
|
10,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Date Data With Gap In Values
Step2: Interpolate Missing Values
Step3: Forward-fill Missing Values
Step4: Backfill Missing Values
Step5: Interpolate Missing Values But Only Up One Value | Python Code:
# Load libraries
import pandas as pd
import numpy as np
Explanation: Title: Handling Missing Values In Time Series
Slug: handling_missing_values_in_time_series
Summary: How to handle the missing values in time series in pandas for machine learning in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Dates And Times
Authors: Chris Albon
Preliminaries
End of explanation
# Create date
time_index = pd.date_range('01/01/2010', periods=5, freq='M')
# Create data frame, set index
df = pd.DataFrame(index=time_index)
# Create feature with a gap of missing values
df['Sales'] = [1.0,2.0,np.nan,np.nan,5.0]
Explanation: Create Date Data With Gap In Values
End of explanation
# Interpolate missing values
df.interpolate()
Explanation: Interpolate Missing Values
End of explanation
# Forward-fill
df.ffill()
Explanation: Forward-fill Missing Values
End of explanation
# Back-fill
df.bfill()
Explanation: Backfill Missing Values
End of explanation
# Interpolate missing values
df.interpolate(limit=1, limit_direction='forward')
Explanation: Interpolate Missing Values But Only Up One Value
End of explanation |
10,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In the tutorial, you learned about different ways of measuring fairness of a machine learning model. In this exercise, you'll train a few models to approve (or deny) credit card applications and analyze fairness. Don't worry if you're new to coding
Step1: The dataset contains, for each applicant
Step2: The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.
- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).
- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).
1) Varieties of fairness
Consider three different types of fairness covered in the tutorial
Step3: Run the next code cell without changes to visualize the model.
Step4: The flowchart shows how the model makes decisions
Step5: Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Run the next code cell to see how this new group unaware model performs.
Step6: 3) Varieties of fairness, part 2
How does this model compare to the first model you trained, when you consider demographic parity, equal accuracy, and equal opportunity? Once you have an answer, run the next code cell.
Step7: You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about here.)
Run the next code cell without changes to evaluate this new model.
Step8: 4) Varieties of fairness, part 3
How does this final model compare to the previous models, when you consider demographic parity, equal accuracy, and equal opportunity? | Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex4 import *
import pandas as pd
from sklearn.model_selection import train_test_split
# Load the data, separate features from target
data = pd.read_csv("../input/synthetic-credit-card-approval/synthetic_credit_card_approval.csv")
X = data.drop(["Target"], axis=1)
y = data["Target"]
# Break into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
# Preview the data
print("Data successfully loaded!\n")
X_train.head()
Explanation: In the tutorial, you learned about different ways of measuring fairness of a machine learning model. In this exercise, you'll train a few models to approve (or deny) credit card applications and analyze fairness. Don't worry if you're new to coding: this exercise assumes no programming knowledge.
Introduction
We work with a synthetic dataset of information submitted by credit card applicants.
To load and preview the data, run the next code cell. When the code finishes running, you should see a message saying the data was successfully loaded, along with a preview of the first five rows of the data.
End of explanation
from sklearn import tree
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Train a model and make predictions
model_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_baseline.fit(X_train, y_train)
preds_baseline = model_baseline.predict(X_test)
# Function to plot confusion matrix
def plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=["Deny", "Approve"],
include_values=True, xticks_rotation='horizontal', values_format='',
normalize=None, cmap=plt.cm.Blues):
cm = confusion_matrix(y_true, y_pred, normalize=normalize)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)
return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,
values_format=values_format)
# Function to evaluate the fairness of the model
def get_stats(X, y, model, group_one, preds):
y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]
y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]
print("Total approvals:", preds.sum())
print("Group A:", preds_zero.sum(), "({}% of approvals)".format(round(preds_zero.sum()/sum(preds)*100, 2)))
print("Group B:", preds_one.sum(), "({}% of approvals)".format(round(preds_one.sum()/sum(preds)*100, 2)))
print("\nOverall accuracy: {}%".format(round((preds==y).sum()/len(y)*100, 2)))
print("Group A: {}%".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))
print("Group B: {}%".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))
cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)
disp_zero.ax_.set_title("Group A")
cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)
disp_one.ax_.set_title("Group B")
print("\nSensitivity / True positive rate:")
print("Group A: {}%".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))
print("Group B: {}%".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))
# Evaluate the model
get_stats(X_test, y_test, model_baseline, X_test["Group"]==1, preds_baseline)
Explanation: The dataset contains, for each applicant:
- income (in the Income column),
- the number of children (in the Num_Children column),
- whether the applicant owns a car (in the Own_Car column, the value is 1 if the applicant owns a car, and is else 0), and
- whether the applicant owns a home (in the Own_Housing column, the value is 1 if the applicant owns a home, and is else 0)
When evaluating fairness, we'll check how the model performs for users in different groups, as identified by the Group column:
- The Group column breaks the users into two groups (where each group corresponds to either 0 or 1).
- For instance, you can think of the column as breaking the users into two different races, ethnicities, or gender groupings. If the column breaks users into different ethnicities, 0 could correspond to a non-Hispanic user, while 1 corresponds to a Hispanic user.
Run the next code cell without changes to train a simple model to approve or deny individuals for a credit card. The output shows the performance of the model.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_1.check()
Explanation: The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.
- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).
- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).
1) Varieties of fairness
Consider three different types of fairness covered in the tutorial:
- Demographic parity: Which group has an unfair advantage, with more representation in the group of approved applicants? (Roughly 50% of applicants are from Group A, and 50% of applicants are from Group B.)
- Equal accuracy: Which group has an unfair advantage, where applicants are more likely to be correctly classified?
- Equal opportunity: Which group has an unfair advantage, with a higher true positive rate?
End of explanation
def visualize_model(model, feature_names, class_names=["Deny", "Approve"], impurity=False):
plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)
[process_plot_item(item) for item in plot_list]
def process_plot_item(item):
split_string = item.get_text().split("\n")
if split_string[0].startswith("samples"):
item.set_text(split_string[-1])
else:
item.set_text(split_string[0])
plt.figure(figsize=(20, 6))
plot_list = visualize_model(model_baseline, feature_names=X_train.columns)
Explanation: Run the next code cell without changes to visualize the model.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_2.check()
Explanation: The flowchart shows how the model makes decisions:
- Group <= 0.5 checks what group the applicant belongs to: if the applicant belongs to Group A, then Group <= 0.5 is true.
- Entries like Income <= 80210.5 check the applicant's income.
To follow the flow chart, we start at the top and trace a path depending on the details of the applicant. If the condition is true at a split, then we move down and to the left branch. If it is false, then we move to the right branch.
For instance, consider an applicant in Group B, who has an income of 75k. Then,
- We start at the top of the flow chart. the applicant has an income of 75k, so Income <= 80210.5 is true, and we move to the left.
- Next, we check the income again. Since Income <= 71909.5 is false, we move to the right.
- The last thing to check is what group the applicant belongs to. The applicant belongs to Group B, so Group <= 0.5 is false, and we move to the right, where the model has decided to approve the applicant.
2) Understand the baseline model
Based on the visualization, how can you explain one source of unfairness in the model?
Hint: Consider the example applicant, but change the group membership from Group B to Group A (leaving all other characteristics the same). Is this slightly different applicant approved or denied by the model?
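If you prefer to check the hint programmatically instead of tracing the flowchart by hand, a quick sketch (income is the example's 75k; the remaining feature values are hypothetical, and only group membership differs between the two rows):
example = pd.DataFrame([
    {"Group": 1, "Income": 75000, "Num_Children": 0, "Own_Car": 1, "Own_Housing": 1},
    {"Group": 0, "Income": 75000, "Num_Children": 0, "Own_Car": 1, "Own_Housing": 1},
])[X_train.columns]  # reorder the columns to match the training data
print(model_baseline.predict(example))  # 1 = approve, 0 = deny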
End of explanation
# Create new dataset with group membership removed
X_train_unaware = X_train.drop(["Group"],axis=1)
X_test_unaware = X_test.drop(["Group"],axis=1)
# Train new model on new dataset
model_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_unaware.fit(X_train_unaware, y_train)
# Evaluate the model
preds_unaware = model_unaware.predict(X_test_unaware)
get_stats(X_test_unaware, y_test, model_unaware, X_test["Group"]==1, preds_unaware)
Explanation: Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Run the next code cell to see how this new group unaware model performs.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_3.check()
Explanation: 3) Varieties of fairness, part 2
How does this model compare to the first model you trained, when you consider demographic parity, equal accuracy, and equal opportunity? Once you have an answer, run the next code cell.
End of explanation
# Change the value of zero_threshold to hit the objective
zero_threshold = 0.11
one_threshold = 0.99
# Evaluate the model
test_probs = model_unaware.predict_proba(X_test_unaware)[:,1]
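# Apply group-specific cutoffs: Group A applicants (Group==0) are approved when their predicted probability exceeds zero_threshold, Group B applicants when it exceeds one_threshold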
preds_approval = (((test_probs>zero_threshold)*1)*[X_test["Group"]==0] + ((test_probs>one_threshold)*1)*[X_test["Group"]==1])[0]
get_stats(X_test, y_test, model_unaware, X_test["Group"]==1, preds_approval)
Explanation: You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about here.)
Run the next code cell without changes to evaluate this new model.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_4.check()
Explanation: 4) Varieties of fairness, part 3
How does this final model compare to the previous models, when you consider demographic parity, equal accuracy, and equal opportunity?
End of explanation |
10,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resonant excitation
We want to study the behaviour of an undercritically damped SDOF system when it is
subjected to a harmonic force $p(t) = p_o \sin\omega_nt$, i.e., when the excitation frequency equals the free vibration frequency of the system.
Of course, $\beta=1$, $D(\beta,\zeta)|{\beta=1}=\displaystyle\frac{1}{2\zeta}$
and $\theta=\pi/2$, hence $$\xi(t)=\Delta{st}\,\frac{1}{2\zeta}\cos\omega_nt.$$
Starting from rest conditions, we have
$$\frac{x(t)}{\Delta_{st}} = \exp(-\zeta\omega_n t)\left(
-\frac{\omega_n}{2\omega_D}\sin(\omega_n t)
-\frac{1}{2\zeta}\cos(\omega_n t)\right) + \frac{1}{2\zeta}\cos(\omega_n t)$$
and, multiplying both sides by $2\zeta$
\begin{align}
x(t)\frac{2\zeta}{\Delta_{st}} = \bar{x}(t)& =
\exp(-\zeta\omega_n t)\left(
-\zeta\frac{\omega_n}{\omega_D}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t)\
& = \exp(-\zeta\omega_n t)\left(
-\frac{\zeta}{\sqrt{1-z^2}}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t).
\end{align}
We have now a normalized function of time that grows, oscillating, from 0 to 1,
where the free parameters are just $\omega_n$ and $\zeta$.
To go further, we set arbitrarily $\omega_n=2\pi$ (our plots will be nicer...)
and have just a dependency on $t$ and $\zeta$.
Eventually, we define a function of $\zeta$ that returns a function of $t$ only,
here it is...
Step1: Above we compute some constants that depend on $\zeta$,
i.e., the damped frequency and the coefficient in
front of the sine term, then we define a function of time
in terms of these constants and of $\zeta$ itself.
Because we are going to use this function with a vector argument,
the last touch is to vectorize the function just before returning it
to the caller.
Plotting our results
We start by using a function defined in the pylab aka pl module to
generate a vector whose entries are 1001 equispaced real numbers, starting from
zero and up to 20, inclusive of both ends, and assigning the name t to this vector.
Step2: We want to see what happens for different values of $\zeta$, so we create
a list of values and assign the name zetas to this list.
Step3: Now, the real plotting
Step4: Wait a minute!
So, after all this work, we have that the greater the damping, the smaller the
number of cycles that's needed to reach the maximum value of the response...
Yes, it's exactly like that, and there is a reason. Think of it.
.
.
.
.
.
.
.
.
.
.
We have normalized the response functions to have always a maximum absolute
value of one, but in effect the max values are different, and a heavily damped
system needs fewer cycles to reach steady-state because the maximum value is much,
much smaller.
Let's plot the unnormalized (well, there's still the $\Delta_{st}$ normalization)
responses.
Note the differences with above | Python Code:
# make the pylab names used below available explicitly
import pylab as pl
from pylab import pi, sqrt, sin, cos, exp
def x_2z_over_dst(z):
w = 2*pi
# beta = 1, wn =w
wd = w*sqrt(1-z*z)
# Clough Penzien p. 43
A = z/sqrt(1-z*z)
def f(t):
return (cos(wd*t)+A*sin(wd*t))*exp(-z*w*t)-cos(w*t)
return pl.vectorize(f)
Explanation: Resonant excitation
We want to study the behaviour of an undercritically damped SDOF system when it is
subjected to a harmonic force $p(t) = p_o \sin\omega_nt$, i.e., when the excitation frequency equals the free vibration frequency of the system.
Of course, $\beta=1$, $D(\beta,\zeta)|_{\beta=1}=\displaystyle\frac{1}{2\zeta}$
and $\theta=\pi/2$, hence $$\xi(t)=\Delta_{st}\,\frac{1}{2\zeta}\cos\omega_nt.$$
Starting from rest conditions, we have
$$\frac{x(t)}{\Delta_{st}} = \exp(-\zeta\omega_n t)\left(
-\frac{\omega_n}{2\omega_D}\sin(\omega_n t)
-\frac{1}{2\zeta}\cos(\omega_n t)\right) + \frac{1}{2\zeta}\cos(\omega_n t)$$
and, multiplying both sides by $2\zeta$
\begin{align}
x(t)\frac{2\zeta}{\Delta_{st}} = \bar{x}(t)& =
\exp(-\zeta\omega_n t)\left(
-\zeta\frac{\omega_n}{\omega_D}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t)\\
& = \exp(-\zeta\omega_n t)\left(
-\frac{\zeta}{\sqrt{1-z^2}}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t).
\end{align}
We have now a normalized function of time that grows, oscillating, from 0 to 1,
where the free parameters are just $\omega_n$ and $\zeta$.
To go further, we set arbitrarily $\omega_n=2\pi$ (our plots will be nicer...)
and have just a dependency on $t$ and $\zeta$.
Eventually, we define a function of $\zeta$ that returns a function of $t$ only,
here it is...
End of explanation
t = pl.linspace(0,20,1001)
print(t)
Explanation: Above we compute some constants that depend on $\zeta$,
i.e., the damped frequency and the coefficient in
front of the sine term, then we define a function of time
in terms of these constants and of $\zeta$ itself.
Because we are going to use this function with a vector argument,
the last touch is to vectorize the function just before returning it
to the caller.
Plotting our results
We start by using a function defined in the pylab aka pl module to
generate a vector whose entries are 1001 equispaced real numbers, starting from
zero and up to 20, inclusive of both ends, and assigning the name t to this vector.
End of explanation
zetas = (.02, .05, .10, .20)
print(zetas)
Explanation: We want to see what happens for different values of $\zeta$, so we create
a list of values and assign the name zetas to this list.
End of explanation
for z in zetas:
# call the function of zeta that returns
# a function of time, assign the name bar_x to this function
bar_x = x_2z_over_dst(z)
# do the plotting...
pl.plot(t,bar_x(t))
pl.ylim((-1.0, 1.0))
pl.title(r'$\zeta=%4.2f$'%(z,))
pl.show()
Explanation: Now, the real plotting:
z takes in turn each of the values in zetas,
then we generate a function of time for the current z
we generate a plot with a line that goes through the point
(a(0),b(0)), (a(1),b(1)), (a(2),b(2)), ...
where, in our case, a is the vector t and b is the vector
returned from the vectorized function bar_x
we make a slight adjustment to the extreme values of the y-axis
of the plot
we give a title to the plot
we FORCE (pl.show()) the plot to be produced.
End of explanation
t = pl.linspace(0,5,501)
for z in zetas:
# call the function of zeta that returns
# a function of time, assign the name bar_x to this function
bar_x = x_2z_over_dst(z)
# do the plotting...
pl.plot(t,bar_x(t)/2/z, label=r'$\zeta=%4.2f$'%(z,))
pl.legend(ncol=5,loc='lower center', fancybox=1, shadow=1, framealpha=.95)
pl.grid()
Explanation: Wait a minute!
So, after all this work, we have that the greater the damping, the smaller the
number of cycles that's needed to reach the maximum value of the response...
Yes, it's exactly like that, and there is a reason. Think of it.
.
.
.
.
.
.
.
.
.
.
We have normalized the response functions to have always a maximum absolute
value of one, but in effect the max values are different, and a heavily damped
system needs fewer cycles to reach steady-state because the maximum value is much,
much smaller.
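Concretely, the steady-state amplification at resonance is $1/(2\zeta)$, i.e. $25$, $10$, $5$ and $2.5$ for $\zeta=0.02, 0.05, 0.10, 0.20$ respectively, so the lightly damped responses are an order of magnitude larger than the heavily damped ones.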
Let's plot the unnormalized (well, there's still the $\Delta_{st}$ normalization)
responses.
Note the differences with above:
we focus on a shorter interval of time and, in each step
we don't add a title
we don't force the creation of a distinct plot in each cycle,
we add a label to each curve
at the end of the cycle,
we ask for the generation of a legend that uses the labels
we specified to generate a, well, a legend for the curves
we ask to plot all the properly labeled curves using pl.plot().
End of explanation |
10,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setting the hierarchy in DisMod-MR
The goal of this document is to demonstrate how to set the spatial hierarchy for the random effects in DisMod-MR.
The examples are based on a spatial hierarchy of Japan, provided by Ver Bilano, and included in the examples directory.
Step1: First, we will use simulation to generate $n$ rows of input data.
Step2: The following code generates a single level hierarchy, with all prefectures below the national level
Step3: That is all there is to it!
Step4: To use a two-level hierarchy instead, simply build the regions into the hierarchy graph
Step5: It would be great to extend this example so that the results differed in a meaningful way when using one- and two-level hierarchical models. This is left as an exercise to the reader. | Python Code:
import dismod_mr, numpy as np, pandas as pd
df = pd.read_csv('hierarchy.csv')
df.head()
Explanation: Setting the hierarchy in DisMod-MR
The goal of this document is to demonstrate how to set the spatial hierarchy for the random effects in DisMod-MR.
The examples are based on a spatial hierarchy of Japan, provided by Ver Bilano, and included in the examples directory.
End of explanation
import random
n = 100
dm = dismod_mr.data.ModelData()
inp = pd.DataFrame(columns=dm.input_data, index=range(n))
# data type, value, and uncertainty
inp['data_type'] = 'p'
inp['value'] = .5 + .1*np.random.randn(n)
inp['effective_sample_size'] = 1000.
# geographic information (to be used for random effects)
inp['area'] = [random.choice(df.Prefecture) for i in range(n)]
inp['sex'] = 'total'
inp['age_start'] = 50
inp['age_end'] = 50
inp['standard_error'] = np.nan
inp['upper_ci'] = np.nan
inp['lower_ci'] = np.nan
# put data in model
dm.input_data = inp
# set model parameters for simple fit
dm.parameters['p'] = {'level_value': {'age_after': 100, 'age_before': 1, 'value': 0.},
'parameter_age_mesh': [0, 100]}
Explanation: First, we will use simulation to generate $n$ rows of input data.
End of explanation
for p in df.Prefecture:
dm.hierarchy.add_edge('all', p)
Explanation: The following code generates a single level hierarchy, with all prefectures below the national level:
End of explanation
dm.vars = dismod_mr.model.asr(dm, 'p', rate_type='neg_binom')
%time dismod_mr.fit.asr(dm, 'p', iter=10_000, burn=5_000, thin=5)
dismod_mr.plot.effects(dm, 'p', figsize=(18,10))
Explanation: That is all there is to it!
End of explanation
dm = dismod_mr.data.ModelData()
dm.input_data = inp
dm.parameters['p'] = {'level_value': {'age_after': 100, 'age_before': 1, 'value': 0.},
'parameter_age_mesh': [0, 100]}
for i, row in df.iterrows():
dm.hierarchy.add_edge('all', row['Region'])
dm.hierarchy.add_edge(row['Region'], row['Prefecture'])
dm.vars = dismod_mr.model.asr(dm, 'p', rate_type='neg_binom')
%time dismod_mr.fit.asr(dm, 'p', iter=10_000, burn=5_000, thin=5)
dismod_mr.plot.effects(dm, 'p', figsize=(18,14))
Explanation: To use a two-level hierarchy instead, simply build the regions into the hierarchy graph:
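If you want to sanity-check the graph you just built (dm.hierarchy exposes a networkx-style interface, judging from the add_edge calls above), something along these lines should work:
print(len(dm.hierarchy.nodes()), 'nodes in the hierarchy')
print(list(dm.hierarchy.successors('all')))  # the regions sitting directly below the national level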
End of explanation
!date
Explanation: It would be great to extend this example so that the results differed in a meaningful way when using one- and two-level hierarchical models. This is left as an exercise to the reader.
End of explanation |
10,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
Step1: 1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step2: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step3: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes to we have? What are we trying to predict?
For context of the data, see the documentation here
Step4: 4. Using the breast cancer data, create a classifier to predict the type of seed. Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
Step5: 50-50
Step6: 75-25 | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
from sklearn import tree
from sklearn.model_selection import train_test_split
from pandas.plotting import scatter_matrix
Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
End of explanation
iris = datasets.load_iris()
iris
print(iris.feature_names)
type(iris['data'])
characteristics = iris.data[:,2:]
species = iris.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(characteristics,species)
characteristics_train, characteristics_test, species_train, species_test = train_test_split(characteristics,species,test_size=0.5,train_size=0.5)
dt = dt.fit(characteristics_train,species_train)
dt
from sklearn import metrics
def measure_performance(characteristics,species,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
species_pred=clf.predict(characteristics)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(species, species_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(species, species_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(species, species_pred),"\n")
measure_performance(characteristics_test,species_test,dt)
# ACCURACY: The model predicts the species correctly for 94.7% of plant samples
# PRECISION:
# Species 1 is predicted precisely for all cases -- no false negatives/false positives
# For species 2, the model predicted 96% of cases precisely as true positives, 4% were false positives
# For species 3, the model predicted 90% of cases precisely as true positives, 10% were false positives
# CONFUSION MATRIX
# 23 plant samples were classified as Iris species 1
# 22 plant samples were classified as Iris species 2, with 3 more being falsely labelled as species 3
# 26 plant samples were classified as Iris species 3, with 1 more being falsely labelled as species 2
# Given the fact that the model doesn't fit 100% it seems at least not to be overfitting
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
species_pred = dt.fit(characteristics_train, species_train).predict(characteristics_test)
cm = metrics.confusion_matrix(species_test, species_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
Explanation: 1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
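If you would like the tree visual mentioned in the exercise, one option is to export the fitted tree in Graphviz format (this assumes Graphviz is available to render the resulting .dot file):
tree.export_graphviz(dt, out_file='iris_tree.dot',
                     feature_names=iris.feature_names[2:],
                     class_names=iris.target_names,
                     filled=True)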
End of explanation
characteristics_train, characteristics_test, species_train, species_test = train_test_split(characteristics,species,test_size=0.25,train_size=0.75)
measure_performance(characteristics_test,species_test,dt)
# ACCURACY: The model predicts the species correctly for 100% of plant samples
# CONFUSION MATRIX
# 13 plant samples were classified as Iris species 1
# 10 plant samples were classified as Iris species 2
# 15 plant samples were classified as Iris species 3
# Maybe the test dataset is too small when setting it to a share of 25% of all data: so that the training data already
# covers all eventualities and thus is overfitting;
# not enough variability in test dataset to highlight inaccuracy of the model
# What's a good split for training vs test data? (Maybe depends on overall size?)
Explanation: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
End of explanation
# Only one donor? (Nick Street) --> big issue with marker sensitivity in detection!
# With a males-sounding first name (having breast cancer in 1995, so at least 30 years old) for breast cancer cells?
cancer = datasets.load_breast_cancer()
# Reading up on scikit learn -- documentation is not that good, this one is a bit better:
# https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/base.py
#WE ARE TRYING TO PREDICT WHETHER A TUMOR IS MALIGNANT OR BENIGN
print(cancer.target_names)
#THESE ARE ALL THE ATTRIBUTES AVAILABLE
print(cancer.feature_names)
#BASIC DESCRIPTIVE STATISTICS BELOW IN THE TABLE
print(cancer.DESCR)
Explanation: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
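As a side note, scatter_matrix is imported at the top but does not appear to be used in this exercise; for a quick visual feel for a few of the attributes, something along these lines works:
cancer_df = pd.DataFrame(cancer.data[:, :4], columns=cancer.feature_names[:4])
scatter_matrix(cancer_df, figsize=(8, 8))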
End of explanation
markers = cancer.data[:,:]
seeds = cancer.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(markers,seeds)
Explanation: 4. Using the breast cancer data, create a classifier to predict the tumor class (malignant or benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
End of explanation
markers_train, markers_test, seeds_train, seeds_test = train_test_split(markers,seeds,test_size=0.5,train_size=0.5)
dt = dt.fit(markers_train,seeds_train)
def measure_performance(markers,seeds,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
seeds_pred=clf.predict(markers)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(seeds, seeds_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(seeds, seeds_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(seeds, seeds_pred),"\n")
measure_performance(markers_test,seeds_test,dt)
# malignant = 0
# benign = 1
# ACCURACY
# The classifier predicts 88 percent of samples correctly
# CONFUSION MATRIX:
# 47 samples are correctly predicted as malignant, whereas there are 5 that are malignant, but classified as benign
# 80 samples are correctly predicted as benign, whereas there are 11 that are benign, but classified as malignant
# PRECISION
# The matter outlined above translates to the following precision:
# For malignant samples, the model predicted 81% of cases precisely as true positives, 9% were false positives
# For benign samples, the model predicted 94% of cases precisely as true positives, 6% were false positives
Explanation: 50-50
End of explanation
markers_train, markers_test, seeds_train, seeds_test = train_test_split(markers,seeds,test_size=0.25,train_size=0.75)
dt = dt.fit(markers_train,seeds_train)
def measure_performance(markers,seeds,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
seeds_pred=clf.predict(markers)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(seeds, seeds_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(seeds, seeds_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(seeds, seeds_pred),"\n")
measure_performance(markers_test,seeds_test,dt)
# With the 75-25 split, the classifier performs better
# ACCURACY
# The classifier predicts 94 percent of samples correctly
# CONFUSION MATRIX:
# 46 samples are correctly predicted as malignant, whereas there are 5 that are malignant, but classified as benign
# 89 samples are correctly predicted as benign, whereas there are 3 that are benign, but classified as malignant
# PRECISION
# The matter outlined above translates to the following precision:
# For malignant samples, the model predicted 94% of cases precisely as true positives, 6% were false positives
# For benign samples, the model predicted 95% of cases precisely as true positives, 5% were false positives
Explanation: 75-25
End of explanation |
10,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step1: Player and odds data from 2010-2016 has beeen matched and stored. Retrieve, merge, and rename.
Step5: Get additional training data. We'll include data from 2005, excluding Davis Cup matches, and focusing exclusively on players who played in our 2010-2016 set.
Step6: II. Calculate match statistics
Many of the features we will develop will involve quantities derived from previous matches. Examples include weighted historical averages of winning first serves, double faults, etc. This section calculates some important quantities for each match.
Insert index on matches_hist
Step7: Extract text string indicating unusual match outcomes
Step8: Calculate games scores for each set, store in a separate dataframe
Step9: Store the number of games played in each match.
Step10: It seems a few matches were cut short for no recorded reason
Step11: Calcuate first serve percentages (fsp) for both winners and losers.
Step12: Calculate winning first serve percentages (wfs)**
Step13: Calculate second serves in (2ndIn) for both winners and losers
Step14: Calculate second serve (ssp)* percentages *
Step15: Calculate wining second serve percentages (wss)**
Step16: Calculate overall win on serve percentages (wsp)**
Step17: Calculate winning on return percentages (wrp).
[(# of opponent serves) - (# of opponent service victories)]/(# of opponent serves)
Step18: Calculate total points won percentage (tpw)**
Step19: Calculate double faults per game (dfpg), per game
There are a couple of bad entries for SvGms. We'll first fix those. (Note
Step20: Calculate aces per game (acpg), per game
Step21: Calculate break points saved percentage (bpsp)**
Step22: Flag games with premature closure, probably due to injury (retired)**
Step23: Flag games won as a walkover (wo)**
Step24: Calculate player completeness (complete), defined as
$$
wsp \times wrp
$$
Note
Step25: Calculate player service advantage (serveadv), defined as
$$
wsp_1 -wrp_2
$$
Note
Step26: Sanity check
Step27: III. Calculate features
Each feature generally requires some calculation. We'll use the bigger data set matches_hist to drive the calculations, and store the results in a data frame called features, of the same size as data.
Extract winner and loser data separately and concatenate into dataframe all_records with uniform column names.
Step28: Calculate surface weighting matrix. Note that the resulting values are quite different from Sipko's.
Step30: Function to calculate static features. (Specifically, calculate normalized rank, rankpts, age, height, hand features for every match in the dataset. Operates on the data as a whole, rather than by line.)
Step33: Get dynamic features (i.e. all features that require some time averaging.)
Step34: It will be help with indexing if we isolate all relevant features for players 1 and 0 into their own dataframes. This is what the following code does.
Step35: Features I and II
Step36: Feature III
Step37: Feature 4
Step38: Feature 5
Step39: Future Work
Proposed features | Python Code:
import sqlalchemy # pandas-mysql interface library
import sqlalchemy.exc # exception handling
from sqlalchemy import create_engine # needed to define db interface
import sys # for defining behavior under errors
import numpy as np # numerical libraries
import scipy as sp
import pandas as pd # for data analysis
import pandas.io.sql as sql # for interfacing with MySQL database
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
import IPython as IP
%matplotlib inline
import pdb
#%qtconsole
Explanation: <p style="text-align: center"> Extracting features</p>
Author: Carl Toews
File: extract_features.ipynb
Description:
In order to implement machine learning algorithms, we need to develop a set of informative features. Following Machine learning for predicting professional tennis matches [Sipko, 2015], for each match we assign one player to be "Player 0" and the other to be "Player 1", and call the outcome a 0 if Player 0 won and a 1 otherwise. For each match, we produce a set of features, each a measure of difference between some characteristic of the two players. The characteristics we consider include the following (many of these are from Sipko):
rank: rank
rankpts: rank points
height: height
hand: handedness binary (0 for right, 1 for left)
fsp: "first serve is valid" (percentage)
wfs: winning first serve (percentage)
wss: winning second serve (percentage )
wsp: winning on any serve (percentage)
wrp: winning on returns (percentage)
tpw: total points won (percentage)
acpg: average number of aces (per game)
dfpg: average number of double faults (per game)
bps: break points saved (percentage)
tmw: total matches won (percentage)
retired: binary (True if 1st match back since retirement)
fatigue: fatigue score (based on number of matches in past 3 days)
complete: player completeness score (Sipko)
serveadv: score to measure the relative advantage when serving
direct: head to head balance with a particular player
Note that rank, rankpts, height, and hand can be read from player data directly with no calculation. On the other hand, fsp, wfs, wss, wsp, wrp, tpw, acpg, dfpg, bps can be calculated for any given match, but for purposes of predicting a future match, need to be averaged over the historical record. Finally, tmw, retired, fatigue, complete, serveadv, and direct are not calculated "match-by-match", but rather derived from the historical record.
In order to derive these features, we'll first need to clean the data a bit. Specifically, we need to deal with missing or null values, as well as rectify incorrect values. Examples of issues include:
* The 'score' column contains inconsistent strings for indicating irregular outcomes
* Many matches don't include statistics such as ace rates, double faults, etc.
I. Data extraction
Import statements
End of explanation
pickle_dir = '../pickle_files/'
odds_file = 'odds.pkl'
matches_file = 'matches.pkl'
odds= pd.read_pickle(pickle_dir + odds_file)
matches= pd.read_pickle(pickle_dir + matches_file)
data = pd.merge(matches,odds[['PSW','PSL','key']],how='inner',on='key')
Explanation: Player and odds data from 2010-2016 has been matched and stored. Retrieve, merge, and rename.
End of explanation
# name of database
db_name = "tennis"
# name of db user
username = "testuser"
# db password for db user
password = "test623"
# location of atp data files
atpfile_directory = "../data/tennis_atp-master/"
# focus on most recent data; exclude Davis Cup stuff
startdate = '20050101'
enddate = '20161231'
engine = create_engine('mysql+mysqldb://' + username + ':' + password + '@localhost/' + db_name)
# get unique winners and losers in our set
players = tuple(pd.concat((data.winner_id,data.loser_id)).unique())
# load all data pertinent to any player
with engine.begin() as connection:
matches_hist = pd.read_sql_query("SELECT * FROM matches \
WHERE tourney_date >= '" + startdate + "' \
AND tourney_date <= '" + enddate + "' \
AND (winner_id IN %(p)s \
OR loser_id IN %(p)s) \
AND tourney_name NOT LIKE 'Davis%%';",connection,params={'p':players})
Explanation: Get additional training data. We'll include data from 2005, excluding Davis Cup matches, and focusing exclusively on players who played in our 2010-2016 set.
End of explanation
matches_hist['key'] = np.arange(len(matches_hist))
Explanation: II. Calculate match statistics
Many of the features we will develop will involve quantities derived from previous matches. Examples include weighted historical averages of winning first serves, double faults, etc. This section calculates some important quantities for each match.
Insert index on matches_hist
End of explanation
# scores are just numbers, unless something weird happened. Extract comments about irregular outcomes.
t=matches_hist.score.str.extractall('(?P<comment>[a-zA-Z]+.+)').xs(0,level='match')
matches_hist = pd.merge(matches_hist,t,how='outer',left_index=True, right_index=True)
matches_hist.comment.unique()
Explanation: Extract text string indicating unusual match outcomes
End of explanation
# discard comments and trailing white space
scores = matches_hist.score.str.replace('(?P<comment>[a-zA-Z]+.+)','')
scores = scores.str.replace('(?P<comment>\([0-9]+\))','').str.strip()
# split the game scores into columns of a dataframe
scores = scores.str.split('-|\s',expand=True)
scores.columns=['W1','L1','W2','L2','W3','L3','W4','L4','W5','L5']
scores = scores.apply(lambda x: pd.to_numeric(x))
Explanation: Calculate games scores for each set, store in a separate dataframe
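For example, a raw score of '6-4 3-6 7-6(5)' has the tiebreak annotation '(5)' stripped by the second replace, and then splits into W1=6, L1=4, W2=3, L2=6, W3=7, L3=6, with the unused set columns left as NaN.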
End of explanation
ngames = np.sum(scores,axis=1)
matches_hist.insert(0,'ngames',ngames.astype('int'))
Explanation: Store the number of games played in each match.
End of explanation
# sanity check: are matches involving fewer than 12 games identical to those with comments?
idx1 = (ngames<12)
idx2 = matches_hist.comment.notnull()
z=(idx1*1)*(idx2*1-1)
zz = np.where(np.abs(z))[0]
print("matches with weird outcomes: ")
print(matches_hist.loc[zz,'score'])
Explanation: It seems a few matches were cut short for no recorded reason:
End of explanation
matches_hist.insert(0,'w_fsp',matches_hist.w_1stIn/matches_hist.w_svpt)
matches_hist.insert(0,'l_fsp',matches_hist.l_1stIn/matches_hist.l_svpt)
Explanation: Calculate first serve percentages (fsp) for both winners and losers.
End of explanation
matches_hist.insert(0,'w_wfs',matches_hist.w_1stWon/matches_hist.w_svpt)
matches_hist.insert(0,'l_wfs',matches_hist.l_1stWon/matches_hist.l_svpt)
Explanation: Calculate winning first serve percentages (wfs)**
End of explanation
matches_hist.insert(0,'w_2ndIn',matches_hist.w_svpt-matches_hist.w_df-matches_hist.w_1stIn)
matches_hist.insert(0,'l_2ndIn',matches_hist.l_svpt-matches_hist.l_df-matches_hist.l_1stIn)
Explanation: Calculate second serves in (2ndIn) for both winners and losers
End of explanation
matches_hist.insert(0,'w_ssp',matches_hist.w_2ndIn/(matches_hist.w_2ndIn+matches_hist.w_df))
matches_hist.insert(0,'l_ssp',matches_hist.l_2ndIn/(matches_hist.l_2ndIn+matches_hist.l_df))
Explanation: Calculate second serve percentages (ssp)**
End of explanation
matches_hist.insert(0,'w_wss',matches_hist.w_2ndWon/matches_hist.w_2ndIn)
matches_hist.insert(0,'l_wss',matches_hist.l_2ndWon/matches_hist.l_2ndIn)
Explanation: Calculate winning second serve percentages (wss)**
End of explanation
#matches_hist.insert(0,'w_wsp',(matches_hist.w_1stWon + matches_hist.w_2ndWon)/matches_hist.w_svpt)
#matches_hist.insert(0,'l_wsp',(matches_hist.l_1stWon+matches_hist.l_2ndWon)/matches_hist.l_svpt)
matches_hist['w_wsp']=(matches_hist.w_1stWon + matches_hist.w_2ndWon)/matches_hist.w_svpt
matches_hist['l_wsp']=(matches_hist.l_1stWon+matches_hist.l_2ndWon)/matches_hist.l_svpt
Explanation: Calculate overall win on serve percentages (wsp)**
End of explanation
matches_hist.insert(0,'w_wrp',(matches_hist.l_svpt - matches_hist.l_1stWon \
- matches_hist.l_2ndWon)/(matches_hist.l_svpt))
matches_hist.insert(0,'l_wrp',(matches_hist.w_svpt - matches_hist.w_1stWon \
- matches_hist.w_2ndWon)/(matches_hist.w_svpt))
Explanation: Calculate winning on return percentages (wrp).
[(# of opponent serves) - (# of opponent service victories)]/(# of opponent serves)
End of explanation
matches_hist.insert(0,'w_tpw',(matches_hist.l_svpt\
-matches_hist.l_1stWon-matches_hist.l_2ndWon\
+matches_hist.w_1stWon +matches_hist.w_2ndWon)/\
(matches_hist.l_svpt + matches_hist.w_svpt))
matches_hist.insert(0,'l_tpw',(matches_hist.w_svpt\
-matches_hist.w_1stWon-matches_hist.w_2ndWon\
+matches_hist.l_1stWon +matches_hist.l_2ndWon)/\
(matches_hist.l_svpt + matches_hist.w_svpt))
Explanation: Calculate total points won percentage (tpw)**
End of explanation
idx = np.where(((matches_hist.w_SvGms == 0)|(matches_hist.l_SvGms==0)) & (matches_hist.ngames >1))
print(matches_hist.loc[idx[0],['w_df','l_df','w_SvGms','l_SvGms','score','ngames']])
matches_hist.loc[idx[0],'w_SvGms'] = matches_hist.ngames[idx[0]]/2
matches_hist.loc[idx[0],'l_SvGms'] = matches_hist.ngames[idx[0]]/2
print(matches_hist.loc[idx[0],['w_df','l_df','w_SvGms','l_SvGms','score','ngames']])
matches_hist.insert(0,'w_dfpg',matches_hist.w_df/matches_hist.w_SvGms)
matches_hist.insert(0,'l_dfpg',matches_hist.l_df/matches_hist.l_SvGms)
#matches_hist['w_dfpg']=matches_hist.w_df/matches_hist.w_SvGms
#matches_hist['l_dfpg']=matches_hist.l_df/matches_hist.l_SvGms
Explanation: Calculate double faults per game (dfpg), per game
There are a couple of bad entries for SvGms. We'll first fix those. (Note: the fix evenly partitions total games between both players. This could result in a fractional number of service games. My hunch is the total effect on the stats is minimal.)
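For example, a 6-4 6-3 match has 19 games in total, so under this fix each player is assigned 9.5 service games.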
End of explanation
matches_hist.insert(0,'w_acpg',matches_hist.w_ace/matches_hist.w_SvGms)
matches_hist.insert(0,'l_acpg',matches_hist.l_ace/matches_hist.l_SvGms)
#matches_hist['w_acpg']=matches_hist.w_ace/matches_hist.w_SvGms
#matches_hist['l_acpg']=matches_hist.l_ace/matches_hist.l_SvGms
Explanation: Calculate aces per game (acpg), per game
End of explanation
matches_hist.insert(0,'w_bps',matches_hist.w_bpSaved/matches_hist.w_bpFaced)
matches_hist.insert(0,'l_bps',matches_hist.l_bpSaved/matches_hist.l_bpFaced)
Explanation: Calculate break points saved percentage (bpsp)**
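Note that a player can face zero break points in a match, in which case this ratio is 0/0 and evaluates to NaN; one possible convention (not applied here) would be to count such matches as 100% saved, e.g. matches_hist.w_bps.fillna(1.).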
End of explanation
matches_hist.insert(0,'retired',0)
matches_hist.loc[(matches_hist.comment=='RET'),'retired']=1
Explanation: Flag games with premature closure, probably due to injury (retired)**
End of explanation
matches_hist.insert(0,'walkover',0)
matches_hist.loc[(matches_hist.comment=='W/O'),'walkover']=1
Explanation: Flag games won as a walkover (wo)**
End of explanation
matches_hist.insert(0,'w_complete',matches_hist.w_wsp*matches_hist.w_wrp)
matches_hist.insert(0,'l_complete',matches_hist.l_wsp*matches_hist.l_wrp)
Explanation: Calculate player completeness (complete), defined as
$$
wsp \times wrp
$$
Note: might be more useful to aggregate first, rather than compute on a per-match basis.
End of explanation
matches_hist.insert(0,'w_serveadv',matches_hist.w_wsp-matches_hist.l_wrp)
matches_hist.insert(0,'l_serveadv',matches_hist.l_wsp-matches_hist.w_wrp)
Explanation: Calculate player service advantage (serveadv), defined as
$$
wsp_1 - wrp_2
$$
Note: as with complete, it might be more useful to aggregate first
End of explanation
idx = matches_hist.comment.isnull()
labels = ['dfpg', 'acpg', 'tpw', 'wrp', 'wsp', 'wss', 'wfs', 'fsp', 'ssp', 'bps','complete']
for label in labels:
printstr = label + ": max for winner/loser is {:5.2f}/{:5.2f}, min for winner/loser is {:5.2f}/{:5.2f}"
v1 = eval('matches_hist.w_' + label + '[idx].max()')
v2 = eval('matches_hist.l_' + label + '[idx].max()')
v3 = eval('matches_hist.w_' + label + '[idx].min()')
v4 = eval('matches_hist.l_' + label + '[idx].min()')
print(printstr.format(v1,v2,v3,v4))
Explanation: Sanity check: investigate calculated quantities
End of explanation
# extract winner stats
w_records = matches_hist[['winner_id',
'tourney_date',
'tourney_id',
'match_num',
'ngames',
'key',
'w_acpg', # avg. no of aces per game
'w_dfpg', # avg no. of double faults per game
'w_tpw', # total points won
'w_wrp', # wining return percent
'w_wsp', # winning service percent
'w_wss', # winning second serve percent
'w_wfs', # winning first serve percent
'w_fsp', # good first serves percent
'w_ssp', # good second serves percent
'w_bps', # breakpoints saved percent
'retired',# 1 if loser retired prematurely
'walkover', # 1 if loser didn't show up
'surface', # 'Hard', 'Clay', or 'Grass'
'winner_age', # age
'winner_ht', # height
'winner_rank', # rank
'winner_rank_points' # rank points
]]
# rename columns
newcols = {'winner_id':'pid',
'tourney_date':'date',
'tourney_id':'tid',
'match_num':'mid',
'ngames':'ngames',
'key':'key',
'w_acpg':'acpg',
'w_dfpg':'dfpg',
'w_tpw':'tpw',
'w_wrp':'wrp',
'w_wsp':'wsp',
'w_wss':'wss',
'w_wfs':'wfs',
'w_fsp':'fsp',
'w_ssp':'ssp',
'w_bps':'bps',
'retired':'retired',
'walkover':'walkover',
'surface':'surface',
'winner_age':'age',
'winner_ht':'ht',
'winner_rank':'rank',
'winner_rank_points':'rank_points'
}
w_records = w_records.rename(columns = newcols)
# record that the outcome was a victory for these players
w_records['outcome'] = np.ones(len(w_records))
# extract loser stats
l_records = matches_hist[['loser_id',
'tourney_date',
'tourney_id',
'match_num',
'ngames',
'key',
'l_acpg', # avg. no of aces per game
'l_dfpg', # avg no. of double faults per game
'l_tpw', # total points won
'l_wrp', # wining return percent
'l_wsp', # winning service percent
'l_wss', # winning second serve percent
'l_wfs', # winning first serve percent
'l_fsp', # percent of successful first serves
'l_ssp', # percent of successful second serves
'l_bps', # percent of breakpoints saved
'retired',# 1 if loser retired prematurely
'walkover',# 1 if loser didn't show up
'surface', # 'Hard', 'Clay', or 'Grass'
'loser_age', # age
'loser_ht', # height
'loser_rank', # rank
'loser_rank_points' # rank points
]]
# rename columns
newcols = {'loser_id':'pid',
'tourney_date':'date',
'tourney_id':'tid',
'match_num':'mid',
'ngames':'ngames',
'key':'key',
'l_acpg':'acpg',
'l_dfpg':'dfpg',
'l_tpw':'tpw',
'l_wrp':'wrp',
'l_wsp':'wsp',
'l_wss':'wss',
'l_wfs':'wfs',
'l_fsp':'fsp',
'l_ssp':'ssp',
'l_bps':'bps',
'retired':'retired',
'walkover':'walkover',
'surface':'surface',
'loser_age':'age',
'loser_ht':'ht',
'loser_rank':'rank',
'loser_rank_points':'rank_points'
}
l_records = l_records.rename(columns = newcols)
# record outcome as a loss
l_records['outcome'] = np.zeros(len(w_records))
# fuse all the data into one dataframe
all_records = pd.concat([w_records,l_records]).reset_index().sort_values(['key']).replace(np.inf,np.nan)
Explanation: III. Calculate features
Each feature generally requires some calculation. We'll use the bigger data set matches_hist to drive the calculations, and store the results in a data frame called features, of the same size as data.
Extract winner and loser data separately and concatenate into dataframe all_records with uniform column names.
End of explanation
grouped = all_records.groupby(['pid','surface'])
t=grouped['outcome'].mean()
surf_wt = t.unstack(level=-1).corr()
surf_wt
Explanation: Calculate surface weighting matrix. Note that the resulting values are quite different from Sipko's.
End of explanation
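# A quick look at how this matrix is used further down: surf_wt.loc[s1, s2] acts as a
# discount on results from surface s1 when predicting a match on surface s2 (see the
# s_wt factor in get_dynamic_features below). The labels 'Clay' and 'Hard' are assumed
# to be present in the data; any other pairing works the same way.
print(surf_wt.loc['Clay', 'Hard'])   # cross-surface weight
print(surf_wt.loc['Hard', 'Hard'])   # same-surface weight is 1.0 by construction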
def get_static_features(data):
    """
    Description: returns differences of those features that don't depend on match histories
    (Rank, Rankpoints, Height, Age, Hand).
    Input: dataframe with all the match data for which features are to be calculated
    Output: another dataframe of the same length with one column per feature
    """
# boolean, 1 means (winner,loser) are (player1,player2), 0 means the reverse
outcome = data['outcome']
# features dataframe should include merge identifiers
features = data[['tourney_id', 'match_num','tourney_date','key']].copy()
# rank (normalize)
rank=(data.loser_rank-data.winner_rank)*(-1)**outcome
features.insert(0,'rank',rank/rank.std())
# rank points (normalize)
rankpts = (data.loser_rank_points-data.winner_rank_points)*(-1)**outcome
features.insert(0,'rankpts',rankpts/rankpts.std())
# height (normalize)
height = (data.loser_ht-data.winner_ht)*(-1)**outcome
features.insert(0,'height',height/height.std())
# age (normalize)
    age = (data.loser_age-data.winner_age)*(-1)**outcome
    features.insert(0,'age',age/age.std())
# hand (1 for right, 0 for left)
hand = ((data.loser_hand=='R')*1-(data.winner_hand=='R')*1)*(-1)**outcome
hand.iloc[np.where((data['winner_hand']=='U')|\
(data['loser_hand']=='U'))[0]]=np.nan
features.insert(0,'hand',hand)
return features
Explanation: Function to calculate static features. (Specifically, calculate normalized rank, rankpts, age, height, hand features for every match in the dataset. Operates on the data as a whole, rather than by line.)
End of explanation
def get_dynamic_features(x):
    """
    Input: a row of the dataframe. Needs to have the following fields:
        pid (player id number)
        tid (tournament id number)
        mid (match id number)
        date (match date)
        surface (match surface)
    Output: a pandas Series of time- and surface-discounted averages of the player's past match stats
    """
# extract identifiers and date from input row
pid = x['pid'] # player id
tid = x['tid'] # tourney id
mid = x['mid'] # match id
date = x['date']
surface = x['surface']
# extract all historical records for this player, from before this match
records = all_records.loc[(all_records.pid==pid) & (all_records.date <= date) &\
((all_records.tid != tid) | (all_records.mid != mid)),:].copy()
# get time discount factor
p = 0.8
t = (date - records.date).apply(lambda x: x.days/365)
t_wt = p**t
t_wt.loc[t_wt>p]=p
# get surface discount factor
s_wt = records.surface.apply(lambda x: surf_wt.loc[x,surface])
# get time and court weighted averages of serve and performance stats
t = records[['dfpg','acpg','tpw','wrp','wsp',\
'wss','wfs','fsp','ssp','bps']].mul(t_wt*s_wt,axis=0).sum(axis=0)/\
records[['dfpg','acpg','tpw','wrp','wsp',\
'wss','wfs','fsp','ssp','bps']].notnull().mul(t_wt*s_wt,axis=0).sum(axis=0)
if len(records)==0:
t['complete']=np.nan
t['retired']=np.nan
return t
# get player completeness
t['complete'] = t['wsp']*t['wrp']
# get player serveadvantage
t['serveadv'] = t['wsp']+t['wrp']
# get player "return from retirement" status
t['retired'] = records.loc[records.date==records.date.min(),'retired'].values[0]
# return a series
return t
def dynamic_feature_wrapper(x):
calls "get_dynamic_features" to extract dynamic features for each player
pids = x[['lid','wid']]
y = x.copy()
# get Player1 info
y['pid'] = pids[y['outcome']]
P1_features = get_dynamic_features(y)
# get Player0 info
y['pid'] = pids[1-y['outcome']]
P2_features = get_dynamic_features(y)
# features are differences
features = P1_features - P2_features
    # compute service advantage (this equals the difference already taken above, since
    # get_dynamic_features returns wsp + wrp per player, whose difference works out to
    # (wsp_1 - wrp_2) - (wsp_2 - wrp_1))
    features['serveadv'] = P1_features['serveadv'] - P2_features['serveadv']
    return features
data['outcome']=np.random.choice([0,1],size=len(features))
s_features=get_static_features(data)
s_features
x=data[['tourney_id','match_num','tourney_date','key','winner_id','loser_id','surface','outcome']].copy()
x.rename(columns={'tourney_id':'tid','match_num':'mid','tourney_date':'date',\
'winner_id':'wid','loser_id':'lid'},inplace=True)
x.iloc[0:5,:].apply(dynamic_feature_wrapper,axis=1)
#x.iloc[0:5,:]
y = matches_hist.iloc[15000]
dynamic_feature_wrapper(y)
x = all_records.iloc[13002]
records = get_dynamic_features(x)
records
Explanation: Get dynamic features (i.e. all features that require some time averaging.)
End of explanation
# initialize dataframes to hold features for players 1 and 0
P1 = pd.DataFrame(columns=['DATE','TID','MID','PID','HAND','HT',\
'AGE','RANKPTS','RANK','ACE','DF','SVPT',\
'FSTIN','FSTWON','SNDWON','BPSAVED','BPFACED'])
P0 = pd.DataFrame(columns=['DATE','TID','MID','PID','HAND','HT',
'AGE','RANKPTS','RANK','ACE','DF','SVPT',\
'FSTIN','FSTWON','SNDWON','BPSAVED','BPFACED'])
# define a function that returns winner info if RES=1, otherwise loser info
def assign_player_1(x):
winner = pd.Series({'DATE':x['tourney_date'],\
'TID':x['tourney_id'],\
'MID':x['match_num'],\
'PID':x['winner_id'],\
'HAND':x['winner_hand'],\
'HT':x['winner_ht'],\
'AGE':x['winner_age'],\
'RANKPTS':x['winner_rank_points'],\
'RANK':x['winner_rank'],\
'ACE':x['w_ace'],\
'DF':x['w_df'],\
'SVPT':x['w_svpt'],\
'FSTIN':x['w_1stIn'],\
'FSTWON':x['w_1stWon'],\
'SNDWON':x['w_2ndWon'],\
'BPSAVED':x['w_bpSaved'],\
'BPFACED':x['w_bpFaced']})
loser = pd.Series({'DATE':x['tourney_date'],\
'TID':x['tourney_id'],\
'MID':x['match_num'],\
'PID':x['loser_id'],\
'HAND':x['loser_hand'],\
'HT':x['loser_ht'],\
'AGE':x['loser_age'],\
'RANKPTS':x['loser_rank_points'],\
'RANK':x['loser_rank'],\
'ACE':x['l_ace'],\
'DF':x['l_df'],\
'SVPT':x['l_svpt'],\
'FSTIN':x['l_1stIn'],\
'FSTWON':x['l_1stWon'],\
'SNDWON':x['l_2ndWon'],\
'BPSAVED':x['l_bpSaved'],\
'BPFACED':x['l_bpFaced']})
if x['RES']==1:
return winner
else:
return loser
# mutatis mutandis for player 0. (Note: no need to rewrite this function if I can figure
# out how to assign two outputs within an "apply" call.)
def assign_player_0(x):
winner = pd.Series({'DATE':x['tourney_date'],\
'TID':x['tourney_id'],\
'MID':x['match_num'],\
'PID':x['winner_id'],\
'HAND':x['winner_hand'],\
'HT':x['winner_ht'],\
'AGE':x['winner_age'],\
'RANKPTS':x['winner_rank_points'],\
'RANK':x['winner_rank'],\
'ACE':x['w_ace'],\
'DF':x['w_df'],\
'SVPT':x['w_svpt'],\
'FSTIN':x['w_1stIn'],\
'FSTWON':x['w_1stWon'],\
'SNDWON':x['w_2ndWon'],\
'BPSAVED':x['w_bpSaved'],\
'BPFACED':x['w_bpFaced']})
loser = pd.Series({'DATE':x['tourney_date'],\
'TID':x['tourney_id'],\
'MID':x['match_num'],\
'PID':x['loser_id'],\
'HAND':x['loser_hand'],\
'HT':x['loser_ht'],\
'AGE':x['loser_age'],\
'RANKPTS':x['loser_rank_points'],\
'RANK':x['loser_rank'],\
'ACE':x['l_ace'],\
'DF':x['l_df'],\
'SVPT':x['l_svpt'],\
'FSTIN':x['l_1stIn'],\
'FSTWON':x['l_1stWon'],\
'SNDWON':x['l_2ndWon'],\
'BPSAVED':x['l_bpSaved'],\
'BPFACED':x['l_bpFaced']})
if x['RES']==1:
return loser
else:
return winner
matches_hist.insert(len(matches_hist.columns),'RES',features['RES'].values)
P1=matches_hist.apply(assign_player_1,axis=1)
P0=matches_hist.apply(assign_player_0,axis=1)
Explanation: It will help with indexing if we isolate all relevant features for players 1 and 0 into their own dataframes. This is what the following code does.
End of explanation
features.insert(len(features.columns), 'RANKPTS', P1['RANKPTS']-P0['RANKPTS'])
features.insert(len(features.columns), 'RANK', P1['RANK']-P0['RANK'])
features['RANKPTS'] = features['RANKPTS']/features['RANKPTS'].std()
features['RANK'] = features['RANK']/features['RANK'].std()
# define figure and axes
fig = plt.figure(figsize=(15,5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
ax0.hist(features.RANK.dropna())
ax0.set_title('Diff. in rank')
ax1.hist(features.RANKPTS.dropna())
ax1.set_title('Diff in rank pts')
Explanation: Features I and II: differences of ranks and rank points (RPTS)
We'll scale the rank point differences by the standard deviation.
End of explanation
P1.insert(len(P1.columns),'FSWPCT',P1['FSTWON']/P1['SVPT'])
P0.insert(len(P0.columns),'FSWPCT',P0['FSTWON']/P0['SVPT'])
P1_grouped = P1.groupby('PID')
P0_grouped = P0.groupby('PID')
def extract_features(group):
mean_fswpct = group['FSWPCT'].mean()
size = len(group)
return pd.Series({'mean_fswpct':mean_fswpct,'size':size})
t1=P1_grouped.apply(extract_features).reset_index()
t0=P0_grouped.apply(extract_features).reset_index()
t2 = pd.merge(t1,t0,how='outer',on='PID')
t2 = t2.fillna(0)
t2['FSWPCT_HIST'] = (t2['mean_fswpct_x']*t2['size_x'] +\
t2['mean_fswpct_y']*t2['size_y'])/(t2['size_x']+t2['size_y'])
P1=pd.merge(P1,t2[['PID','FSWPCT_HIST']],how='inner',on='PID')
P0=pd.merge(P0,t2[['PID','FSWPCT_HIST']],how='inner',on='PID')
features['FSWPCT']=P1['FSWPCT']-P0['FSWPCT']
plt.hist(features.FSWPCT.dropna())
plt.title('Diff. in first serve winning percentages')
Explanation: Feature III: differences of first serve winning percentage
End of explanation
features['HT'] = P1['HT']-P0['HT']
features['HT'] = features['HT']/features['HT'].std()
plt.hist(features.HT.dropna())
plt.title('Difference in height')
Explanation: Feature 4: Height differences
End of explanation
features['AGE'] = P1['AGE']-P0['AGE']
features['AGE'] = features['AGE']/features['AGE'].std()
plt.hist(features.AGE.dropna())
plt.title('Difference in age')
Explanation: Feature 5: Age differences
End of explanation
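# (The two merges below mirror the FSWPCT feature for second serves; they assume that
#  t2 has also been given an 'SSWPCT_HIST' column, computed the same way as
#  'FSWPCT_HIST' above but from second-serve points won.)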
P1=pd.merge(P1,t2[['PID','SSWPCT_HIST']],how='inner',on='PID')
P0=pd.merge(P0,t2[['PID','SSWPCT_HIST']],how='inner',on='PID')
Explanation: Future Work
Proposed features:
percent of winning service returns (derivable from existing data)
percent of winning tie-breakers
percent of upsets (losing when higher ranked, winning when lower ranked)
percent of double faults per game
percent aces per game
percent head-to-head victories (against the same player)
advantage when serving
time since injury
Parameters to solve for:
Time discount factor over historical averages
Match weighting factor. Issues include:
-head-to-heads
-common opponents
-court surface
Take-aways:
rank is not the whole picture
-Ex: David Nalbandian has gone 8-11 against Federer, in spite of being lower ranked
the data is messy
-incomplete matches
-missing data
-incorrect data
informative features need to be constructed
-inferring injury
-head-to-head history
End of explanation |
10,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
The 'l3' parameter describes how much third light exists in a given passband. Since this is passband dependent and only used for flux measurements - it does not yet exist for a new empty Bundle.
Step3: So let's add a LC dataset
Step4: We now see that the LC dataset created 'l3' parameters for the new dataset.
Step5: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
Step6: As expected, adding 5 W/m^3 of third light simply shifts the light curve up by that exact same amount.
Step7: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: "Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.filter(qualifier='l3')
Explanation: Relevant Parameters
The 'l3' parameter describes how much third light exists in a given passband. Since this is passband dependent and only used for flux measurements - it does not yet exist for a new empty Bundle.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: So let's add a LC dataset
End of explanation
b.filter(qualifier='l3')
print b['l3@lc01']
Explanation: We now see that the LC dataset created 'l3' parameters for the new dataset.
End of explanation
b.run_compute(irrad_method='none', model='no_third_light')
b['l3@lc01'] = 5
b.run_compute(irrad_method='none', model='with_third_light')
Explanation: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
End of explanation
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
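# A quick numerical check (sketch): the synthetic fluxes of the two models should differ
# by exactly the l3 value set above. The 'fluxes' qualifier for the lc01 model output is
# assumed here.
fluxes_no_l3 = b.get_value(qualifier='fluxes', dataset='lc01', model='no_third_light')
fluxes_l3 = b.get_value(qualifier='fluxes', dataset='lc01', model='with_third_light')
print "mean flux offset: ", np.mean(fluxes_l3 - fluxes_no_l3)  # expect ~5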
Explanation: As expected, adding 5 W/m^3 of third light simply shifts the light curve up by that exact same amount.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b['l3@lc01'] = 0.0
b.run_compute(irrad_method='none', model='no_third_light')
b['l3@lc01'] = 5
b.run_compute(irrad_method='none', model='with_third_light')
print "no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light'))
print "with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light'))
print "no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light'))
print "with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light'))
Explanation: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our models again and look at the values of the intensities in the mesh.
End of explanation |
10,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 3
Step2: Problem 1
You start on the open square (.) in the top-left corner and need to reach the bottom (below the bottom-most row on your map).
The toboggan can only follow a few specific slopes (you opted for a cheaper model that prefers rational numbers); start by counting all the trees you would encounter for the slope right 3, down 1
open squares (.) and trees (#)
From your starting position at the top-left, check the position that is right 3 and down 1. Then, check the position that is right 3 and down 1 from there, and so on until you go past the bottom of the map.
Step3: Problem 2
Right 1, down 1.
Right 3, down 1. (This is the slope you already checked.)
Right 5, down 1.
Right 7, down 1.
Right 1, down 2.
What do you get if you multiply together the number of trees encountered on each of the listed slopes? | Python Code:
input_f = './input.txt'
Explanation: Day 3
End of explanation
def find_trees(input_f, move_right, move_down):
    """Find the trees in the path."""
trees = 0
pointer = 0
number_of_columns = 0
with open(input_f, 'r') as fd:
for row, line in enumerate(fd, 0):
line = line.strip()
if number_of_columns:
if row%move_down == 0:
# Position
pointer += move_right
if pointer >= number_of_columns:
pointer = pointer - number_of_columns
# Count trees
if line[pointer] == '#':
trees += 1
else:
number_of_columns = len(line)
return trees
find_trees(input_f, 3, 1)
Explanation: Problem 1
You start on the open square (.) in the top-left corner and need to reach the bottom (below the bottom-most row on your map).
The toboggan can only follow a few specific slopes (you opted for a cheaper model that prefers rational numbers); start by counting all the trees you would encounter for the slope right 3, down 1
open squares (.) and trees (#)
From your starting position at the top-left, check the position that is right 3 and down 1. Then, check the position that is right 3 and down 1 from there, and so on until you go past the bottom of the map.
End of explanation
find_trees(input_f, 1, 1) * find_trees(input_f, 3, 1) * find_trees(input_f, 5, 1) * find_trees(input_f, 7, 1) * find_trees(input_f, 1, 2)
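# An equivalent way to get the same product, looping over the listed slopes
# (functools.reduce is needed on Python 3).
from functools import reduce
slopes = [(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)]
reduce(lambda acc, slope: acc * find_trees(input_f, slope[0], slope[1]), slopes, 1)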
Explanation: Problem 2
Right 1, down 1.
Right 3, down 1. (This is the slope you already checked.)
Right 5, down 1.
Right 7, down 1.
Right 1, down 2.
What do you get if you multiply together the number of trees encountered on each of the listed slopes?
End of explanation |
10,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Databaker is an Open Source Python library for converting semi-structured spreadsheets into computer-friendly datatables. The resulting data can be stored into Pandas data tables or the ONS-specific WDA format.
The system is embedded into the interactive programming environment called Jupyter for fast prototyping and development, and depends for its spreadsheet processing on messytables and xypath.
Install it with the command
Step1: Conversion segments
Databaker gives you tools to help you write the code to navigate around the spreadsheet and select the cells and their correspondences.
When you are done your code will look like the following.
You can click on the OBS (observation) cells to see how they connect to the headings.
Step2: Output in pandas
Pandas data tables provides an enormous scope for further processing and cleaning of the data.
To make full use of its power you should become familiar with its Time series functionality, which allows you to plot, resample and align multiple data sources at once.
Step3: Output in WDA Observation File
The WDA system in the ONS has been the primary use for this library. If you need output into WDA the result would look like the following | Python Code:
from databaker.framework import *
tab = loadxlstabs("example1.xls", "beatles", verbose=False)[0]
savepreviewhtml(tab, verbose=False)
Explanation: Introduction
Databaker is an Open Source Python library for converting semi-structured spreadsheets into computer-friendly datatables. The resulting data can be stored into Pandas data tables or the ONS-specific WDA format.
The system is embedded into the interactive programming environment called Jupyter for fast prototyping and development, and depends for its spreadsheet processing on messytables and xypath.
Install it with the command:
pip3 install databaker
Your main interaction with databaker is through the Jupyter notebook interface. There are many tutorials to show you how to master this system elsewhere on-line.
Once you have a working program that converts a particular spreadsheet style into the output you want, there are ways to rerun the notebook on other spreadsheets externally or from the command line.
Example
Although Databaker can handle spreadsheets of any size, here is a tiny example from the tutorials to illustrate what it does.
End of explanation
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
observations = tab.excel_ref('B4').expand(DOWN).expand(RIGHT).is_not_blank().is_not_whitespace()
c1 = ConversionSegment(observations, dimensions)
savepreviewhtml(c1)
Explanation: Conversion segments
Databaker gives you tools to help you write the code to navigate around the spreadsheet and select the cells and their correspondences.
When you are done your code will look like the following.
You can click on the OBS (observation) cells to see how they connect to the headings.
End of explanation
c1.topandas()
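# A small post-processing sketch with pandas. The column names below (OBS plus the
# dimension names defined above) are an assumption about what topandas() returns for
# this conversion segment.
df = c1.topandas()
df.groupby("Name")["OBS"].sum()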
Explanation: Output in pandas
Pandas data tables provides an enormous scope for further processing and cleaning of the data.
To make full use of its power you should become familiar with its Time series functionality, which allows you to plot, resample and align multiple data sources at once.
End of explanation
print(writetechnicalCSV(None, c1))
Explanation: Output in WDA Observation File
The WDA system in the ONS has been the primary use for this library. If you need output into WDA the result would look like the following:
End of explanation |
10,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Utilities
The file utils.py contains some useful stuff which is used throughout all notebooks in this site.
First of all it includes the initialization of all constants and variables for the use of MNIST database
Step1: Then you find a function to plot mnist images in a row of subplots
Step2: and a function to reshape raw mnist data as squared matrices
Step3: Finally you have some general use functions.
A list of transfer functions
Step4: A function to build a dataset of points based on two lists (belonging/not belonging) centroids.
Step5: <br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
Next cell is just for styling | Python Code:
# import the mnist class
from mnist import MNIST
# init with the 'data' dir
mndata = MNIST('./data')
# Load data
mndata.load_training()
mndata.load_testing()
# The number of pixels per side of all images
img_side = 28
# Each input is a raw vector.
# The number of units of the network
# corresponds to the number of input elements
n_mnist_pixels = img_side*img_side
Explanation: Utilities
The file utils.py contains some useful stuff which is used throughout all notebooks in this site.
First of all it includes the initialization of all constants and variables for the use of MNIST database:
End of explanation
# Set the maximum number of plots to be printed in a row
windows = 8
# A custom plot that uses imshow to draw a matrix
# x : array The matrix to be plotted
# fig : figure object Figure device to use
# window : int The current subplot position
# windows: int Number of subplot
def plot_img(x, fig, window, windows = windows) :
ax = fig.add_subplot(1, windows, window)
ax.imshow(x, interpolation = 'none',
aspect = 'auto', cmap=cm.Greys)
axis('off')
fig.canvas.draw()
Explanation: Then you find a function to plot mnist images in a row of subplots:
End of explanation
#-------------------------------------------------------------
# transform a raw input in an image matrix
# x: array the raw input vector
# return array a squared matrix
def to_mat(x) :
return x.reshape( img_side, img_side )
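#-------------------------------------------------------------
# Minimal usage sketch (runs only when utils.py is executed directly, not on import).
# It assumes the pylab namespace (figure, array, show, ...) used by plot_img above and
# the train_images attribute exposed by the python-mnist package after load_training().
if __name__ == "__main__" :
    fig = figure()
    plot_img(to_mat(array(mndata.train_images[0])), fig, 1)
    show()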
Explanation: and a function to reshape raw mnist data as squared matrices:
End of explanation
#-------------------------------------------------------------
# Add a bias unit to the input
def biased(x) :
return hstack([1,x])
#-------------------------------------------------------------
# step function
# return: 1 if x > 0
# 0 otherwise
def step(x) :
return 1.0*(x>0)
#-------------------------------------------------------------
# sigmoid function
# t float temperature
def sigmfun(x, t = 1.0) :
return 1.0/(1.0 + exp(-x/t))
# sigmoid derivative
def sigmder(y) :
return y*(1-y)
#-------------------------------------------------------------
# hyperbolic tangent function
# th float threshold
# alpha float amplitude
def tanhfun(x, th = 0.0, alpha = 1.0) :
    return tanh(alpha*(x - th))
# hyperbolic tangent derivative
def tanhder(y) :
return 1-y**2
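#-------------------------------------------------------------
# Quick sanity values for the transfer functions (illustrative only):
# step(0.5) -> 1.0, sigmfun(0.0) -> 0.5, tanhfun(0.0) -> 0.0
# and for the derivatives, expressed in terms of the outputs:
# sigmder(sigmfun(0.0)) -> 0.25, tanhder(tanhfun(0.0)) -> 1.0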
Explanation: Finally you have some general use functions.
A list of transfer functions:
End of explanation
#-------------------------------------------------------------
# Create an array with 2-dimensional patterns belonging to two categories
# n_patterns : int Number of patterns
# std_deviation : float Standard deviation of noise
# centroids1 : 2-elements-vectors Points from which the
# list patterns belonging to
# the class are generated
# centroids2 : 2-elements-vectors Points from which the
# list patterns not belonging
# to the class are generated
# returns : array Each row contains a pattern as
# its first two elements and
# the group (belonging/not
# belonging/) as its third element
def build_dataset( n_patterns = 100,
std_deviation = 0.2,
centroids1 = [ [-1.2, 1.8], [-1.8, 1.2] ],
centroids2 = [ [-0.2, 0.4], [-0.8, -0.2 ] ] ) :
# Decide to which group patterns are from.
# First half belongs to the class
categories = arange(n_patterns)/(n_patterns/2)
# Each row of this array will contain a 2-element-wide
# input pattern plus an integer defining to which category
# the pattern belongs
data = zeros([n_patterns,3])
# Iterate the patterns to generate
for t in xrange(n_patterns) :
pattern = zeros(2)
if categories[t] > 0 :
index = int( rand()*len(centroids1) )
pattern = array(centroids1[index])
else :
index = int( rand()*len(centroids2) )
pattern = array(centroids2[index])
# Add noise to each element of the centroid
pattern += std_deviation*randn(2)
# Fill up data
data[t,:] = hstack([pattern, categories[t]])
return data
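#-------------------------------------------------------------
# Usage sketch (runs only when utils.py is executed directly): build a small dataset
# with the default centroids and scatter it, coloured by category. Assumes the pylab
# namespace (scatter, show, ...) as above.
if __name__ == "__main__" :
    demo_data = build_dataset(n_patterns = 100)
    scatter(demo_data[:, 0], demo_data[:, 1], c = demo_data[:, 2])
    show()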
Explanation: A function to build a dataset of points based on two lists (belonging/not belonging) centroids.
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../style/ipybn.css", "r").read()
return HTML(styles)
css_styling()
Explanation: <br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
Next cell is just for styling
End of explanation |
10,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coastline Evolution Model
The Coastline Evolution Model (CEM) addresses predominately sandy, wave-dominated coastlines on time-scales ranging from years to millenia and on spatial scales ranging from kilometers to hundreds of kilometers. Shoreline evolution results from gradients in wave-driven alongshore sediment transport.
At its most basic level, the model follows the standard 'one-line' modeling approach, where the cross-shore dimension is collapsed into a single data point. However, the model allows the planview shoreline to take on arbitrary local orientations, and even fold back upon itself, as complex shapes such as capes and spits form under some wave climates (distributions of wave influences from different approach angles). So the model works on a 2D grid.
The model has been used to represent varying geology underlying a sandy coastline and shoreface in a simplified manner and enables the simulation of coastline evolution when sediment supply from an eroding shoreface may be constrained. CEM also supports the simulation of human manipulations to coastline evolution through beach nourishment or hard structures.
CEM authors & developers
Step1: Import the Cem class. In Python, a model with a Basic Model Interface (BMI) will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Some things we can do with our model are to get help, to get the names of the input variables or output variables.
Step3: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one).
Step4: First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
Step5: Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.
Step6: Assignment 1
Let's think about the wave conditions that are the input to this CEM model run. For both assignment 1 and 2 it will help to look theory up in the paper by Ashton & Murray 2001, and/or Ashton et al, 2006.
How do wave height and wave period determine sediment transport?
The relationship between sediment transport and wave height and period is non-linear. What are the implications of this non-linearity for the impact of lots of small ocean storms versus a few extreme storms with much higher wave height?
Step7: Assignment 2
The other important part of the wave conditions that is input to CEM model is under what angle the waves approach the shore. It will help to read the paper by Ashton & Murray 2001, and the longer version by Ashton et al, 2006.
Explain why incoming wave angle is an important control?
Step8: The CEM model operates on a grid, consisting of a number of rows and colums with values.
The main output variable for this model is water depth, or bathymetry. In this case, the CSDMS Standard Name is much shorter
Step9: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid. This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars, or single values.
Step10: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include
Step11: Allocate memory for the water depth grid and get the current values from cem.
Step12: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
Step13: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain to more than 20 m water depth.
Step14: Right now we have waves coming in but no sediment entering the ocean. To add a sediment source and specify its discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.
Step15: The CSDMS Standard Name for this variable is
Step16: Assignment 3
Here, we are introducing a river mouth of one gridcell of 200 by 200m. And we just have specified a bedload flux of 750 kg/s. Is this a realistic incoming value?
How much water discharge and slope would you possibly need to transport a bedload flux of that magnitude?
Step17: Assignment 4
The bedload measurements were a combination of very different methods, and taken at different locations (although nearby). The data is quite scattered. But if you would fit a linear regression line through this data,
you would find that the river discharge of the Rhine can be related to its bedload transport as
Step18: Set the bedload flux and run the model.
Step19: Assignment 5
Describe what the CEM model has simulated in 3000 timesteps. How far has this wave influenced delta prograded?
Recall the R-factor for fluvial dominance (Nienhuis 2015). What would the R-factor be for this simulated system? (smaller then 1, larger then 1)? Motivate.
Step20: Assignment 6
Let's add another sediment source with a different flux and update the model. remember that the Basic Model Interface allows you to update values and then continue a simulation
Step21: Here we shut off the sediment supply completely.
Step22: Assignment 7
Create a new CEM run (remember to create a new cem instance) with a more subdued river influx and higher waves.
Step23: Assignment 8
Step24: BONUS Assignment 9 - for graduate students
Create a new CEM run (remember to create a new cem instance) that is all similar to your first simulation.
In this experiment we will use a different incoming wave angle, and look at its effect without a river input first, 1000 timesteps and then with a river input for another 2000 timsteps.
Step25: BONUS Assignment 9 - for graduate students
Use the same CEM run that you have just started.
Keep the incoming wave angle you had specified, and now run the rest of the simulation with a new river input for another 2000 timsteps. 'Place' the rivermouth out of center in the grid (although not too close to the grid boundary, that can give instability problems). | Python Code:
import numpy as np
import matplotlib.pyplot as plt
#Some magic that allows us to view images within the notebook.
%matplotlib inline
Explanation: Coastline Evolution Model
The Coastline Evolution Model (CEM) addresses predominately sandy, wave-dominated coastlines on time-scales ranging from years to millennia and on spatial scales ranging from kilometers to hundreds of kilometers. Shoreline evolution results from gradients in wave-driven alongshore sediment transport.
At its most basic level, the model follows the standard 'one-line' modeling approach, where the cross-shore dimension is collapsed into a single data point. However, the model allows the planview shoreline to take on arbitrary local orientations, and even fold back upon itself, as complex shapes such as capes and spits form under some wave climates (distributions of wave influences from different approach angles). So the model works on a 2D grid.
The model has been used to represent varying geology underlying a sandy coastline and shoreface in a simplified manner and enables the simulation of coastline evolution when sediment supply from an eroding shoreface may be constrained. CEM also supports the simulation of human manipulations to coastline evolution through beach nourishment or hard structures.
CEM authors & developers: Andrew Ashton, Brad Murray, Jordan Slot, Jaap Nienhuis and others.
This version is adapted from a CSDMS teaching notebook, listed below.
It has been created by Irina Overeem, October 2019 for a Sedimentary Modeling course.
Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/cem.ipynb
Install command: $ conda install notebook pymt_cem
Download local copy of notebook:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/cem.ipynb
Key References
Ashton, A.D., Murray, B., Arnault, O. 2001. Formation of coastline features by large-scale instabilities induced by high-angle waves, Nature 414.
Ashton, A. D., and A. B. Murray (2006), High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes, J. Geophys. Res., 111, F04011, doi:10.1029/2005JF000422.
Links
CEM source code: Look at the files that have deltas in their name.
CEM description on CSDMS: Detailed information on the CEM model.
Interacting with the Coastline Evolution Model BMI using Python
End of explanation
import pymt.models
cem = pymt.models.Cem()
Explanation: Import the Cem class. In Python, a model with a Basic Model Interface (BMI) will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
End of explanation
help(cem)
cem.input_var_names
cem.output_var_names
Explanation: Even though we can't run our waves model yet, we can still get some information about it. Some things we can do with our model are to get help, to get the names of the input variables or output variables.
End of explanation
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print("Data type: %s" % cem.get_var_type(angle_name))
print("Units: %s" % cem.get_var_units(angle_name))
print("Grid id: %d" % cem.get_var_grid(angle_name))
print("Number of elements in grid: %d" % cem.get_grid_number_of_nodes(0))
print("Type of grid: %s" % cem.get_grid_type(0))
Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not on one).
End of explanation
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
Explanation: First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.
End of explanation
cem.set_value("sea_surface_water_wave__height", 1.5)
cem.set_value("sea_surface_water_wave__period", 7.)
cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
Explanation: Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.
End of explanation
# list your answers here
Explanation: Assignment 1
Let's think about the wave conditions that are the input to this CEM model run. For both assignment 1 and 2 it will help to look theory up in the paper by Ashton & Murray 2001, and/or Ashton et al, 2006.
How do wave height and wave period determine sediment transport?
The relationship between sediment transport and wave height and period is non-linear. What are the implications of this non-linearity for the impact of lots of small ocean storms versus a few extreme storms with much higher wave height?
End of explanation
# discuss wave angle here
Explanation: Assignment 2
The other important part of the wave conditions that is input to CEM model is under what angle the waves approach the shore. It will help to read the paper by Ashton & Murray 2001, and the longer version by Ashton et al, 2006.
Explain why incoming wave angle is an important control?
End of explanation
grid_id = cem.get_var_grid('sea_water__depth')
Explanation: The CEM model operates on a grid, consisting of a number of rows and columns with values.
The main output variable for this model is water depth, or bathymetry. In this case, the CSDMS Standard Name is much shorter:
"sea_water__depth"
First we find out which of Cem's grids contains water depth.
End of explanation
grid_type = cem.get_grid_type(grid_id)
grid_rank = cem.get_grid_ndim(grid_id)
print('Type of grid: %s (%dD)' % (grid_type, grid_rank))
Explanation: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid. This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars, or single values.
End of explanation
spacing = np.empty((grid_rank, ), dtype=float)
shape = cem.get_grid_shape(grid_id)
cem.get_grid_spacing(grid_id, out=spacing)
print('The grid has %d rows and %d columns' % (shape[0], shape[1]))
print('The spacing between rows is {:f} m and between columns is {:f} m'.format(spacing[0], spacing[1]))
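# The grid origin can be queried in the same way (sketch; it assumes pymt's
# get_grid_origin fills a preallocated array just like get_grid_spacing above).
origin = np.empty((grid_rank, ), dtype=float)
cem.get_grid_origin(grid_id, out=origin)
print('The origin of the grid is at ({:f} m, {:f} m)'.format(origin[0], origin[1]))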
Explanation: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:
* get_grid_shape
* get_grid_spacing
* get_grid_origin
End of explanation
z = np.empty(shape, dtype=float)
cem.get_value('sea_water__depth', out=z)
Explanation: Allocate memory for the water depth grid and get the current values from cem.
End of explanation
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
Explanation: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.
End of explanation
plot_coast(spacing, z)
Explanation: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain to more than 20 m water depth.
End of explanation
#Allocate memory for the sediment discharge array
# and set the bedload sediment flux at the coastal cell to some value.
qs = np.zeros_like(z)
qs[0, 100] = 750
Explanation: Right now we have waves coming in but no sediment entering the ocean. To add a sediment source and specify its discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.
End of explanation
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
Explanation: The CSDMS Standard Name for this variable is:
"land_surface_water_sediment~bedload__mass_flow_rate"
You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function get_var_units.
End of explanation
# read in the csv file of bedload measurements in the Rhine River, the Netherlands
# these data were collected over different days over a season in 2004, at nearby locations.
# plot how river discharge controls bedload; Q (x-axis) and Qb (y-axis) data.
# label both axes
Explanation: Assignment 3
Here, we are introducing a river mouth of one gridcell of 200 by 200m. And we just have specified a bedload flux of 750 kg/s. Is this a realistic incoming value?
How much water discharge and slope would you possibly need to transport a bedload flux of that magnitude?
End of explanation
# extrapolate this relationship and calculate how much river discharge, Q,
# would be needed to transport the model specification Qb of 1250 kg/s
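# Sketch of the estimate asked for above, using the regression quoted in Assignment 4
# (Qb = 0.0163 * Q, with Qb in kg/s and Q in m3/s -- units assumed from the data):
print("Q needed for Qb = 750 kg/s : {:.0f} m3/s".format(750. / 0.0163))
print("Q needed for Qb = 1250 kg/s: {:.0f} m3/s".format(1250. / 0.0163))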
cem.time_step, cem.time_units, cem.time
Explanation: Assignment 4
The bedload measurements were a combination of very different methods, and taken at different locations (although nearby). The data is quite scattered. But if you would fit a linear regression line through this data,
you would find that the river discharge of the Rhine can be related to its bedload transport as:
Qb=0.0163*Q
End of explanation
for time in range(3000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
cem.time
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
# this code gives you a handle on retrieving the position of the river mouth over time
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_center__x_coordinate", val)
val / 100.
print(val)
Explanation: Set the bedload flux and run the model.
End of explanation
# your run description goes here
Explanation: Assignment 5
Describe what the CEM model has simulated in 3000 timesteps. How far has this wave influenced delta prograded?
Recall the R-factor for fluvial dominance (Nienhuis 2015). What would the R-factor be for this simulated system (smaller than 1, larger than 1)? Motivate.
End of explanation
# introduce a second river here
qs[0, 150] = 1500
for time in range(4000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: Assignment 6
Let's add another sediment source with a different flux and update the model. Remember that the Basic Model Interface allows you to update values and then continue a simulation.
End of explanation
qs.fill(0.)
for time in range(4500):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
Explanation: Here we shut off the sediment supply completely.
End of explanation
import pymt.models
cemLR = pymt.models.Cem()
args = cemLR.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cemLR.initialize(*args)
# Here you will have to change the settings to a different wave climate
cemLR.set_value("sea_surface_water_wave__height", 1.5)
cemLR.set_value("sea_surface_water_wave__period", 7.)
cemLR.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
zLR = np.empty(shape, dtype=float)
cemLR.get_value('sea_water__depth', out=zLR)
# set your smaller river input here
Explanation: Assignment 7
Create a new CEM run (remember to create a new cem instance) with a more subdued river influx and higher waves.
End of explanation
# run your new simulation for a similar time as the other first simulation
for time in range(3000):
cemLR.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qsLR)
cemLR.update_until(time)
cemLR.get_value('sea_water__depth', out=zLR)
# hypothesize how your run output would be different
# plot the sea water depth
# save out this figure
Explanation: Assignment 8
End of explanation
## initialize CEM instance
# set the wave angle
# run for 1000 timesteps
# plot intermediate output
# save out an array of this sea water depth at t=1000
# describe what effect you see. Is it to be expected?
#What is the unique theory in the CEM model that drives this behavior?
Explanation: BONUS Assignment 9 - for graduate students
Create a new CEM run (remember to create a new cem instance) that is all similar to your first simulation.
In this experiment we will use a different incoming wave angle, and look at its effect without a river input first, 1000 timesteps, and then with a river input for another 2000 timesteps.
End of explanation
# your code to introduce new river input goes here
# run an additional 2000 timesteps
# plot
# describe what effect you see. Is it to be expected?
# Is this a fluvial-dominated delta or a wave-dominated delta?
# Is the delta asymmetric?
# save out the array of your final sea water depth
# calculate the deposition and erosion per gridcell between t=1000 and t=3000
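# Minimal sketch for the last bullet (z_t1000 and z_final are hypothetical names for the
# water-depth arrays you saved at t=1000 and at the end of your run):
# dz = z_final - z_t1000    # positive = deepening (erosion), negative = deposition
# plt.imshow(dz, origin='lower', cmap='RdBu'); plt.colorbar()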
Explanation: BONUS Assignment 9 - for graduate students
Use the same CEM run that you have just started.
Keep the incoming wave angle you had specified, and now run the rest of the simulation with a new river input for another 2000 timesteps. 'Place' the river mouth off-center in the grid (although not too close to the grid boundary, as that can cause instability problems).
End of explanation |
10,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
http
Step1: Clases
Lun 28 Dic 2016
Step2: 2. Escritura de archivo
Para abrir un archivo para escritura utilizamos el mismo metodo open pero con un carácter adicional "w" (write).
Python
open(string_con_direccion_al_archivo, "w")
Luego le agregamos con el metodo write.
Python
write(string_a_escribir_en_archivo)
OBS
Step3: 2. Escritura de archivo
OBSERVACION
Los archivos quedan por defecto en el directorio en el cual se ejecuta el archivo python (o de donde se lanzó python).
Python
archivo = open('QUIJOTE.txt', 'w')
es distinto a
Python
archivo = open('mi_carpeta/QUIJOTE.txt', 'w')
y es distinto a
Python
archivo = open('mi_carpeta\QUIJOTE.txt', 'w')
3. Anexar contenido a un archivo
Para abrir un archivo para agregar contenido a un archivo ya existente utilizamos el método open pero con un caracter adicional "a" (append).
open(string_con_direccion_al_archivo, "a")
Luego le agregamos líneas con el método write.
write(string_a_escribir_en_archivo)
OBS
Step4: Ejercicio 1
Step5: Ejercicio 1
Step6: Archivo de ejemplo
En la carpeta data/ existe el archivo alumnos.txt que tiene el siguiente contenido
Step7: Ejemplo
Step8: Ejercicio 2
A partir del archivo alumnos.txt generar 2 archivos
Step9: Ejercicio 2
Step10: Ejercicio de Certamen
Step11: Ejercicio de Certamen
Step12: Ejercicio de Certamen | Python Code:
def digitos_faltantes(numero):
digitos_presentes = set(list(str(numero)))
digitos_todos = set(map(str, range(10)))
digitos_que_faltan = digitos_todos - digitos_presentes
digitos_que_faltan = list(digitos_que_faltan)
digitos_que_faltan.sort()
return "".join(digitos_que_faltan)
def estan_todos_los_digitos(numero):
digitos_presentes = set(list(str(numero)))
digitos_todos = set(map(str, range(10)))
digitos_que_faltan = digitos_todos - digitos_presentes
if len(digitos_que_faltan)==0:
print "No faltan digitos"
else:
print "Faltan {0} digitos".format(len(digitos_que_faltan))
return
mi_numero = int(raw_input("Ingrese un numero: "))
print digitos_faltantes(mi_numero)
estan_todos_los_digitos(mi_numero)
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/usantamaria/iwi131
What content will we learn?
Reading text files
Writing a text file
Appending content to an existing file
Why will we learn this content?
Reading text files
Writing a text file
Appending content to an existing file
Read, Write and Append are the 3 actions needed to work with data files, a very common task in the real world.
Challenge from the previous class
Write a program that takes any number and reports, in order, which digits it is missing.
End of explanation
# Abrir el archivo
archivo = open('data/quijote.txt')
# Leer linea a linea, como si fuera una lista de lineas
for linea in archivo:
print linea[:-1], #.replace("\n","")
#print linea.replace("\n","~")
# Cerrar el archivo
archivo.close()
Explanation: Classes
Mon 28 Dec 2016: Writing and reading files.
Wed 30 Dec 2016: Exam-style exercises.
Mon 04 Jan 2016: Exam-style exercises.
Wed 06 Jan 2016: Activity 5.
Tip: Download the course book, read, learn and practice.
Motivation for Text Files
How could we write a program that takes a file and counts?
* Number of letters
* Number of words
* Number of lines
Example file
The data/ folder contains the file quijote.txt with the following content:
En un lugar de la Mancha
de cuyo nombre no quiero acordarme
no ha mucho tiempo que vivia un hidalgo
de los de lanza en astillero
adarga antigua, rocin flaco y galgo corredor.
We will process this file as an example.
1. Reading a file
To open a file for reading we use the open method
Python
open(string_con_direccion_al_archivo)
or alternatively
Python
open(string_con_direccion_al_archivo, "r")
where the character "r" indicates reading (read).
NOTE: Each line will come with the "\n" character included explicitly.
End of explanation
archivo = open('QUIJOTE.txt', 'w')
archivo.write('En un lugar de La Mancha\n'.upper())
archivo.write('de cuyo nombre\n'.upper())
archivo.write('no quiero acordarme\n'.upper())
archivo.write('no ha mucho tiempo\n'.upper())
archivo.write('que vivia un hidalgo\n'.upper())
archivo.close()
Explanation: 2. Writing a file
To open a file for writing we use the same open method but with an additional character "w" (write).
Python
open(string_con_direccion_al_archivo, "w")
Then we add content with the write method.
Python
write(string_a_escribir_en_archivo)
NOTE: Line breaks must be indicated explicitly with "\n". Otherwise, we will keep writing on the same line.
End of explanation
archivo = open('QUIJOTE.txt', 'a')
archivo.write('de los de lanza en astillero\n'.upper())
archivo.write('adarga antigua, rocin flaco '.upper())
archivo.write('y galgo corredor.\n'.upper())
archivo.close()
Explanation: 2. Writing a file
NOTE
By default, files end up in the directory where the Python script is executed (or from which Python was launched).
Python
archivo = open('QUIJOTE.txt', 'w')
is different from
Python
archivo = open('mi_carpeta/QUIJOTE.txt', 'w')
and is also different from
Python
archivo = open('mi_carpeta\QUIJOTE.txt', 'w')
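A small added sketch (not in the original slides): to avoid worrying about '/' versus '\' by hand, the path can be built with the standard library; os.path.join picks the right separator for the operating system.
```python
import os

ruta = os.path.join('mi_carpeta', 'QUIJOTE.txt')  # 'mi_carpeta/QUIJOTE.txt' on Linux/macOS, 'mi_carpeta\\QUIJOTE.txt' on Windows
archivo = open(ruta, 'w')
archivo.write('EN UN LUGAR DE LA MANCHA\n')
archivo.close()
```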
3. Appending content to a file
To open an existing file and add content to it, we use the open function with the additional character "a" (append).
open(string_con_direccion_al_archivo, "a")
We then add lines with the write method.
write(string_a_escribir_en_archivo)
NOTE: Line breaks must be written explicitly with "\n". Otherwise we will keep writing on the same line.
End of explanation
def contar(path_al_archivo):
archivo = open(path_al_archivo)
n_lineas, n_caracteres, n_palabras = 0, 0, 0
for linea in archivo:
pass
archivo.close()
return n_lineas, n_caracteres, n_palabras
mi_archivo = "data/quijote.txt"
print contar(mi_archivo)
mi_archivo = "data/alumnos.txt"
print contar(mi_archivo)
Explanation: Exercise 1: Motivation
How could we write a program that takes a file and counts:
* Number of letters
* Number of words
* Number of lines
Exercise 1: Motivation: Analysis
Which tasks are needed?
End of explanation
def contar(path_al_archivo):
archivo = open(path_al_archivo)
n_lineas = 0
n_caracteres = 0
n_palabras = 0
for linea in archivo:
# Count the lines
n_lineas += 1
# Count the words
n_palabras += len(linea.split())
# Count the characters
n_caracteres += len(linea)
archivo.close()
return n_lineas, n_caracteres, n_palabras
template = "El archivo {0} tiene {1} lineas, {2} caracteres y {3} palabras"
mi_archivo = "data/quijote.txt"
l,c,p = contar(mi_archivo)
print template.format(mi_archivo, l, c, p)
mi_archivo = "data/alumnos.txt"
l,c,p = contar(mi_archivo)
print template.format(mi_archivo, l, c, p)
Explanation: Exercise 1: Solution
End of explanation
archivo = open('data/alumnos.txt')
for linea in archivo:
print linea
valores = linea.split(':')
print valores
nombres = valores[0:2]
notas = map(int, valores[2:5])
print nombres[0], notas
archivo.close()
Explanation: Example file
In the data/ folder there is a file alumnos.txt with the following contents:
Esteban:Gutierrez:49:18:32
Luisa:Miranda:68:44:99
Jean Paul:Munoz:48:38:81
Gianfranco:Basso:54:54:50
Romina:Smith:100:98:92
We are asked to process this new file.
The difficulty with this file is that the values are separated by ":".
NOTE: Why is it necessary to use the ":" separator?
Reading and writing files with a separator
Processing a file with a separator is as simple as using the appropriate method:
linea.split(":")
where, of course, we need to know the separator in advance.
Writing files with a separator requires having a list of strings and then using the join method appropriately:
":".join(lista_de_strings)
Example: Reading files with a separator
End of explanation
# Generate the data
alumnos = [ ("Esteban","Gutierrez",49,18,32),
("Luisa","Miranda",68,44,99),
("Jean Paul","Munoz",48,38,81),
("Gianfranco","Basso",54,54,50),
("Romina","Smith",100,98,92)]
# Open the file for writing
archivo = open("data/alumnos.txt", 'w')
# Build each line and write it
for alumno in alumnos:
valores = []
for dato in alumno:
valores.append(str(dato))
linea = ':'.join(valores) + '\n'
archivo.write(linea)
# Close the file
archivo.close()
Explanation: Example: Writing files with a separator
End of explanation
# Reading example to be modified
archivo = open('data/alumnos.txt')
for linea in archivo:
valores = linea.strip().split(':')
nombres = valores[0:2]
notas = map(int, valores[2:5])
print nombres[0], notas
archivo.close()
Explanation: Exercise 2
From the file alumnos.txt, generate 2 files:
data/aprobados.txt
Luisa,Miranda,70
Jean Paul,Munoz,56
Romina,Smith,97
data/reprobados.txt
Esteban,Gutierrez,33
Gianfranco,Basso,53
Exercise 2: Analysis
Which tasks are needed? In what order?
End of explanation
archivo = open('data/alumnos.txt')
aprobados = open('data/aprobados.txt',"w")
reprobados = open('data/reprobados.txt',"w")
template = "{0},{1},{2}\n"
for linea in archivo:
valores = linea.strip().split(':')
nombres = valores[0:2]
notas = map(int, valores[2:5])
promedio = sum(notas)/float(len(notas))
promedio_final = int(round(promedio))
nueva_linea = template.format(nombres[0], nombres[1], promedio_final)
if promedio_final>=55:
aprobados.write(nueva_linea)
else:
reprobados.write(nueva_linea)
archivo.close()
aprobados.close()
reprobados.close()
Explanation: Exercise 2: Solution
Which tasks are needed?
* Read the alumnos file
* Parse the names and grades
* Compute the average grade
* Compute the final average (rounding)
* Build the line to write
* Write it to the appropriate file
End of explanation
# Solution
def tiempo_a_tupla(tpo):
return 0
print tiempo_a_tupla('00:00:20,000')
print tiempo_a_tupla('10:20:30,040')
print tiempo_a_tupla('59:59:59,999')
# Solution
def tiempo_a_tupla(tpo):
v = tpo.replace(",",":").split(":")
valores_int = (int(v[0]), int(v[1]), int(v[2]), int(v[3]))
return valores_int
print tiempo_a_tupla('00:00:20,000')
print tiempo_a_tupla('10:20:30,040')
print tiempo_a_tupla('59:59:59,999')
# Solution
def tiempo_a_tupla(tpo):
valores_str = tpo.replace(",",":").split(":")
valores_int = map(int, valores_str)
return tuple(valores_int)
print tiempo_a_tupla('00:00:20,000')
print tiempo_a_tupla('10:20:30,040')
print tiempo_a_tupla('59:59:59,999')
# Solution
def tiempo_a_tupla(tpo):
return tuple(map(int, tpo.replace(",",":").split(":")))
print tiempo_a_tupla('00:00:20,000')
print tiempo_a_tupla('10:20:30,040')
print tiempo_a_tupla('59:59:59,999')
Explanation: Exam (Certamen) exercise: CR, CC, 2012-S2
[ 35 % ] Movie subtitles are plain-text files with a special format
so that any media player can display them while a movie is being played.
Files of this type are named following the pattern nombre_archivo.srt.
The format of this type of file is: the subtitle identifier (such as Sx, where x is the subtitle
number), the start and end times (expressed as hours:minutes:seconds,milliseconds), the subtitle text,
a blank line, and then the format repeats for each subtitle.
Look at the example file los_simpsons.srt.
```
S1
00:00:20,000 --> 00:00:24,400
Te voy a pillar Bart
S2
00:00:24,100 --> 00:00:27,800
Maldito demonio
S3
00:00:29,100 --> 00:00:30,651
No Homero ... grrr
```
Subtitles should not overlap each other, that is, the end time of one subtitle should not
be greater than the start time of the next one. However, this can happen.
Also assume that the file contains many subtitles.
Exam (Certamen) exercise: CR, CC, 2012-S2
(a) Write the function tiempo_a_tupla(tpo) that receives a time as text and returns it as the corresponding tuple.
```Python
tiempo_a_tupla('00:00:20,000')
(0,0,20,0)
```
End of explanation
def solapados(nombre_archivo):
archivo = open(nombre_archivo)
lista_solapados = []
for linea in archivo:
pass
archivo.close()
return lista_solapados
solapados('data/los_simpsons.srt')
solapados('data/The.Walking.DeadS06E01.srt')
def solapados(nombre_archivo):
archivo = open(nombre_archivo)
lista_solapados = []
i=0
ti_actual = (0,0,0,-1)
tf_actual = (0,0,0,-1)
S_actual = ""
for linea in archivo:
if i%4==0:
S_anterior = S_actual
S_actual = linea.replace("\n","").replace("\r","")
if i%4==1:
ti_anterior, tf_anterior = ti_actual, tf_actual
si_actual, sf_actual = linea.split("-->")
ti_actual = tiempo_a_tupla(si_actual)
tf_actual = tiempo_a_tupla(sf_actual)
if ti_actual<tf_anterior:
lista_solapados.append((S_anterior, S_actual))
# Increment i
i += 1
# Close the file and return the result
archivo.close()
return lista_solapados
print solapados('data/los_simpsons.srt')
print solapados('data/The.Walking.DeadS06E01.srt')
Explanation: Exam (Certamen) exercise: CR, CC, 2012-S2
(b) Write the function solapados(nombre_archivo) that receives the name of the subtitle file as a parameter and returns a list of tuples with the overlapping subtitles. Each tuple must indicate the 2 overlapping subtitles.
```Python
solapados('los_simpsons.srt')
[('S1', 'S2')]
```
(In this example there is only one, but there could be more.)
End of explanation
# Student solution (skeleton)
def transformar_dialogo(nombre_archivo_srt):
archivo_srt = open(nombre_archivo_srt)
for linea in archivo_srt:
pass
archivo_srt.close()
return
transformar_dialogo('data/los_simpsons.srt')
transformar_dialogo('data/The.Walking.DeadS06E01.srt')
def transformar_dialogo(nombre_archivo_srt):
nombre_archivo_dlg = nombre_archivo_srt.replace(".srt",".dlg")
archivo_srt = open(nombre_archivo_srt)
archivo_dlg = open(nombre_archivo_dlg, "w")
i=0
for linea in archivo_srt:
if i%4==2:
archivo_dlg.write("_"*5 + ": " +linea)
# Increment i
i += 1
archivo_srt.close()
archivo_dlg.close()
return
transformar_dialogo('data/los_simpsons.srt')
transformar_dialogo('data/The.Walking.DeadS06E01.srt')
def transformar_dialogo(nombre_archivo_srt):
nombre_archivo_dlg = nombre_archivo_srt[:-3] + "dlg"
archivo_srt = open(nombre_archivo_srt)
archivo_dlg = open(nombre_archivo_dlg, "w")
i=0
for linea in archivo_srt:
if i%4==2:
archivo_dlg.write("_"*5 + ": " +linea)
# Increment i
i += 1
archivo_srt.close()
archivo_dlg.close()
return
transformar_dialogo('data/los_simpsons.srt')
transformar_dialogo('data/The.Walking.DeadS06E01.srt')
Explanation: Exam (Certamen) exercise: CR, CC, 2012-S2
(c) Subtitles are a good source from which to extract the dialogue of a play or show. Write the function
transformar_dialogo(nombre_archivo) that receives the name of the subtitle file as a parameter and, from it, creates a file NOMBRE.dlg (where NOMBRE is the file name before the .srt) containing the dialogue.
Five underscores and a colon _____: must be added at the beginning of each line (the character's name is meant to be written there later).
In our case, running
```Python
transformar_dialogo('los_simpsons.srt')
```
should generate the file los_simpsons.dlg:
_____: Te voy a pillar Bart
_____: Maldito demonio
_____: No Homero ... grrr
End of explanation |
10,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0/(1.0+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#print("X=",X.shape)
#print("y=",y.shape)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
#print("hidden_outputs=",hidden_outputs.shape)
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#print("final_outputs=",final_outputs.shape)
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y-final_outputs # Output layer error is the difference between desired target and actual output.
#print("error=",error.shape)
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(error, self.weights_hidden_to_output.T)
#print(hidden_error.shape)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
#print("hidden_error_term=",hidden_error_term.shape)
# Weight step (input to hidden)
delta_weights_i_h += np.outer(X,hidden_error_term)
#print("delta_weights_i_h=",delta_weights_i_h.shape)
# Weight step (hidden to output)
delta_weights_h_o += np.outer(hidden_outputs,output_error_term)
#print("delta_weights_h_o=",delta_weights_h_o.shape)
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
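A brief added note on that hint (not in the original project text): because the output activation is the identity function, its derivative is constant,
$$f(x) = x \quad\Rightarrow\quad f'(x) = 1,$$
so the output layer's backpropagated error term is simply the raw error $(y - \hat{y})\cdot 1$, which is what the filled-in solution above uses for output_error_term.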
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 10000
learning_rate = 0.9
hidden_nodes = 16
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
#print("X=",X.shape)
#print("y=",y.shape)
network.train(X, y[:,None])
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'][100:], label='Training loss')
plt.plot(losses['validation'][100:], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
10,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latent Dirichlet Allocation for Text Data
In this assignment you will
apply standard preprocessing techniques on Wikipedia text data
use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
Note to Amazon EC2 users
Step1: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps
Step2: Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note
Step3: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
Step4: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
get the top words in each topic and use these to identify topic themes
predict topic distributions for some example documents
compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
understand the role of model hyperparameters alpha and gamma
Load a fitted topic model
The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
Step5: Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word
Step6: We propose the following themes for each topic
Step7: Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic
Step8: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words
Step9: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama
Step10: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document
Step11: Quiz Question
Step12: Next we add the TF-IDF document representations
Step13: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model
Step14: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist
Step15: Notice that that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example
Step16: Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
Step17: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
Quiz Question | Python Code:
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
# import wiki data
wiki = gl.SFrame('people_wiki.gl/')
wiki
Explanation: Latent Dirichlet Allocation for Text Data
In this assignment you will
apply standard preprocessing techniques on Wikipedia text data
use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Text Data Preprocessing
We'll start by importing our familiar Wikipedia dataset.
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.
End of explanation
wiki_docs = gl.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
Explanation: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:
End of explanation
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
Explanation: Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note: This may take several minutes to run.
End of explanation
topic_model
Explanation: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
End of explanation
topic_model = gl.load_model('lda_assignment_topic_model')
Explanation: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
get the top words in each topic and use these to identify topic themes
predict topic distributions for some example documents
compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
understand the role of model hyperparameters alpha and gamma
Load a fitted topic model
The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
End of explanation
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
Explanation: Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct.
We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.
Quiz Question: Identify the top 3 most probable words for the first topic.
Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
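A hedged sketch of how these two quiz questions could be probed (this block is an addition, not part of the original assignment; it reuses the get_topics() calls that appear elsewhere in this notebook and assumes "the 3rd topic" means topic id 2 when counting from 0):
```python
# Top 3 most probable words for the first topic (topic id 0)
top_words = [x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=3)]
print(top_words[0])

# Sum of the probabilities assigned to the top 50 words in the 3rd topic
print(sum(topic_model.get_topics(topic_ids=[2], num_words=50)['score']))
```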
Let's look at the top 10 words for each topic to see if we can identify any themes:
End of explanation
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
'art and publishing','Business','international athletics','Great Britain and Australia','international music']
Explanation: We propose the following themes for each topic:
topic 0: Science and research
topic 1: Team sports
topic 2: Music, TV, and film
topic 3: American college and politics
topic 4: General politics
topic 5: Art and publishing
topic 6: Business
topic 7: International athletics
topic 8: Great Britain and Australia
topic 9: International music
We'll save these themes for later:
End of explanation
for i in range(10):
plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
Explanation: Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic:
- the weights of the top 100 words, sorted by the size
- the total weight of the top 10 words
Here's a plot for the top 100 words by weight in each topic:
End of explanation
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
Explanation: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words:
End of explanation
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
Explanation: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:
End of explanation
def average_predictions(model, test_document, num_trials=100):
avg_preds = np.zeros((model.num_topics))
for i in range(num_trials):
avg_preds += model.predict(test_document, output_type='probability')[0]
avg_preds = avg_preds/num_trials
result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})
result = result.sort('average predictions', ascending=False)
return result
print average_predictions(topic_model, obama, 100)
Explanation: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
End of explanation
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
Explanation: Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.
Quiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.
Comparing LDA to nearest neighbors for document retrieval
So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations.
In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment.
We'll start by creating the LDA topic distribution representation for each document:
End of explanation
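For the two quiz questions above, one workable approach is to reuse the average_predictions helper defined earlier; a minimal sketch, assuming the relevant articles are stored under the names 'George W. Bush' and 'Steven Gerrard':
bush = gl.SArray([wiki_docs[int(np.where(wiki['name'] == 'George W. Bush')[0])]])
print(average_predictions(topic_model, bush, 100))
gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name'] == 'Steven Gerrard')[0])]])
print(average_predictions(topic_model, gerrard, 100))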
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
Explanation: Next we add the TF-IDF document representations:
End of explanation
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
Explanation: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:
End of explanation
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
Explanation: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
End of explanation
tpm_low_alpha = gl.load_model('lda_low_alpha')
tpm_high_alpha = gl.load_model('lda_high_alpha')
Explanation: Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada.
Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.
Quiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)
Quiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)
Understanding the role of LDA model hyperparameters
Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic.
In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words.
Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.
Quiz Question: What was the value of alpha used to fit our original topic model?
Quiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words.
We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:
- tpm_low_alpha, a model trained with alpha = 1 and default gamma
- tpm_high_alpha, a model trained with alpha = 50 and default gamma
End of explanation
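For the Alex Rodriguez quiz questions above, one way to proceed is to query 5000 neighbors under each representation and look up Mariano Rivera's rank; a sketch, assuming the query result SFrame exposes 'reference_label' and 'rank' columns:
arod = wiki[wiki['name'] == 'Alex Rodriguez']
arod_tf_idf = model_tf_idf.query(arod, label='name', k=5000)
print(arod_tf_idf[arod_tf_idf['reference_label'] == 'Mariano Rivera'])
arod_lda = model_lda_rep.query(arod, label='name', k=5000)
print(arod_lda[arod_lda['reference_label'] == 'Mariano Rivera'])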
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):
fig = plt.figure()
ax = fig.add_subplot(111)
b1 = ax.bar(ind, a, width, color='lightskyblue')
b2 = ax.bar(ind+width, b, width, color='lightcoral')
b3 = ax.bar(ind+(2*width), c, width, color='gold')
ax.set_xticks(ind+width)
ax.set_xticklabels(range(10))
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_ylim(0,ylim)
ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])
plt.tight_layout()
param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',
               xlab='Topics (sorted by weight in Obama article)',ylab='Topic Probability for Obama Article')
Explanation: Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
End of explanation
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('lda_low_gamma')
tpm_high_gamma = gl.load_model('lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a))
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
Explanation: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions.
Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions.
Changing the hyperparameter gamma
Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.
Now we will consider the following two models:
- tpm_low_gamma, a model trained with gamma = 0.02 and default alpha
- tpm_high_gamma, a model trained with gamma = 0.5 and default alpha
End of explanation |
10,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load image
Step2: Calculate Mean Color Of Each Color Channel
Step3: Show Values
Step4: View Mean Image Colors | Python Code:
# Load libraries
import cv2
import numpy as np
from matplotlib import pyplot as plt
Explanation: Title: Using Mean Color As A Feature
Slug: using_mean_color_as_a_feature
Summary: How to use the mean color of an image as a feature using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
# Load image as BGR
image_bgr = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_COLOR)
Explanation: Load image
End of explanation
# Calculate the mean of each channel
channels = cv2.mean(image_bgr)
# Swap blue and red values (making it RGB, not BGR)
observation = np.array([(channels[2], channels[1], channels[0])])
Explanation: Calculate Mean Color Of Each Color Channel
End of explanation
# Show mean channel values
observation
Explanation: Show Values
End of explanation
# Show image
plt.imshow(observation), plt.axis("off")
plt.show()
Explanation: View Mean Image Colors
End of explanation |
10,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>MEME Wrapper Example</h1>
Step1: <h3>E-value of each motif</h3>
Step2: <h2>fit_predict() and fit_transform() example</h2>
Step3: <h3>Print motives as lists</h3>
Step4: <h3>Display Sequence logo of un-aligned motives</h3>
Step5: <h3>Display Logo of specified motif</h3>
Step6: <h3>Multiple Sequence Alignment of motives with Muscle</h3>
Note
Step7: <h3>Display sequence logo of aligned motives</h3>
Step8: <h3>Position Weight Matrices for motifs</h3>
Step9: <h4>Display PWM of single motif</h4>
Step10: <h4>Scoring a sequence w.r.t a motif</h4>
Step11: <h3> Transform with HMM as scoring criteria</h3> | Python Code:
# Meme().display_meme_help()
from eden.util import configure_logging
import logging
configure_logging(logging.getLogger(),verbosity=2)
from utilities import Weblogo
wl = Weblogo(color_scheme='classic')
meme1 = Meme(alphabet="dna", # {ACGT}
gap_in_alphabet=False,
mod="anr", # Any number of repetitions
output_dir="meme_anr",
nmotifs=3, # Number of motives to be found
weblogo_obj = wl
)
meme1.fit(fasta_file="seq18.fa")
predictions = meme1.predict(input_seqs=test, return_list=True)  # `test`: a list of test sequences assumed to be prepared earlier in the notebook
for p in predictions: print p
predictions = meme1.predict(input_seqs="seq9.fa", return_list=False)
for p in predictions: print p
match = meme1.transform(input_seqs=test, return_match=True)
for m in match: print m
match = meme1.transform(input_seqs=test, return_match=False)
for m in match: print m
Explanation: <h1>MEME Wrapper Example</h1>
End of explanation
print meme1.e_values
Explanation: <h3>E-value of each motif</h3>
End of explanation
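Since lower E-values indicate stronger motifs, one might use them to pick the best motif; a sketch, assuming e_values is an ordinary sequence and that motif numbering is 1-based as in the motif_num arguments used later in this notebook:
e_vals = list(meme1.e_values)
best_motif = e_vals.index(min(e_vals)) + 1  # 1-based motif number
print best_motif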
meme2 = Meme(alphabet="dna", mod="anr", nmotifs=3)
predictions = meme2.fit_predict(fasta_file="seq18.fa", return_list=True)
for p in predictions: print p
matches = meme2.fit_transform(fasta_file="seq18.fa", return_match=True)
for m in matches: print m
Explanation: <h2>fit_predict() and fit_transform() example</h2>
End of explanation
#printing motives as lists
for motif in meme1.motives_list:
for m in motif:
print m
print
Explanation: <h3>Print motives as lists</h3>
End of explanation
meme1.display_logo(do_alignment=False)
Explanation: <h3>Display Sequence logo of un-aligned motives</h3>
End of explanation
meme1.display_logo(motif_num=1)
Explanation: <h3>Display Logo of specified motif</h3>
End of explanation
meme1.align_motives() #MSA with Muscle
motives1=meme1.aligned_motives_list
for m in motives1:
for i in m:
print i
print
Explanation: <h3>Multiple Sequence Alignment of motives with Muscle</h3>
Note: Motives in this example were already aligned, hence no dashes appear in the alignment
End of explanation
meme1.display_logo(do_alignment=True)
Explanation: <h3>Display sequence logo of aligned motives</h3>
End of explanation
meme1.display()
meme1.matrix()
Explanation: <h3>Position Weight Matrices for motifs</h3>
End of explanation
meme1.display(motif_num=3)
Explanation: <h4>Display PWM of single motif</h4>
End of explanation
test_seq = 'GGAGAAAATACCGC' * 10
seq_score = meme1.score(motif_num=2, seq=test_seq)
print seq_score
Explanation: <h4>Scoring a sequence w.r.t a motif</h4>
End of explanation
meme2 = Meme(alphabet="dna", scoring_criteria="hmm", k=1, threshold=1.0,mod="anr", nmotifs=3, minw=7, maxw=9)
matches = meme2.fit_transform(fasta_file="seq9.fa", return_match=True)
for m in matches: print m
%%time
# Markov Model score
mm_score = meme2.score(motif_num=2, seq="ACGT"*10)
print mm_score
Explanation: <h3> Transform with HMM as scoring criteria</h3>
End of explanation |
10,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Boston Housing Dataset
Step2: Split Data Into Training And Test Set
Step3: Create Dummy Regression Always Predicts The Mean Value Of Target
Step4: Create Dummy Regression Always Predicts A Constant Value
Step5: Evaluate Performance Metric | Python Code:
# Load libraries
from sklearn.datasets import load_boston
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Explanation: Title: Create Baseline Regression Model
Slug: create_baseline_regression_model
Summary: How to create a baseline regression model in scikit-learn for machine learning in Python.
Date: 2017-09-14 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
Preliminaries
End of explanation
# Load data
boston = load_boston()
# Create features and target
X, y = boston.data, boston.target
Explanation: Load Boston Housing Dataset
End of explanation
# Make test and training split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Explanation: Split Data Into Training And Test Set
End of explanation
# Create a dummy regressor
dummy_mean = DummyRegressor(strategy='mean')
# "Train" dummy regressor
dummy_mean.fit(X_train, y_train)
Explanation: Create Dummy Regression Always Predicts The Mean Value Of Target
End of explanation
# Create a dummy regressor
dummy_constant = DummyRegressor(strategy='constant', constant=20)
# "Train" dummy regressor
dummy_constant.fit(X_train, y_train)
Explanation: Create Dummy Regression Always Predicts A Constant Value
End of explanation
# Get R-squared score
dummy_constant.score(X_test, y_test)
Explanation: Evaluate Performance Metric
End of explanation |
10,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
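As an illustration, an author entry might look like this (the name and e-mail are placeholders, not real values):
# Placeholder values - replace before publishing
DOC.set_author("Jane Doe", "jane.doe@example.org")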
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
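For an ENUM property such as this one, the value must be one of the listed choices; for example (an illustrative choice only, not a statement about this model):
# Illustrative choice from the list above
DOC.set_value("OASIS3-MCT")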
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
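For a 1.N ENUM like this, presumably one set_value call is made per selected choice; an illustrative example (not a statement about this model):
# Illustrative choice from the list above
DOC.set_value("C")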
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using FastText via Gensim
This tutorial is about using the Gensim wrapper for the FastText library for training FastText models, loading them and performing similarity operations and vector lookups analogous to Word2Vec.
When to use FastText?
The main principle behind FastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings.
FastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.
According to a detailed comparison of Word2Vec and FastText in this notebook, FastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases.
Training time for FastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100).
FastText can be used to obtain vectors for out-of-vocabulary (oov) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data.
Training models
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
You need to have FastText setup locally to be able to train models. See installation instructions for FastText if you don't have FastText installed.
Step1: Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the folllowing parameters from the original word2vec -
- model
Step2: Continuation of training with FastText models is not supported.
Saving/loading models
Models can be saved and loaded via the load and save methods.
Step3: The save_word2vec_method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model.
Word vector lookup
FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.
Step4: The word vector lookup operation only works if atleast one of the component character ngrams is present in the training corpus. For example -
Step5: The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary -
Step6: Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have atleast one character ngram present in the training data.
Step7: Syntactically similar words generally have high similarity in FastText models, since a large number of the component char-ngrams will be the same. As a result, FastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.
Other similarity operations - | Python Code:
import gensim, os
from gensim.models.wrappers.fasttext import FastText
# Set FastText home to the path to the FastText executable
ft_home = '/home/jayant/Projects/fastText/fasttext'
# Set file names for train and test data
data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = data_dir + 'lee_background.cor'
model = FastText.train(ft_home, lee_train_file)
print(model)
Explanation: Using FastText via Gensim
This tutorial is about using the Gensim wrapper for the FastText library for training FastText models, loading them and performing similarity operations and vector lookups analogous to Word2Vec.
When to use FastText?
The main principle behind FastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings.
FastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.
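To make the subword idea concrete, here is a small illustrative sketch (not part of the gensim API; the < and > boundary markers and the default n-gram range follow the FastText convention, and the vector table below is invented purely for the example):
import numpy as np

def char_ngrams(word, min_n=3, max_n=6):
    # FastText wraps the word in boundary markers before extracting n-grams
    wrapped = "<" + word + ">"
    grams = set()
    for n in range(min_n, max_n + 1):
        for i in range(len(wrapped) - n + 1):
            grams.add(wrapped[i:i + n])
    return grams

# Toy lookup table mapping n-grams to vectors; in a real model these are learned
rng = np.random.RandomState(0)
ngram_vectors = {g: rng.normal(size=5) for g in char_ngrams("night")}

# The vector for a word is (conceptually) the sum of its known char n-gram vectors,
# which is what allows an unseen form like "nights" to still get a vector
word_vector = sum(ngram_vectors[g] for g in char_ngrams("nights") if g in ngram_vectors)
print(word_vector)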
According to a detailed comparison of Word2Vec and FastText in this notebook, FastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases.
Training time for FastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100).
FastText can be used to obtain vectors for out-of-vocabulary (oov) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data.
Training models
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
You need to have FastText setup locally to be able to train models. See installation instructions for FastText if you don't have FastText installed.
End of explanation
model = FastText.train(ft_home, lee_train_file, size=50, alpha=0.05, min_count=10)
print(model)
Explanation: Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec -
- model: Training architecture. Allowed values: cbow, skipgram (Default cbow)
- size: Size of embeddings to be learnt (Default 100)
- alpha: Initial learning rate (Default 0.025)
- window: Context window size (Default 5)
- min_count: Ignore words with number of occurrences below this (Default 5)
- loss: Training objective. Allowed values: ns, hs, softmax (Default ns)
- sample: Threshold for downsampling higher-frequency words (Default 0.001)
- negative: Number of negative words to sample, for ns (Default 5)
- iter: Number of epochs (Default 5)
- sorted_vocab: Sort vocab by descending frequency (Default 1)
- threads: Number of threads to use (Default 12)
In addition, FastText has two additional parameters -
- min_n: min length of char ngrams to be used (Default 3)
- max_n: max length of char ngrams to be used (Default 6)
These control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to be less than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec.
End of explanation
model.save('saved_fasttext_model')
loaded_model = FastText.load('saved_fasttext_model')
print(loaded_model)
Explanation: Continuation of training with FastText models is not supported.
Saving/loading models
Models can be saved and loaded via the load and save methods.
End of explanation
print('night' in model.wv.vocab)
print('nights' in model.wv.vocab)
print(model['night'])
print(model['nights'])
Explanation: The save_word2vec_format method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model.
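As an aside, a minimal sketch of what that looks like in practice (this assumes gensim's standard KeyedVectors API; the file name is just a placeholder):
from gensim.models import KeyedVectors
# Writing in word2vec format keeps only full-word vectors; the char-ngram table is not exported
model.wv.save_word2vec_format('fasttext_vectors.txt', binary=False)
plain_wv = KeyedVectors.load_word2vec_format('fasttext_vectors.txt', binary=False)
# plain_wv now behaves like ordinary word2vec vectors: no out-of-vocabulary lookups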
Word vector lookup
FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.
End of explanation
# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data
model['axe']
Explanation: The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -
End of explanation
# Tests if word present in vocab
print("word" in model.wv.vocab)
# Tests if vector present for word
print("word" in model)
Explanation: The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary -
End of explanation
print("nights" in model.wv.vocab)
print("night" in model.wv.vocab)
model.similarity("night", "nights")
Explanation: Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.
End of explanation
# The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only
model.most_similar("nights")
model.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])
model.doesnt_match("breakfast cereal dinner lunch".split())
model.most_similar(positive=['baghdad', 'england'], negative=['london'])
model.accuracy(questions='questions-words.txt')
# Word Movers distance
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
# Remove their stopwords.
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stopwords]
sentence_president = [w for w in sentence_president if w not in stopwords]
# Compute WMD.
distance = model.wmdistance(sentence_obama, sentence_president)
distance
Explanation: Syntactically similar words generally have high similarity in FastText models, since a large number of the component char-ngrams will be the same. As a result, FastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.
Other similarity operations -
End of explanation |
10,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Synthetic Features and Outliers
Learning Objectives
Step1: First, we'll import the California housing data into a pandas DataFrame
Step3: We'll set up our plot_to_image function to convert the matplotlib plot specified by figure to a PNG image
Step5: Next, we'll define the function for model training
Step6: Task 1
Step7: In the cell below, create a feature called rooms_per_person, and use that as the input_feature to fit_model().
Step8: What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lower
the final RMSE should be.)
Step9: Task 2
Step10: #TODO
Step11: The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number.
If we plot a histogram of rooms_per_person, we find that we have a few outliers in our input data
Step12: Task 3
Step13: To verify that clipping worked, let's train again and print the calibration data once more | Python Code:
!pip install tensorflow==2.0.0-beta1
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import logging
from packaging import version
from IPython.display import display
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
logging.getLogger('tensorflow').disabled = True
import tensorflow as tf
%load_ext tensorboard
Explanation: Synthetic Features and Outliers
Learning Objectives:
* Create a synthetic feature that is the ratio of two other features
* Use this new feature as an input to a linear regression model
* Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data
Setup
Install latest 2.x.x release for tensorflow
End of explanation
from datetime import datetime
import io
logging.getLogger('tensorboard').disabled = True
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
Explanation: First, we'll import the California housing data into a pandas DataFrame:
End of explanation
def plot_to_image(figure):
Converts the matplotlib plot specified by 'figure' to a PNG image and
returns it. The supplied figure is closed and inaccessible after this call.
# Save the plot to a PNG in memory.
buf = io.BytesIO()
plt.savefig(buf, format='png')
# Closing the figure prevents it from being displayed directly inside
# the notebook.
plt.close(figure)
buf.seek(0)
# Convert PNG buffer to TF image
image = tf.image.decode_png(buf.getvalue(), channels=4)
# Add the batch dimension
image = tf.expand_dims(image, 0)
return image
Explanation: We'll set up our plot_to_image function to convert the matplotlib plot specified by figure to a PNG image
End of explanation
def fit_model(learning_rate,
steps_per_epoch,
batch_size,
input_feature):
Trains a linear regression model of one feature.
Args:
learning_rate: A `float`, the learning rate.
steps_per_epoch: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
Returns:
A Pandas `DataFrame` containing targets and the corresponding predictions done
after training the model.
epochs = 10
features = california_housing_dataframe[[input_feature]].values
label = "median_house_value"
labels = california_housing_dataframe[label].values
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(1, activation='linear', kernel_initializer='zeros')
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate, clipnorm=5.0),
loss='mse',
metrics=[tf.keras.metrics.RootMeanSquaredError()])
sample = california_housing_dataframe.sample(n=300)
logdir = "logs/synthetic_features_and_outliers/plots" + datetime.now().strftime("%Y%m%d-%H%M%S")
scalars_logdir = "logs/synthetic_features_and_outliers/scalars" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir)
# Set up to plot the state of our model's line each epoch.
def create_plt_params(feature, label, epochs=10):
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, epochs)]
return (colors,
(sample[feature].min(), sample[feature].max()),
(0, sample[label].max()))
def create_figure(feature, label, epochs=10):
figure = plt.figure(figsize=(15, 6))
plt.title("Learned Line by Epoch")
plt.ylabel(label)
plt.xlabel(feature)
plt.scatter(sample[feature], sample[label])
return figure
colors, x_min_max, y_min_max = create_plt_params(input_feature, label, epochs)
def log(epoch, logs):
root_mean_squared_error = logs["root_mean_squared_error"]
print(" epoch %02d : %0.2f" % (epoch, root_mean_squared_error))
weight, bias = [x.flatten()[0] for x in model.layers[0].get_weights()]
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array(y_min_max)
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
x_min_max[1]),
x_min_max[0])
y_extents = weight * x_extents + bias
figure = create_figure(input_feature, label, epochs)
plt.plot(x_extents, y_extents, color=colors[epoch])
with file_writer.as_default():
tf.summary.image("Learned Line by Epoch",
plot_to_image(figure),
step=epoch)
model_callback = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda epoch, logs: log(epoch, logs))
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=scalars_logdir,
update_freq='epoch')
print("Train model...")
print("RMSE (on training data):")
history = model.fit(features,
labels,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
callbacks=[model_callback, tensorboard_callback],
verbose=0).history
print("Model training finished.")
calibration_data = pd.DataFrame()
calibration_data["predictions"] = model.predict_on_batch(features).flatten()
calibration_data["targets"] = pd.Series(labels)
display(calibration_data.describe())
root_mean_squared_error = history["root_mean_squared_error"][9]
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
return calibration_data
Explanation: Next, we'll define the function for model training
End of explanation
!rm -rf logs/synthetic_features_and_outliers
Explanation: Task 1: Try a Synthetic Feature
Both the total_rooms and population features count totals for a given city block.
But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of total_rooms and population.
End of explanation
#
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] =
calibration_data = fit_model(
learning_rate=0.00005,
steps_per_epoch=500,
batch_size=5,
input_feature="rooms_per_person"
)
Explanation: In the cell below, create a feature called rooms_per_person, and use that as the input_feature to fit_model().
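One possible way to fill in the TODO above, following the hint that the synthetic feature is the ratio of total_rooms to population (a sketch, not the only valid answer):
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])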
End of explanation
from google.datalab.ml import TensorBoard
TensorBoard().start('logs/synthetic_features_and_outliers')
Explanation: What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lower
the final RMSE should be.)
End of explanation
logdir = "logs/synthetic_features_and_outliers/plots"
file_writer = tf.summary.create_file_writer(logdir + datetime.now().strftime("%Y%m%d-%H%M%S"))
Explanation: Task 2: Identify Outliers
We can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.
Use Pyplot's scatter() to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.
Do you see any oddities? Trace these back to the source data by looking at the distribution of values in rooms_per_person.
End of explanation
# YOUR CODE HERE
Explanation: #TODO: Plot a scatter graph to show the scatter points.
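A minimal sketch for this TODO, mirroring the scatter call used near the end of the notebook:
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
plt.xlabel("predictions")
plt.ylabel("targets")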
End of explanation
figure = plt.figure()
plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist()
with file_writer.as_default():
tf.summary.image("Rooms per person",
plot_to_image(figure),
step=0)
TensorBoard().start('logs/synthetic_features_and_outliers')
Explanation: The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number.
If we plot a histogram of rooms_per_person, we find that we have a few outliers in our input data:
End of explanation
# YOUR CODE HERE
TensorBoard().start('logs/synthetic_features_and_outliers')
Explanation: Task 3: Clip Outliers
See if you can further improve the model fit by setting the outlier values of rooms_per_person to some reasonable minimum or maximum.
For reference, here's a quick example of how to apply a function to a Pandas Series:
clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))
The above clipped_feature will have no values less than 0.
The histogram we created in Task 2 shows that the majority of values are less than 5.
#TODO: Let's clip rooms_per_person to 5, and plot a histogram to double-check the results.
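One possible clipping step for this TODO, reusing the apply() pattern shown above (a sketch):
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["rooms_per_person"].apply(lambda x: min(x, 5)))
_ = california_housing_dataframe["rooms_per_person"].hist()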
End of explanation
calibration_data = fit_model(
learning_rate=0.05,
steps_per_epoch=1000,
batch_size=5,
input_feature="rooms_per_person")
file_writer = tf.summary.create_file_writer(logdir + datetime.now().strftime("%Y%m%d-%H%M%S"))
figure = plt.figure()
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
with file_writer.as_default():
tf.summary.image("Predictions vs Targets",
plot_to_image(figure),
step=0)
TensorBoard().start('logs/synthetic_features_and_outliers')
Explanation: To verify that clipping worked, let's train again and print the calibration data once more:
End of explanation |
10,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Migrating from Spark to BigQuery via Dataproc -- Part 5
Part 1
Step4: Create reporting function
Step5: Test that the function endpoint works
Step6: Deploy the cloud function
Step7: Try it out
Copy the file to the bucket
Step8: Verify that the Cloud Function is being run. You can do this from the Cloud Functions part of the GCP Console.
Once the function is complete (in about 30 seconds), see if the output folder contains the report | Python Code:
%%bash
wget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz
gunzip kddcup.data_10_percent.gz
BUCKET='cloud-training-demos-ml' # CHANGE
gsutil cp kdd* gs://$BUCKET/
bq mk sparktobq
Explanation: Migrating from Spark to BigQuery via Dataproc -- Part 5
Part 1: The original Spark code, now running on Dataproc (lift-and-shift).
Part 2: Replace HDFS by Google Cloud Storage. This enables job-specific-clusters. (cloud-native)
Part 3: Automate everything, so that we can run in a job-specific cluster. (cloud-optimized)
Part 4: Load CSV into BigQuery, use BigQuery. (modernize)
Part 5: Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless)
Catch-up cell
End of explanation
%%writefile main.py
from google.cloud import bigquery
import google.cloud.storage as gcs
import tempfile
import os
def create_report(BUCKET, gcsfilename, tmpdir):
Creates report in gs://BUCKET/ based on contents in gcsfilename (gs://bucket/some/dir/filename)
# connect to BigQuery
client = bigquery.Client()
destination_table = 'sparktobq.kdd_cup'
# Specify table schema. Autodetect is not a good idea for production code
job_config = bigquery.LoadJobConfig()
schema = [
bigquery.SchemaField("duration", "INT64"),
]
for name in ['protocol_type', 'service', 'flag']:
schema.append(bigquery.SchemaField(name, "STRING"))
for name in 'src_bytes,dst_bytes,wrong_fragment,urgent,hot,num_failed_logins'.split(','):
schema.append(bigquery.SchemaField(name, "INT64"))
schema.append(bigquery.SchemaField("unused_10", "STRING"))
schema.append(bigquery.SchemaField("num_compromised", "INT64"))
schema.append(bigquery.SchemaField("unused_12", "STRING"))
for name in 'su_attempted,num_root,num_file_creations'.split(','):
schema.append(bigquery.SchemaField(name, "INT64"))
for fieldno in range(16, 41):
schema.append(bigquery.SchemaField("unused_{}".format(fieldno), "STRING"))
schema.append(bigquery.SchemaField("label", "STRING"))
job_config.schema = schema
# Load CSV data into BigQuery, replacing any rows that were there before
job_config.create_disposition = bigquery.CreateDisposition.CREATE_IF_NEEDED
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.skip_leading_rows = 0
job_config.source_format = bigquery.SourceFormat.CSV
load_job = client.load_table_from_uri(gcsfilename, destination_table, job_config=job_config)
print("Starting LOAD job {} for {}".format(load_job.job_id, gcsfilename))
load_job.result() # Waits for table load to complete.
print("Finished LOAD job {}".format(load_job.job_id))
# connections by protocol
sql =
SELECT COUNT(*) AS count
FROM sparktobq.kdd_cup
GROUP BY protocol_type
ORDER by count ASC
connections_by_protocol = client.query(sql).to_dataframe()
connections_by_protocol.to_csv(os.path.join(tmpdir,"connections_by_protocol.csv"))
print("Finished analyzing connections")
# attacks plot
sql =
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM sparktobq.kdd_cup
GROUP BY protocol_type, state
ORDER BY 3 DESC
attack_stats = client.query(sql).to_dataframe()
ax = attack_stats.plot.bar(x='protocol_type', subplots=True, figsize=(10,25))
ax[0].get_figure().savefig(os.path.join(tmpdir,'report.png'));
print("Finished analyzing attacks")
bucket = gcs.Client().get_bucket(BUCKET)
for blob in bucket.list_blobs(prefix='sparktobq/'):
blob.delete()
for fname in ['report.png', 'connections_by_protocol.csv']:
bucket.blob('sparktobq/{}'.format(fname)).upload_from_filename(os.path.join(tmpdir,fname))
print("Uploaded report based on {} to {}".format(gcsfilename, BUCKET))
def bigquery_analysis_cf(data, context):
# check that trigger is for a file of interest
bucket = data['bucket']
name = data['name']
if ('kddcup' in name) and not ('gz' in name):
filename = 'gs://{}/{}'.format(bucket, data['name'])
print(bucket, filename)
with tempfile.TemporaryDirectory() as tmpdir:
create_report(bucket, filename, tmpdir)
%%writefile requirements.txt
google-cloud-bigquery
google-cloud-storage
pandas
matplotlib
# verify that the code in the CF works
name='kddcup.data_10_percent'
if 'kddcup' in name and not ('gz' in name):
print(True)
Explanation: Create reporting function
End of explanation
# test that the function works
import main as bq
BUCKET='cloud-training-demos-ml' # CHANGE
try:
bq.create_report(BUCKET, 'gs://{}/kddcup.data_10_percent'.format(BUCKET), "/tmp")
except Exception as e:
print(e.errors)
Explanation: Test that the function endpoint works
End of explanation
!gcloud functions deploy bigquery_analysis_cf --runtime python37 --trigger-resource $BUCKET --trigger-event google.storage.object.finalize
Explanation: Deploy the cloud function
End of explanation
!gsutil rm -rf gs://$BUCKET/sparktobq
!gsutil cp kddcup.data_10_percent gs://$BUCKET/
Explanation: Try it out
Copy the file to the bucket:
End of explanation
!gsutil ls gs://$BUCKET/sparktobq
Explanation: Verify that the Cloud Function is being run. You can do this from the Cloud Functions part of the GCP Console.
Once the function is complete (in about 30 seconds), see if the output folder contains the report:
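You can also follow the execution from the notebook by reading the function's logs (shown here as an assumed invocation of the standard gcloud CLI; adjust the limit as needed):
!gcloud functions logs read bigquery_analysis_cf --limit 20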
End of explanation |
10,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(working_with_InferenceData)=
Working with InferenceData
Here we present a collection of common manipulations you can use while working with InferenceData.
Step1: display_expand_data=False makes the default view for {class}xarray.DataArray fold the data values to a single line. To explore the values, click on the {fas}database icon on the left of the view, right under the xarray.DataArray text. It has no effect on Dataset objects that already default to folded views.
display_expand_attrs=False folds the attributes in both DataArray and Dataset objects to keep the views shorter. In this page we print DataArrays and Datasets several times and they always have the same attributes.
Step2: Get the dataset corresponding to a single group
Step3:
Step4: Combine chains and draws
Step5: You can also use {meth}xarray.Dataset.stack if you only want to combine the chain and draw dimensions. {func}arviz.extract_dataset is a convenience function aimed at taking care of the most common subsetting operations with MCMC samples. It can
Step6:
Step7: Get the dimension lengths
Let’s check how many groups are in our hierarchical model.
Step8: Get coordinate values
What are the names of the groups in our hierarchical model? You can access them from the coordinate name school in this case
Step9: Get a subset of chains
Let’s keep only chain 0 and 2 here. For the subset to take effect on all relevant InferenceData groups
Step10: Remove the first n draws (burn-in)
Let’s say we want to remove the first 100 samples, from all the chains and all InferenceData groups with draws.
Step11: If you check the burnin object you will see that the groups posterior, posterior_predictive, prior and sample_stats have 400 draws compared to idata that has 500. The group observed_data has not been affected because it does not have the draw dimension. Alternatively, you can specify which group or groups you want to change.
Step12: Compute posterior mean values along draw and chain dimensions
To compute the mean value of the posterior samples, do the following
Step13: This computes the mean along all dimensions. This is probably what you want for mu and tau, which have two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension school.
You can specify along which dimension you want to compute the mean (or other functions).
Step14: Compute and store posterior pushforward quantities
We use "posterior pushfoward quantities" to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.
You can use xarray for these pushforward operations and store them as a new variable in the posterior group. You'll then be able to plot them with ArviZ functions, calculate stats and diagnostics on them (like the {func}~arviz.mcse) or save and share the inferencedata object with the pushforward quantities included.
Compute the rolling mean of $\log(\tau)$ with {meth}xarray.DataArray.rolling, storing the result in the posterior
Step15: Using xarray for pusforward calculations has all the advantages of working with xarray. It also inherits the disadvantages of working with xarray, but we believe those to be outweighed by the advantages, and we have already shown how to extract the data as NumPy arrays. Working with InferenceData is working mainly with xarray objects and this is what is shown in this guide.
Some examples of these advantages are specifying operations with named dimensions instead of positional ones (as seen in some previous sections),
automatic alignment and broadcasting of arrays (as we'll see now),
or integration with Dask (as shown in the {ref}dask_for_arviz guide).
In this cell you will compute pairwise differences between schools on their mean effects (variable theta).
To do so, subtract the variable theta after renaming the school dimension to the original variable.
Xarray then aligns and broadcasts the two variables because they have different dimensions, and
the result is a 4d variable with all the pointwise differences.
Eventually, store the result in the theta_school_diff variable
Step16:
Step17: Advanced subsetting
To select the value corresponding to the difference between the Choate and Deerfield schools do
Step18: For more advanced subsetting (the equivalent to what is sometimes called "fancy indexing" in NumPy) you need to provide the indices as {class}~xarray.DataArray objects
Step19: Using lists or NumPy arrays instead of DataArrays does colum/row based indexing. As you can see, the result has 9 values of theta_shool_diff instead of the 3 pairs of difference we selected in the previous cell
Step20: Add new chains using concat
After checking the {func}~arviz.mcse and realizing you need more samples, you rerun the model with two chains
and obtain an idata_rerun object.
Step21: You can combine the two into a single InferenceData object using {func}arviz.concat
Step22: Add groups to InferenceData objects
You can also add new groups to InferenceData objects with the {meth}~arviz.InferenceData.extend (if the new groups are already in an InferenceData object) or with {meth}~arviz.InferenceData.add_groups (if the new groups are dictionaries or xarray.Dataset objects). | Python Code:
import arviz as az
import numpy as np
import xarray as xr
xr.set_options(display_expand_data=False, display_expand_attrs=False);
Explanation: (working_with_InferenceData)=
Working with InferenceData
Here we present a collection of common manipulations you can use while working with InferenceData.
End of explanation
idata = az.load_arviz_data("centered_eight")
idata
Explanation: display_expand_data=False makes the default view for {class}xarray.DataArray fold the data values to a single line. To explore the values, click on the {fas}database icon on the left of the view, right under the xarray.DataArray text. It has no effect on Dataset objects that already default to folded views.
display_expand_attrs=False folds the attributes in both DataArray and Dataset objects to keep the views shorter. In this page we print DataArrays and Datasets several times and they always have the same attributes.
End of explanation
post = idata.posterior
post
Explanation: Get the dataset corresponding to a single group
End of explanation
post["log_tau"] = np.log(post["tau"])
idata.posterior
Explanation: :::{tip}
You'll have noticed we stored the posterior group in a new variable: post. As .copy() was not called, now using idata.posterior or post is equivalent.
Use this to keep your code short yet easy to read. Store the groups you'll need very often as separate variables to use explicitly, but don't delete the InferenceData parent. You'll need it for many ArviZ functions to work properly. For example: {func}~arviz.plot_pair needs data from sample_stats group to show divergences, {func}~arviz.compare needs data from both log_likelihood and posterior groups, {func}~arviz.plot_loo_pit needs not 2 but 3 groups: log_likelihood, posterior_predictive and posterior.
:::
Add a new variable
End of explanation
stacked = az.extract_dataset(idata)
stacked
Explanation: Combine chains and draws
End of explanation
az.extract_dataset(idata, num_samples=100)
Explanation: You can also use {meth}xarray.Dataset.stack if you only want to combine the chain and draw dimensions. {func}arviz.extract_dataset is a convenience function aimed at taking care of the most common subsetting operations with MCMC samples. It can:
- Combine chains and draws
- Return a subset of variables (with optional filtering with regular expressions or string matching)
- Return a subset of samples. Moreover by default it returns a random subset to prevent getting non-representative samples due to bad mixing.
- Access any group (see the short variable-filtering sketch just below this list)
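A quick sketch of the variable-filtering option mentioned in the list above (var_names is the argument name in recent ArviZ versions; adjust if your version differs):
az.extract_dataset(idata, var_names="theta", num_samples=50)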
(idata/random_subset)=
Get a random subset of the samples
End of explanation
stacked.mu.values
Explanation: :::{tip}
Use a random seed to get the same subset from multiple groups: az.extract_dataset(idata, num_samples=100, rng=3) and az.extract_dataset(idata, group="log_likelihood", num_samples=100, rng=3) will continue to have matching samples
:::
Obtain a NumPy array for a given parameter
Let's say we want to get the values for mu as a NumPy array.
End of explanation
len(idata.observed_data.school)
Explanation: Get the dimension lengths
Let’s check how many groups are in our hierarchical model.
End of explanation
idata.observed_data.school
Explanation: Get coordinate values
What are the names of the groups in our hierarchical model? You can access them from the coordinate name school in this case
End of explanation
idata.sel(chain=[0, 2])
Explanation: Get a subset of chains
Let’s keep only chain 0 and 2 here. For the subset to take effect on all relevant InferenceData groups: posterior, sample_stats, log_likelihood, posterior_predictive we will use the {meth}arviz.InferenceData.sel, the method of InferenceData instead of {meth}xarray.Dataset.sel.
End of explanation
idata.sel(draw=slice(100, None))
Explanation: Remove the first n draws (burn-in)
Let’s say we want to remove the first 100 samples, from all the chains and all InferenceData groups with draws.
End of explanation
idata.sel(draw=slice(100, None), groups="posterior")
Explanation: If you check the burnin object you will see that the groups posterior, posterior_predictive, prior and sample_stats have 400 draws compared to idata that has 500. The group observed_data has not been affected because it does not have the draw dimension. Alternatively, you can specify which group or groups you want to change.
End of explanation
post.mean()
Explanation: Compute posterior mean values along draw and chain dimensions
To compute the mean value of the posterior samples, do the following:
End of explanation
post.mean(dim=['chain', 'draw'])
Explanation: This computes the mean along all dimensions. This is probably what you want for mu and tau, which have two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension school.
You can specify along which dimension you want to compute the mean (or other functions).
End of explanation
post["mlogtau"] = post["log_tau"].rolling({'draw': 50}).mean()
Explanation: Compute and store posterior pushforward quantities
We use "posterior pushfoward quantities" to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.
You can use xarray for these pushforward operations and store them as a new variable in the posterior group. You'll then be able to plot them with ArviZ functions, calculate stats and diagnostics on them (like the {func}~arviz.mcse) or save and share the inferencedata object with the pushforward quantities included.
Compute the rolling mean of $\log(\tau)$ with {meth}xarray.DataArray.rolling, storing the result in the posterior
End of explanation
post['theta_school_diff'] = post.theta - post.theta.rename(school="school_bis")
Explanation: Using xarray for pushforward calculations has all the advantages of working with xarray. It also inherits the disadvantages of working with xarray, but we believe those to be outweighed by the advantages, and we have already shown how to extract the data as NumPy arrays. Working with InferenceData is working mainly with xarray objects and this is what is shown in this guide.
Some examples of these advantages are specifying operations with named dimensions instead of positional ones (as seen in some previous sections),
automatic alignment and broadcasting of arrays (as we'll see now),
or integration with Dask (as shown in the {ref}dask_for_arviz guide).
In this cell you will compute pairwise differences between schools on their mean effects (variable theta).
To do so, subtract the variable theta after renaming the school dimension to the original variable.
Xarray then aligns and broadcasts the two variables because they have different dimensions, and
the result is a 4d variable with all the pointwise differences.
Eventually, store the result in the theta_school_diff variable:
End of explanation
post
Explanation: :::{note}
:class: dropdown
This same operation using NumPy would require manual alignment of the two arrays to make sure they broadcast correctly. The could would be something like:
python
theta_school_diff = theta[:, :, :, None] - theta[:, :, None, :]
:::
The theta_school_diff variable in the posterior has kept the named dimensions and coordinates:
End of explanation
post['theta_school_diff'].sel(school="Choate", school_bis="Deerfield")
Explanation: Advanced subsetting
To select the value corresponding to the difference between the Choate and Deerfield schools do:
End of explanation
school_idx = xr.DataArray(["Choate", "Hotchkiss", "Mt. Hermon"], dims=["pairwise_school_diff"])
school_bis_idx = xr.DataArray(["Deerfield", "Choate", "Lawrenceville"], dims=["pairwise_school_diff"])
post['theta_school_diff'].sel(school=school_idx, school_bis=school_bis_idx)
Explanation: For more advanced subsetting (the equivalent to what is sometimes called "fancy indexing" in NumPy) you need to provide the indices as {class}~xarray.DataArray objects:
End of explanation
post['theta_school_diff'].sel(
school=["Choate", "Hotchkiss", "Mt. Hermon"],
school_bis=["Deerfield", "Choate", "Lawrenceville"]
)
Explanation: Using lists or NumPy arrays instead of DataArrays does column/row based indexing. As you can see, the result has 9 values of theta_school_diff instead of the 3 pairs of differences we selected in the previous cell:
End of explanation
idata_rerun = idata.sel(chain=[0, 1]).copy().assign_coords(coords={"chain":[4,5]},groups="posterior_groups")
Explanation: Add new chains using concat
After checking the {func}~arviz.mcse and realizing you need more samples, you rerun the model with two chains
and obtain an idata_rerun object.
End of explanation
idata_complete = az.concat(idata, idata_rerun, dim="chain")
idata_complete.posterior.dims["chain"]
Explanation: You can combine the two into a single InferenceData object using {func}arviz.concat:
End of explanation
rng = np.random.default_rng(3)
idata.add_groups(
{"predictions": {"obs": rng.normal(size=(4, 500, 2))}},
dims={"obs": ["new_school"]},
coords={"new_school": ["Essex College", "Moordale"]}
)
idata
Explanation: Add groups to InferenceData objects
You can also add new groups to InferenceData objects with the {meth}~arviz.InferenceData.extend (if the new groups are already in an InferenceData object) or with {meth}~arviz.InferenceData.add_groups (if the new groups are dictionaries or xarray.Dataset objects).
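For completeness, a minimal sketch of the extend route (idata_aux here stands for another InferenceData object you already have; the name is only for the example):
idata.extend(idata_aux)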
End of explanation |
10,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seaborn - grids and customization
ToC
- Pairgrids
- lmplot() for scatter and regression per category
- FacetGrid
- Customizing grids
- Fig and font size
Step1: Pairgrids
Pairgrid is similar to pairplot, except that it returns an empty grid that you can fill up with desired plots later. Refresher on pairplot
Step2: lmplot() for scatter and regression per category
Sometimes, you need to do a joinplot() but split it by some categorical column. You can custom build it using FacetGrid shown in next section. However, seaborn provides a convenience function called lmplot(). Note
Step3: FacetGrid
During EDA, you want to find the distribution of data by sub-categories, sub-conditions. You can do so by building FacetGrids. As it means, you get a grid for every facet of the data.
Step4: Suppose we want to visualize total_bill by time of day and wheather or not it was a smoker. You need to filter data out then make dist plots. YOu can do all of that in 1 step with FacetGrids.
Step5: Customizing grids
If you dont like the pale blue background of seaborn plots, you can modify that with set_style.
<blockquote><b>Note
Step6: Fig and font size
You can use matplotlib figsize but have to specify that as a context as well.
Step7: Using seaborn context
You can use the set_context() to pick sizing templates
Step8: Another way to set the size is to access the fig handle direclty | Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
iris= sns.load_dataset('iris')
iris.head()
Explanation: Seaborn - grids and customization
ToC
- Pairgrids
- lmplot() for scatter and regression per category
- FacetGrid
- Customizing grids
- Fig and font size
End of explanation
grd = sns.PairGrid(data=iris)
#then you can assign what you want plotted for diagonal, above diagonal, below diagonal.
# when mapping, pass just function references, don't call the function itself
grd.map_diag(sns.distplot)
grd.map_upper(plt.scatter)
grd.map_lower(sns.kdeplot)
Explanation: Pairgrids
Pairgrid is similar to pairplot, except that it returns an empty grid that you can fill up with desired plots later. Refresher on pairplot
End of explanation
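The next cell shows the call pattern on a hurricane-track style DataFrame (set1) that is assumed to have been loaded elsewhere, so it will not run as-is in this notebook. A self-contained sketch of the same idea using the tips dataset would be:
tips_demo = sns.load_dataset('tips')
sns.lmplot(x='total_bill', y='tip',
           col='sex',            # one regression panel per category
           data=tips_demo,
           line_kws={'color': 'green'})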
pgrid = sns.lmplot(x='min_season', y='min_pressure_merged',
col='any_basin', # the column by which you need to split - needs to be categorical
data=set1,
col_wrap=3, # number of columns per row
sharex=False, sharey=False, # will repeat ticks, coords for each plot
line_kws={'color':'green'} # symbol for regression line
)
Explanation: lmplot() for scatter and regression per category
Sometimes, you need to do a jointplot() but split it by some categorical column. You can custom build it using FacetGrid, shown in the next section. However, seaborn provides a convenience function called lmplot(). Note: In previous pages, you created lmplots() for just 2 columns without any category.
End of explanation
#load tips data
tips = sns.load_dataset('tips')
tips.head()
Explanation: FacetGrid
During EDA, you want to find the distribution of data by sub-categories and sub-conditions. You can do so by building FacetGrids. As the name implies, you get a grid for every facet of the data.
End of explanation
#for each unique value in `time` you get a row and
# each unique value in `smoker` you get a col
fg = sns.FacetGrid(data=tips, row='time', col='smoker')
#now map a plot for each of the grid
fg.map(sns.distplot, 'total_bill')
Explanation: Suppose we want to visualize total_bill by time of day and whether or not it was a smoker. You would need to filter the data and then make dist plots. You can do all of that in one step with FacetGrids.
End of explanation
sns.set_style(style='ticks') #ticks, white, dark, darkgrid, whitegrid
#redraw the facet grid from above
fg = sns.FacetGrid(data=tips, row='time', col='smoker')
#now map a plot for each of the grid
fg.map(sns.distplot, 'total_bill')
Explanation: Customizing grids
If you don't like the pale blue background of seaborn plots, you can modify that with set_style.
<blockquote><b>Note:</b> Using set_style() will control the appearance for the entire notebook and all future plots</blockquote>
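If you only want a style for a single figure, seaborn also provides a context-manager form (a brief sketch, not used elsewhere in this notebook):
with sns.axes_style('white'):
    sns.distplot(tips['total_bill'])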
End of explanation
plt.figure(figsize=(5,5)) #generate a fig, sns will piggyback this with the plot
sns.distplot(tips['total_bill'])
Explanation: Fig and font size
You can use matplotlib's figsize, but you have to create the figure before calling the seaborn plotting function so that the plot is drawn onto it.
End of explanation
sns.set_context(context='poster', font_scale=0.8)
# valid contexts = paper, notebook, talk, poster -
# with notebook being 1:1 and paper being smaller and poster being largest
#draw the facet grid
fg = sns.FacetGrid(data=tips, row='smoker', col='time')
fg.map(sns.distplot, 'total_bill')
Explanation: Using seaborn context
You can use set_context() to pick sizing templates
End of explanation
#draw the facet grid
fg = sns.FacetGrid(data=tips, row='smoker', col='time')
#set the size
fg.fig.set_size_inches(w=10, h=10)
#plot the fig
fg.map(sns.distplot, 'total_bill')
Explanation: Another way to set the size is to access the fig handle directly
End of explanation |
10,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST in Keras with Tensorboard
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Dataset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
Parameters
Step1: Imports
Step3: TPU/GPU detection
Step4: Colab-only auth for this notebook and the TPU
Step5: tf.data.Dataset
Step6: Let's have a look at the data
Step7: Keras model
Step8: Train and validate the model
Step9: Visualize predictions
Step10: Export the model for serving from ML Engine
Step11: Deploy the trained model to AI Platform
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Cloud Configuration
Step12: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https
Step13: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work. | Python Code:
BATCH_SIZE = 64
LEARNING_RATE = 0.02
# GCS bucket for training logs and for saving the trained model
# You can leave this empty for local saving, unless you are using a TPU.
# TPUs do not have access to your local instance and can only write to GCS.
BUCKET="" # a valid bucket name must start with gs://
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: MNIST in Keras with Tensorboard
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Datset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
Parameters
End of explanation
import os, re, math, json, time
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
Explanation: Imports
End of explanation
tpu = None
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection relies on TPU_NAME env var
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu, steps_per_run=100)
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
gpus = tf.config.experimental.list_logical_devices("GPU")
if len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])
print("running on multiple GPUs")
else:
strategy = tf.distribute.get_strategy() # the default strategy works on CPU and single GPU
print("Running on {}".format("a single GPU" if len(gpus)==1 else "CPU"))
# adjust batch size and learning rate for distributed computing
global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs.
learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
#@title visualization utilities [RUN ME]
# This cell contains helper functions used for visualization
# and downloads only. You can skip reading it. There is very
# little useful Keras/Tensorflow code here.
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
if tf.executing_eagerly():
# This is the TF 2.0 "eager execution" way of iterating through a tf.data.Dataset
for v_images, v_labels in validation_dataset:
break
for t_images, t_labels in unbatched_train_ds.batch(N):
break
validation_digits = v_images.numpy()
validation_labels = v_labels.numpy()
training_digits = t_images.numpy()
training_labels = t_labels.numpy()
else:
# This is the legacy TF 1.x way of iterating through a tf.data.Dataset
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
Explanation: TPU/GPU detection
End of explanation
#IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
#if IS_COLAB_BACKEND:
# from google.colab import auth
# auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
Explanation: Colab-only auth for this notebook and the TPU
End of explanation
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, global_batch_size)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4%— sometimes 99.5%— accuracy in 10 epochs (with a batch size of 64)
def make_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
tf.keras.layers.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
tf.keras.layers.Activation('relu'), # activation after batch norm
tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=False),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5), # Dropout on dense layer only
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
with strategy.scope(): # the new way of handling distribution strategies in Tensorflow 1.14+
model = make_model()
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: learning_rate * math.pow(0.5, 1+epoch) + learning_rate/200, verbose=True)
# set up Tensorboard logs
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S")
log_dir=os.path.join(BUCKET, 'mnist-logs', timestamp)
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq=50*global_batch_size)
print("Tensorboard logs written to: ", log_dir)
Explanation: Keras model: 3 convolutional layers, 2 dense layers
End of explanation
EPOCHS = 10
steps_per_epoch = 60000//global_batch_size # 60,000 items in this dataset
print("Step (batches) per epoch: ", steps_per_epoch)
# Counting steps and batches on TPU: the tpu.keras_to_tpu_model API regards the batch size of the input dataset
# as the per-core batch size. The effective batch size is 8x more because Cloud TPUs have 8 cores. It increments
# the step by +8 every time a global batch (8 per-core batches) is processed. Therefore batch size and steps_per_epoch
# settings can stay as they are for TPU training. The training will just go faster.
# Warning: this might change in the final version of the Keras/TPU API.
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay, tb_callback])
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
def call(self, inputs):
# When the deployed model is called through its REST API,
# the JSON payload is parsed automatically, transformed into
# a tensor and passed to this input layer. You can perform
# additional transformations, such as decoding JPEGs for example,
# before sending the data to your model. However, you can only
# use tf.xxxx operations.
return inputs
# little wrinkle: must copy the model from TPU to CPU manually. This is a temporary workaround.
restored_model = make_model()
restored_model.set_weights(model.get_weights()) # this copied the weights from TPU, does nothing on GPU
# add the serving input layer
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, 28*28)))
serving_model.add(restored_model)
export_path = os.path.join(BUCKET, 'mnist-export', timestamp)
tf.saved_model.save(serving_model, export_path)
print("Model exported to: ", export_path)
Explanation: Export the model for serving from ML Engine
End of explanation
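If you want to double-check what was exported, TensorFlow's saved_model_cli can list the serving signature (a sketch; it assumes the saved_model_cli tool that ships with TensorFlow is on the path and that the notebook can read the export location):
# The input name reported here should correspond to the "serving" layer defined above
!saved_model_cli show --dir {export_path} --all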
# Enable model deployment here
DEPLOY = False # #@param {type:"boolean"}
# Create the model only once, after that, create new versions of the same model
CREATE_MODEL = True #@param {type:"boolean"}
# Models are deployed in your cloud project
PROJECT = "" #@param {type:"string"}
MODEL_NAME = "mnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
if DEPLOY:
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', export_path), 'For this part, the model must have been exported to a GCS bucket.'
Explanation: Deploy the trained model to AI Platform
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Cloud Configuration
End of explanation
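When you are completely done, one possible way to empty the bucket is with the gsutil CLI (left commented out on purpose; double-check the paths before deleting anything):
# Removes the exported model and the Tensorboard logs written earlier (irreversible)
# !gsutil -m rm -r {BUCKET}/mnist-export {BUCKET}/mnist-logs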
# Create the model
if DEPLOY and CREATE_MODEL:
!gcloud ai-platform models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
if DEPLOY:
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} --origin="{export_path}" --project={PROJECT} --runtime-version=1.13 --python-version=3.5
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
End of explanation
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because the ServingInput layer was named "serving". Keras appends "_input"
f.write(data+'\n')
if DEPLOY: # Request online predictions from deployed model (REST API) using the "gcloud ai-platform" command line.
predictions = !gcloud ai-platform predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
probabilities = np.stack([json.loads(p) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
predictions = np.argmax(probabilities, axis=1)
display_top_unrecognized(digits, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
End of explanation |
10,632 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a dataframe whose last column is the target and the rest of the columns are the features. | Problem:
import numpy as np
import pandas as pd
data = load_data()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2,
random_state=42) |
10,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stellar Classification
Background
The
Harvard Spectral Classification system for stars
classifies stars based on their spectral type - where the type of a star is designated as a letter that corresponds to a given temperature range (since different temperatures correspond to different spectra!).
Below there is a list of temperatures in Kelvin of a stellar cluster.
Using the table linked from the above Wikipedia page, let's classify each of the stars in our temperature list.
Step1: Using if-elif for discrete classification
We discussed elif, the useful sibling of if and else, briefly in lecture.
In our current situation where each star will have exactly one spectral type, elif will really come through to make our if statements more efficient.
Unlike using many if statements, elif only executes if no previous statements have been deemed True.
This is nice, especially if we can anticipate which scenarios are most probable.
Here's an example: let's start with a simple classification problem to get into the mindset of if-elif-else logic.
We want to classify if a random number n is between 0 and 100, 101 and 149, or 150 to infinity.
This could be useful if, for example, we wanted to classify a person's IQ score.
Fill in the if-elif-else statements below so that our number, n, will be classified.
Use a print() statement to print out n and its classification (make sure you are descriptive!)
You can use the following template
Step2: Test your statement a few times so that you see if it works for various numbers.
Every time you run the cell, a new random number will be chosen, but you can also set it to make sure that the code works correctly.
Just comment out the random_number() call by putting a # in front of it.
Make sure to also test the boundary numbers, as they may act odd if there is a precarious <=.
The loop
We have a list of stellar classifications above.
Our new classifier will be a lot like the number classifier, but you will need to use
the stellar classification boundaries in Wikipedia's table
instead of our previous boundaries.
Another thing you will need to do is make a loop so that each star in temperatures is classified within one cell!
You can do this with a while-loop, using a dummy index that goes up to len(temperatures), or you can try out the for-loop we taught you.
Recall that you can iterate over objects in a list with the following | Python Code:
# These are your stellar temperatures, you're welcome!
temperatures = [5809, 16589, 4698, 1869, 37809, 8634]
Explanation: Stellar Classification
Background
The
Harvard Spectral Classification system for stars
classifies stars based on their spectral type - where the type of a star is designated as a letter that corresponds to a given temperature range (since different temperatures correspond to different spectra!).
Below there is a list of temperatures in Kelvin of a stellar cluster.
Using the table linked from the above Wikipedia page, let's classify each of the stars in our temperature list.
End of explanation
# Fill in the parentheses. Don't forget indentation!
n = random_number(50,250) # this should be given!
if ( n <= 100 ):
print(n,'is less than or equal to 100.')
elif (100 < n <= 150):
print(n,'is between 100 and 150.')
else:
print(n, 'is greater than or equal to 150.')
Explanation: Using if-elif for discrete classification
We discussed elif, the useful sibling of if and else, briefly in lecture.
In our current situation where each star will have exactly one spectral type, elif will really come through to make our if statements more efficient.
Unlike using many if statements, elif only executes if no previous statements have been deemed True.
This is nice, especially if we can anticipate which scenarios are most probable.
Here's an example: let's start with a simple classification problem to get into the mindset of if-elif-else logic.
We want to classify if a random number n is between 0 and 100, 101 and 149, or 150 to infinity.
This could be useful if, for example, we wanted to classify a person's IQ score.
Fill in the if-elif-else statements below so that our number, n, will be classified.
Use a print() statement to print out n and its classification (make sure you are descriptive!)
You can use the following template: print(n, 'your description here')
End of explanation
# Define your loop here
for temp in temperatures:
if temp < 3700:
print('Star',temp,'K is type', 'M')
elif 3700 <= temp < 5200:
print('Star',temp,'K is type', 'K')
elif 5200 <= temp < 6000:
print('Star',temp,'K is type', 'G')
elif 6000 <= temp < 7500:
print('Star',temp,'K is type', 'F')
elif 7500 <= temp < 10000:
print('Star',temp,'K is type', 'A')
elif 10000 <= temp < 30000:
print('Star',temp,'K is type', 'B')
else: # Greater than 30000:
print('Star', temp, 'K is type', 'O')
Explanation: Test your statement a few times so that you see if it works for various numbers.
Every time you run the cell, a new random number will be chosen, but you can also set it to make sure that the code works correctly.
Just comment out the random_number() call by putting a # in front of it.
Make sure to also test the boundary numbers, as they may act odd if there is a precarious <=.
The loop
We have a list of stellar classifications above.
Our new classifier will be a lot like the number classifier, but you will need to use
the stellar classification boundaries in Wikipedia's table
instead of our previous boundaries.
Another thing you will need to do is make a loop so that each star in temperatures is classified within one cell!
You can do this with a while-loop, using a dummy index that goes up to len(temperatures) (a sketch of that version appears after the for-loop template below), or you can try out the for-loop we taught you.
Recall that you can iterate over objects in a list with the following:
~~~Python
for item in my_list:
print(item) # just prints the item, your code will be different!
~~~
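The while-loop version mentioned above would look roughly like this (a sketch with a dummy index; your classification logic replaces the print):
~~~Python
i = 0
while i < len(temperatures):
    print(temperatures[i]) # classify temperatures[i] here instead
    i = i + 1
~~~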
Construct a loop such that, for each temperature in temperatures, you will print out the star's temperature and classification.
End of explanation |
10,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This "flow downslope" example involves four sub-directories, layer, rho, sigma and z, in which the model is running in one of four coordinate configurations. To use this notebook it is assumed you have run each of those experiments in place and have kept the output generated in default configuration.
CAVEAT
Step1: Accessing MOM6 output
Let's open some MOM6 output files and see what variables we diagnosed. We'll use scipy.io.netcdf_file but you could also use the netCDF4 package and netCDF4.Dataset with the same syntax. Both functions return objects with members (functions or dictionaries) such as .variables
Step2: The above output is the result of this diag_table
Step3: We actually only asked for "e", "h", "u", "v", "salt" and "temp" in the diag_table. The other variables (such as "xh", "yq", etc) are all 1-dimensional "axis" variables associated with netcdf dimensions. This is a consequence of the CF convention used in MOM6 output.
Coordinates to use for plotting
Because MOM6 uses curvilinear coordinates in the horizontal, the axis variables for the horizontal dimensions are not always useful.
This particular experiment is in Cartesian coordinates so we can use the variables "xh" for the h-point locations and "xq" for the u-points. In this instance we can read the 1D axis variables to use as coordinates
Step4: In addition to output requested in the diag_table, MOM6 normally writes a files with static coordinates and metrics to ocean_geometry.nc
Step5: Here, you'll see some 2D coordinate variables such as geolon and geolonb that facilitates plotting when the coordinates are curvilinear. We can check that the data is Cartesian and matches the CF dimensions-variable we read earlier with
Step6: The above plot just confirms that we have the same values in numerous forms of the horizontal coordinates (for this experiment). In general, to make plan view plots for a 3D configuration you will always want to use the 2D coordinate data such as that in ocean_geometry.nc but here we will be making vertical section plots which only require the 1D form of horizontal coordinates.
Reading bottom depth from ocean_geometry.nc
Now let's plot the topography, which is contained in the variable D of ocean_geometry.nc.
Step7: "Depth" is a positive quantity for the ocean; the bottom is at height $z=-D(x,y)$. The topography visualized for this experiment is a flat shallow shelf on the left, a flat deep ocean on the right, and a linear slope in between.
The "flow downslope" example is ostensibly a 2D configuration in x-z. One would expect the y-dimension to be equal to 1 but it is instead equal to 4. Each of the 4 j-slices of an array will contain the exact same data. This is because a dimension in MOM6 can not be reduced to below the width of a parallelization halo width, which is typically equal to 4 or more.
This quickly illustrates that the model state is identical along the j-axis
Step8: So from here on, we will use j=0 in all plots.
Exploring vertically distributed model output
Now let's look at some model data in multiple coordinate modes. We opened the output files above. The python variable layer_file is a handle to the netcdf file layer/prog.nc which is output using the traditional isopycnal (or stacked shallow water) mode of MOM6. The other models in this experiment are all ALE-mode emulating the z*-coordinate (python variable z_file for z/prog.nc), terrain-following sigma-coordinate (sigma_file for sigma/prog.nc), and continuous isopycnal coordinate (rho_file for rho/prog.nc).
The diagnosed variables in each of these modes were the same. However, some axis data changes meaning. For example, the vertical coordinate in layer mode is a "target density"
Step9: When the model is in ALE mode emulating a z* coordinate, then the vertical coordinate is height (although we report notional depth to aid ferret with plotting)
Step10: Let's look at salinity in the first record written by the model in each of these four coordinates. We'll plot the raw data without coordinates, i.e. in index space.
Step11: There is no topography apparent in the plots and the salinity structure is hard to make sense of! In layer mode there is not even any horizontal structure. This is because in layer mode density is homogeneous along a layer whereas in ALE mode density (in this case salinity) is allowed to vary along layers.
Plotting output from any model in index-space ignores the coordinates which determines where the data is physically located. The apparent absence of topography is a symptom of this. In MOM6, layers always contain data but layers have variable thickness which can even vanish. This is what thickness looks like for the above salinity panels
Step12: The simplest distribution to explain is the terrain-following $\sigma$-coordinate in which the layers are uniformly distributed in each column to fit the topography. Thus $h$ has no vertical structure in panel c.
For the other panels it is important to remember that the k-index increase downward in the model; k=1 (fortran convention) or k=0 (python convention) is at the surface. Panel d has a region of uniform resolution (~100m) at low k which transitions to vanished thickness (~0) at some value of k in each column. You can sort of see the topography (blue region) upside down. Panels a and b look similar and have a lot of vanished regions with even surface layers being vanished.
Before making sense of these thickness distributions, let's check that the total thickness in each column looks like the topography
Step13: We see that although the thickness distributions are quite different between each model, the total thickness of each column reflects the topography we plotted earlier based on the ocean_geometry.nc file.
The layer thickness almost provides enough information to calculate the actual position of quantities with which we could then make plots. The missing information is the absolute position of the top or bottom.
Interfaces delineate, or bound, layers. The thickness of a layer, $h_{i,j,k}$, is related to the absolute position of the interface above, $z_{i,j,k-\frac{1}{2}}$, and below, $z_{i,j,k+\frac{1}{2}}$ by
$$
h_{i,j,k} = z_{i,j,k-\frac{1}{2}} - z_{i,j,k+\frac{1}{2}} \;\;\; \forall \; k=1,2,\ldots,nk
$$
where, by convention, integer-valued indices indicate layer-centered quantities and half-valued indices indicate interface-located quantities. Interface and layer quantities are thus staggered in the vertical.
The diagnostic variable e is the absolute vertical position of model interfaces. Because half-integer indices are not meaningful in most computer languages, there is an offset convention as follows.
In FORTRAN
Step14: You can now begin to discern the nature of the coordinates in each mode. If we zoom in on the shelf-break region we will be able to more clearly see what each coordinate mode is doing.
Step15: The $\sigma$-coordinate (c) always has nk layers with finite thickness (uniformly distributed) in each column. The $z$*-coordinate model (d) seems to have a variable number of layers but in fact the layers thicknesses vanish wherever the layer would be below the topography. The isopycnal coordinates, both in layer-mode (a) and ALE-mode (b), have on one thick layer on the shelf and fewer finite-thickness layers off-shelf than the other models. In these cases, there are vanished layers at both the top and bottom of the column.
So now we know the location of the interfaces we can presume the center of the layer is in between at $(e[k,j,i]+e[k+1,j,i])/2$. Let's use contourf to shade salinity at the layer centers. Note how we have to create a 2D "x" coordinate to pass to contourf since contourf expects both coordinate arrays to be 2D if either one of them is 2D. We do this by using an expression x=xh+0*z which uses numpy's "broadcasting" feature (see http
Step16: The above looks closer to what one imagines things look like but there are some very big problems with the above plots.
1) The apparent topography (white regions at bottom of plots) is quite different between the panels. This happens because contourf only shades between cell centers and so only half of the edge cells are plotted. contourf does not extrapolate beyond the coordinate provided for the data location. We lose a half cell of shading at the top and bottom of the column and also at the left and right of the plot. In the isopycnal-like coordinates, the bottom layer is thick and so we lose a lot.
2) The shading within layers and between columns is interpolated which is introducing interior features and gradients which should not be there. The overlaid interface positions make this apparent for the layer mode for which salinity is absolutely constant along a layer (recall first plot of salinity).
To get the plot we want we essentially need to insert a layer of extra data at the top and bottom of the model and an extra column at both ends. To illustrate, let's see how one might do this first for the "layer" output. We'll define a little function to help
Step17: So not the data appears to be plotted from the surface down to the topography. Using this approach for all the coordinates
Step18: A remaining issue is why does there appear to be a salinity inversion in the z*-coordinate model. Technically there is (see salinity plots in i,k-space) but the layers are vanished so we should not be seeing them. This is because contourf is interpolating between thick and vanished layers. The bottom-line is that contourf is assuming the data is smooth and interpreting data inconsistent with the model formulation which considers the data to be piecewise.
Use pcolormesh() to visualize
The most consistent tool for visualizing piecewise data is pcolormesh. An important distinction between contourf and pcolormesh is that the latter takes the coordinates of the corners of cells when shading cell-centered values. Recall we loaded the coordinate yq and inserted an extra value on the left edge - we'll use that for the horizontal coordinate of cell edges. We will horizontally average to get an approximate position for the cell corner heights
Step20: In the above plots, the vanished layers are reasonably hidden and the overall shading for salinity more similar between the plots.
The above method treats each cell as a trapezoid with corners shared between neighboring cells. It does not preserve the mean depth of the cell boundaries. To give more faithful rendering of the what the model can do, a tool is provided in MOM6-examples/tools/analysis/m6toolbox that returns arguments that can be passed straight to pcolormesh consistent with various interpretations of the grid structure, e.g. pcm (piecewise constant thicknesses), plm (piecewise linear), linear (as described above). | Python Code:
%pylab inline
import scipy.io.netcdf
Explanation: This "flow downslope" example involves four sub-directories, layer, rho, sigma and z, in which the model is running in one of four coordinate configurations. To use this notebook it is assumed you have run each of those experiments in place and have kept the output generated in default configuration.
CAVEAT: This is a tutorial on how to make vertical section plots which also illustrates some poor ways to plot data for comparison. Read through to the end.
We will use matplotlib. The line %pylab inline loads all the necessary packages, including numpy and causes images to appear in the page. We will use scipy's netcdf package to read MOM6 output. Note that this only works if MOM6 is compiled with NETCDF=3.
To see this notebook with figures see https://gist.github.com/adcroft/dde8d3fafd77d0caaa5613e64f1d7eff.
End of explanation
layer_file = scipy.io.netcdf_file('layer/prog.nc')
rho_file = scipy.io.netcdf_file('rho/prog.nc')
sigma_file = scipy.io.netcdf_file('sigma/prog.nc')
z_file = scipy.io.netcdf_file('z/prog.nc')
for v in layer_file.variables:
print(v,layer_file.variables[v].shape,layer_file.variables[v].long_name)
Explanation: Accessing MOM6 output
Let's open some MOM6 output files and see what variables we diagnosed. We'll use scipy.io.netcdf_file but you could also use the netCDF4 package and netCDF4.Dataset with the same syntax. Both functions return objects with members (functions or dictionaries) such as .variables:
End of explanation
!head -15 layer/diag_table
Explanation: The above output is the result of this diag_table:
End of explanation
# Use the CF dimension-variable as the horizontal coordinate
xh = layer_file.variables['xh'][:] # This is the coordinate of the cell centers (h-points in 1D)
xq = layer_file.variables['xq'][:] # This is the coordinate of the cell corners (u-points in 1D)
xq = numpy.concatenate(([2*xq[0]-xq[1]],xq)) # Inserts the left-most edge of the domain into the u-point coordinates
Explanation: We actually only asked for "e", "h", "u", "v", "salt" and "temp" in the diag_table. The other variables (such as "xh", "yq", etc) are all 1-dimensional "axis" variables associated with netcdf dimensions. This is a consequence of the CF convention used in MOM6 output.
Coordinates to use for plotting
Because MOM6 uses curvilinear coordinates in the horizontal, the axis variables for the horizontal dimensions are not always useful.
This particular experiment is in Cartesian coordinates so we can use the variables "xh" for the h-point locations and "xq" for the u-points. In this instance we can read the 1D axis variables to use as coordinates:
End of explanation
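To see the staggering explicitly (a small check using the arrays just read):
# xq now has one more point than xh: cell edges bracket the cell centers
print('cell centers (xh):', xh.shape[0], ' cell edges (xq, with the left edge added):', xq.shape[0])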
geom_file = scipy.io.netcdf_file('layer/ocean_geometry.nc')
for v in geom_file.variables:
print(v,geom_file.variables[v].shape,geom_file.variables[v].long_name)
Explanation: In addition to output requested in the diag_table, MOM6 normally writes a file with static coordinates and metrics to ocean_geometry.nc:
End of explanation
plt.plot( xh, '.', label='xh (CF 1D)');
plt.plot( geom_file.variables['lonh'][:].T, '.', label='lonh (1D)');
plt.plot( geom_file.variables['geolon'][:].T, '.', label='geolon (2D)');
plt.legend(loc='lower right');
Explanation: Here, you'll see some 2D coordinate variables such as geolon and geolonb that facilitate plotting when the coordinates are curvilinear. We can check that the data is Cartesian and matches the CF dimension-variables we read earlier with
End of explanation
plt.plot( geom_file.variables['D'][0,:]); plt.title('Depth');
Explanation: The above plot just confirms that we have the same values in numerous forms of the horizontal coordinates (for this experiment). In general, to make plan view plots for a 3D configuration you will always want to use the 2D coordinate data such as that in ocean_geometry.nc but here we will be making vertical section plots which only require the 1D form of horizontal coordinates.
Reading bottom depth from ocean_geometry.nc
Now let's plot the topography, which is contained in the variable D of ocean_geometry.nc.
End of explanation
print("mean square |h(j=0)-h(j=3)|^2 =",
(( layer_file.variables['h'][-1,:,0,:]-layer_file.variables['h'][-1,:,3,:] )**2).sum() )
Explanation: "Depth" is a positive quantity for the ocean; the bottom is at height $z=-D(x,y)$. The topography visualized for this experiment is a flat shallow shelf on the left, a flat deep ocean on the right, and a linear slope in between.
The "flow downslope" example is ostensibly a 2D configuration in x-z. One would expect the y-dimension to be equal to 1 but it is instead equal to 4. Each of the 4 j-slices of an array will contain the exact same data. This is because a dimension in MOM6 can not be reduced to below the width of a parallelization halo width, which is typically equal to 4 or more.
This quickly illustrates that the model state is identical along the j-axis:
End of explanation
print( layer_file.variables['zl'].long_name, layer_file.variables['zl'].units, layer_file.variables['zl'][:] )
Explanation: So from here on, we will use j=0 in all plots.
Exploring vertically distributed model output
Now let's look at some model data in multiple coordinate modes. We opened the output files above. The python variable layer_file is a handle to the netcdf file layer/prog.nc which is output using the traditional isopycnal (or stacked shallow water) mode of MOM6. The other models in this experiment are all ALE-mode emulating the z*-coordinate (python variable z_file for z/prog.nc), terrain-following sigma-coordinate (sigma_file for sigma/prog.nc), and continuous isopycnal coordinate (rho_file for rho/prog.nc).
The diagnosed variables in each of these modes were the same. However, some axis data changes meaning. For example, the vertical coordinate in layer mode is a "target density":
End of explanation
print( z_file.variables['zl'].long_name, z_file.variables['zl'].units, z_file.variables['zl'][:] )
Explanation: When the model is in ALE mode emulating a z* coordinate, then the vertical coordinate is height (although we report notional depth to aid ferret with plotting):
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(221);
plt.pcolormesh( layer_file.variables['salt'][0,:,0,:] ); plt.colorbar(); plt.title('a) Layer mode S');
plt.subplot(222);
plt.pcolormesh( rho_file.variables['salt'][0,:,0,:] ); plt.colorbar(); plt.title(r'b) $\rho$-coordinate S');
plt.subplot(223);
plt.pcolormesh( sigma_file.variables['salt'][0,:,0,:] ); plt.colorbar(); plt.title(r'c) $\sigma$-coordinate S');
plt.subplot(224);
plt.pcolormesh( z_file.variables['salt'][0,:,0,:] ); plt.colorbar(); plt.title('d) z*-coordinate S');
Explanation: Let's look at salinity in the first record written by the model in each of these four coordinates. We'll plot the raw data without coordinates, i.e. in index space.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(221);
plt.pcolormesh( layer_file.variables['h'][0,:,0,:] ); plt.colorbar(); plt.title('a) Layer mode h');
plt.subplot(222);
plt.pcolormesh( rho_file.variables['h'][0,:,0,:] ); plt.colorbar(); plt.title(r'b) $\rho$-coordinate h');
plt.subplot(223);
plt.pcolormesh( sigma_file.variables['h'][0,:,0,:] ); plt.colorbar(); plt.title(r'c) $\sigma$-coordinate h');
plt.subplot(224);
plt.pcolormesh( z_file.variables['h'][0,:,0,:] ); plt.colorbar(); plt.title('d) z*-coordinate h');
Explanation: There is no topography apparent in the plots and the salinity structure is hard to make sense of! In layer mode there is not even any horizontal structure. This is because in layer mode density is homogeneous along a layer whereas in ALE mode density (in this case salinity) is allowed to vary along layers.
Plotting output from any model in index-space ignores the coordinates which determine where the data is physically located. The apparent absence of topography is a symptom of this. In MOM6, layers always contain data but layers have variable thickness which can even vanish. This is what thickness looks like for the above salinity panels:
End of explanation
plt.plot( layer_file.variables['h'][0,:,0,:].sum(axis=0), label='Layer');
plt.plot( rho_file.variables['h'][0,:,0,:].sum(axis=0), label=r'$\rho$');
plt.plot( sigma_file.variables['h'][0,:,0,:].sum(axis=0), label=r'$\sigma$');
plt.plot( z_file.variables['h'][0,:,0,:].sum(axis=0), label='z*');
plt.legend(loc='lower right'); plt.title('Column total thickness');
Explanation: The simplest distribution to explain is the terrain-following $\sigma$-coordinate in which the layers are uniformly distributed in each column to fit the topography. Thus $h$ has no vertical structure in panel c.
For the other panels it is important to remember that the k-index increases downward in the model; k=1 (fortran convention) or k=0 (python convention) is at the surface. Panel d has a region of uniform resolution (~100m) at low k which transitions to vanished thickness (~0) at some value of k in each column. You can sort of see the topography (blue region) upside down. Panels a and b look similar and have a lot of vanished regions with even surface layers being vanished.
Before making sense of these thickness distributions, let's check that the total thickness in each column looks like the topography:
End of explanation
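A quick way to see the vanished layers numerically (an illustrative check using the same h diagnostic):
# Vanished layers appear as (near-)zero thicknesses; the z* case shows this clearly
print('min/max layer thickness in z* mode:', z_file.variables['h'][0,:,0,:].min(), z_file.variables['h'][0,:,0,:].max())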
plt.figure(figsize=(12,6))
plt.subplot(221); plt.plot( layer_file.variables['e'][0,:,0,:].T); plt.title('a) Layer mode e');
plt.subplot(222); plt.plot( rho_file.variables['e'][0,:,0,:].T); plt.title(r'b) $\rho$-coordinate e');
plt.subplot(223); plt.plot( sigma_file.variables['e'][0,:,0,:].T); plt.title(r'c) $\sigma$-coordinate e');
plt.subplot(224); plt.plot( z_file.variables['e'][0,:,0,:].T); plt.title('d) z*-coordinate e');
Explanation: We see that although the thickness distributions are quite different between each model, the total thickness of each column reflects the topography we plotted earlier based on the ocean_geometry.nc file.
The layer thickness almost provides enough information to calculate the actual position of quantities with which we could then make plots. The missing information is the absolute position of the top or bottom.
Interfaces delineate, or bound, layers. The thickness of a layer, $h_{i,j,k}$, is related to the absolute position of the interface above, $z_{i,j,k-\frac{1}{2}}$, and below, $z_{i,j,k+\frac{1}{2}}$ by
$$
h_{i,j,k} = z_{i,j,k-\frac{1}{2}} - z_{i,j,k+\frac{1}{2}} \;\;\; \forall \; k=1,2,\ldots,nk
$$
where, by convention, integer-valued indices indicate layer-centered quantities and half-valued indices indicate interface-located quantities. Interface and layer quantities are thus staggered in the vertical.
The diagnostic variable e is the absolute vertical position of model interfaces. Because half-integer indices are not meaningful in most computer languages, there is an offset convention as follows.
In FORTRAN:
$$
h(i,j,k) = e(i,j,k) - e(i,j,k+1) \;\;\; \forall \; k=1,2,\ldots,nk
$$
where array indices normally start at 1.
In Python:
$$
h[k,j,i] = e[k,j,i] - e[k+1,j,i] \;\;\; \forall \; k=0,1,\ldots,nk-1
$$
where array indices start at 0. We have also indicated the [k,j,i] order of indices that arises from reading data from a model-generated netcdf file.
Let's look at where the interfaces are by plotting a line for each interface (note the use of the transpose .T operator to get these lines plotted in the right direction):
End of explanation
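As a quick numerical check of the h/e relationship above (an illustrative sketch reusing the layer_file handle and the [k,j,i] index order just described):
# h[k] should match e[k]-e[k+1] up to the precision of the saved diagnostics
h = layer_file.variables['h'][0,:,0,:]
e = layer_file.variables['e'][0,:,0,:]
print('max |h - (e[:-1]-e[1:])| =', numpy.abs(h - (e[:-1,:]-e[1:,:])).max())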
plt.figure(figsize=(12,6))
xl=5,12; yl=-1000,10
plt.subplot(221); plt.plot( layer_file.variables['e'][0,:,0,:].T); plt.xlim(xl); plt.ylim(yl); plt.title('a) Layer mode e');
plt.subplot(222); plt.plot( rho_file.variables['e'][0,:,0,:].T); plt.xlim(xl); plt.ylim(yl); plt.title(r'b) $\rho$-coordinate e');
plt.subplot(223); plt.plot( sigma_file.variables['e'][0,:,0,:].T); plt.xlim(xl); plt.ylim(yl); plt.title(r'c) $\sigma$-coordinate e');
plt.subplot(224); plt.plot( z_file.variables['e'][0,:,0,:].T); plt.xlim(xl); plt.ylim(yl); plt.title('d) z*-coordinate e');
Explanation: You can now begin to discern the nature of the coordinates in each mode. If we zoom in on the shelf-break region we will be able to more clearly see what each coordinate mode is doing.
End of explanation
plt.figure(figsize=(12,6))
xxl=50,120 # This is the zoomed-in region around the shelf break in model coordinates
plt.subplot(221)
z = ( layer_file.variables['e'][0,:-1,0,:] + layer_file.variables['e'][0,1:,0,:] ) / 2
x = xh + 0*z
plt.contourf( x, z, layer_file.variables['salt'][0,:,0,:]); plt.xlim(xxl); plt.ylim(yl); plt.title('a) Layer mode S');
plt.plot( xh, layer_file.variables['e'][0,:,0,:].T, 'k');
plt.subplot(222)
z = ( rho_file.variables['e'][0,:-1,0,:] + rho_file.variables['e'][0,1:,0,:] ) / 2
plt.contourf( x, z, rho_file.variables['salt'][0,:,0,:]); plt.xlim(xxl); plt.ylim(yl); plt.title(r'b) $\rho$ coordinate S');
plt.plot( xh, rho_file.variables['e'][0,:,0,:].T, 'k');
plt.subplot(223)
z = ( sigma_file.variables['e'][0,:-1,0,:] + sigma_file.variables['e'][0,1:,0,:] ) / 2
plt.contourf( x, z, sigma_file.variables['salt'][0,:,0,:]); plt.xlim(xxl); plt.ylim(yl); plt.title(r'c) $\sigma$ coordinate S');
plt.plot( xh, sigma_file.variables['e'][0,:,0,:].T, 'k');
plt.subplot(224)
z = ( z_file.variables['e'][0,:-1,0,:] + z_file.variables['e'][0,1:,0,:] ) / 2
plt.contourf( x, z, z_file.variables['salt'][0,:,0,:]); plt.xlim(xxl); plt.ylim(yl); plt.title('d) z* coordinate S');
plt.plot( xh, z_file.variables['e'][0,:,0,:].T, 'k');
Explanation: The $\sigma$-coordinate (c) always has nk layers with finite thickness (uniformly distributed) in each column. The $z$*-coordinate model (d) seems to have a variable number of layers but in fact the layers thicknesses vanish wherever the layer would be below the topography. The isopycnal coordinates, both in layer-mode (a) and ALE-mode (b), have on one thick layer on the shelf and fewer finite-thickness layers off-shelf than the other models. In these cases, there are vanished layers at both the top and bottom of the column.
So now we know the location of the interfaces we can presume the center of the layer is in between at $(e[k,j,i]+e[k+1,j,i])/2$. Let's use contourf to shade salinity at the layer centers. Note how we have to create a 2D "x" coordinate to pass to contourf since contourf expects both coordinate arrays to be 2D if either one of them is 2D. We do this by using an expression x=xh+0*z which uses numpy's "broadcasting" feature (see http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html for explanation of rules). We will also plot the interface positions on top of the shaded contours:
End of explanation
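To make the broadcasting explicit (a small check using the xh and z arrays from the panels above):
# xh is 1D and z is 2D; adding them replicates xh along the vertical axis
print('xh:', xh.shape, ' z:', z.shape, ' xh + 0*z:', (xh + 0*z).shape)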
def fix_contourf(nc_object, record, xh, variable='salt', clim=None, xl=None, yl=None, plot_grid=True):
e = nc_object.variables['e'][record,:,0,:] # Interface positions
z = ( e[:-1,:] + e[1:,:] ) / 2 # Layer centers
S = nc_object.variables[variable][record,:,0,:] # Model output
z = numpy.vstack( ( e[0,:], z, e[-1,:] ) ) # Add a layer at top and bottom
S = numpy.vstack( ( S[0,:], S, S[-1,:] ) ) # Add layer data from top and bottom
x = xh + 0*z
plt.contourf( x, z, S );
if clim is not None: plt.clim(clim);
if plot_grid: plt.plot( xh, e.T, 'k');
if xl is not None: plt.xlim(xl);
if yl is not None: plt.ylim(yl);
plt.figure(figsize=(12,3))
# Same plot as above
plt.subplot(121)
z = ( layer_file.variables['e'][0,:-1,0,:] + layer_file.variables['e'][0,1:,0,:] ) / 2
x = xh + 0*z
plt.contourf( x, z, layer_file.variables['salt'][0,:,0,:]); plt.xlim(xxl); plt.ylim(yl);
plt.title('a) Layer mode S, as above');
plt.plot( xh, layer_file.variables['e'][0,:,0,:].T, 'k');
plt.clim(34,35)
# Now with an extra layer above and below
plt.subplot(122)
fix_contourf(layer_file, 0, xh, xl=xxl, yl=yl, clim=(34,35)); plt.title('b) Layer mode S, plotted with extra layers');
Explanation: The above looks closer to what one imagines things look like but there are some very big problems with the above plots.
1) The apparent topography (white regions at bottom of plots) is quite different between the panels. This happens because contourf only shades between cell centers and so only half of the edge cells are plotted. contourf does not extrapolate beyond the coordinate provided for the data location. We lose a half cell of shading at the top and bottom of the column and also at the left and right of the plot. In the isopycnal-like coordinates, the bottom layer is thick and so we lose a lot.
2) The shading within layers and between columns is interpolated which is introducing interior features and gradients which should not be there. The overlaid interface positions make this apparent for the layer mode for which salinity is absolutely constant along a layer (recall first plot of salinity).
To get the plot we want we essentially need to insert a layer of extra data at the top and bottom of the model and an extra column at both ends. To illustrate, let's see how one might do this first for the "layer" output. We'll define a little function to help:
End of explanation
plt.figure(figsize=(12,6))
xxl=50,120 # This is the zoomed-in region around the shelf break in model coordinates
plt.subplot(221); fix_contourf(layer_file, 0, xh, xl=xxl, yl=yl, clim=(34,35)); plt.title('a) Layer mode S')
plt.subplot(222); fix_contourf(rho_file, 0, xh, xl=xxl, yl=yl, clim=(34,35)); plt.title(r'b) $\rho$ coordinate S');
plt.subplot(223); fix_contourf(sigma_file, 0, xh, xl=xxl, yl=yl, clim=(34,35)); plt.title(r'c) $\sigma$ coordinate S');
plt.subplot(224); fix_contourf(z_file, 0, xh, xl=xxl, yl=yl, clim=(34,35)); plt.title('d) z* coordinate S');
Explanation: So now the data appears to be plotted from the surface down to the topography. Using this approach for all the coordinates:
End of explanation
def plot_with_pcolormesh(nc_object, record, xq, variable='salt', clim=None, xl=None, yl=None, plot_grid=True):
e = nc_object.variables['e'][record,:,0,:] # Interface positions for h-columns
ea = numpy.vstack( ( e[:,0].T, (e[:,:-1].T+e[:,1:].T)/2, e[:,-1].T ) ).T # Interface positions averaged to u-columns
plt.pcolormesh( xq+0*ea, ea, nc_object.variables[variable][record,:,0,:] )
if clim is not None: plt.clim(clim);
if plot_grid: plt.plot( xq, ea.T, 'k');
if xl is not None: plt.xlim(xl);
if yl is not None: plt.ylim(yl);
plt.figure(figsize=(12,6))
xxl=50,120 # This is the zoomed-in region around the shelf break in model coordinates
plt.subplot(221); plot_with_pcolormesh(layer_file, 0, xq, xl=xxl, yl=yl, clim=(34,35)); plt.title('a) Layer mode S')
plt.subplot(222); plot_with_pcolormesh(rho_file, 0, xq, xl=xxl, yl=yl, clim=(34,35)); plt.title(r'b) $\rho$ coordinate S');
plt.subplot(223); plot_with_pcolormesh(sigma_file, 0, xq, xl=xxl, yl=yl, clim=(34,35)); plt.title(r'c) $\sigma$ coordinate S');
plt.subplot(224); plot_with_pcolormesh(z_file, 0, xq, xl=xxl, yl=yl, clim=(34,35)); plt.title('d) z* coordinate S');
Explanation: A remaining issue is why there appears to be a salinity inversion in the z*-coordinate model. Technically there is one (see the salinity plots in i,k-space), but the layers are vanished so we should not be seeing them. This is because contourf interpolates between thick and vanished layers. The bottom line is that contourf assumes the data are smooth, interpreting them in a way that is inconsistent with the model formulation, which considers the data to be piecewise.
Use pcolormesh() to visualize
The most consistent tool for visualizing piecewise data is pcolormesh. An important distinction between contourf and pcolormesh is that the latter takes the coordinates of the corners of cells when shading cell-centered values. Recall we loaded the coordinate xq and inserted an extra value on the left edge - we'll use that for the horizontal coordinate of cell edges. We will horizontally average to get an approximate position for the cell corner heights:
End of explanation
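The corner bookkeeping can be checked directly (a small sketch that repeats the averaging from the helper above for the layer case; pcolormesh expects corner coordinate arrays one element larger than the colour array in each direction):
e = layer_file.variables['e'][0,:,0,:]
ea = numpy.vstack( ( e[:,0].T, (e[:,:-1].T+e[:,1:].T)/2, e[:,-1].T ) ).T
print('salt:', layer_file.variables['salt'][0,:,0,:].shape, ' corner heights:', ea.shape, ' xq:', xq.shape)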
# These next two lines add the MOM6-examples/tools/analysis/ directory to the search path for python packages
import sys
sys.path.append('../../tools/analysis/')
# m6toolbox is a python package that has a function that helps visualize vertical sections
import m6toolbox
# Define a function to plot a section
def plot_section(file_handle, record, xq, variable='salt', clim=None, xl=None, yl=None, plot_grid=True, rep='pcm'):
    """Plots a section by reading vertical grid and scalar variable and super-sampling
    both in order to plot vertical and horizontal reconstructions.
    Optional arguments have defaults for plotting salinity and overlaying the grid."""
e = file_handle.variables['e'][record,:,0,:] # Vertical grid positions
s = file_handle.variables[variable][record,:,0,:] # Scalar field to color
x,z,q = m6toolbox.section2quadmesh(xq, e, s, representation=rep) # This yields three areas at twice the model resolution
plt.pcolormesh(x, z, q);
if clim is not None: plt.clim(clim)
if plot_grid: plt.plot(x, z.T, 'k', hold=True);
if xl is not None: plt.xlim(xl)
if yl is not None: plt.ylim(yl)
plt.figure(figsize=(12,6))
plt.subplot(2,2,1); plot_section(layer_file, 0, xq, xl=xxl, yl=yl, clim=(34,35), rep='plm'); plt.title('a) Layer S');
plt.subplot(2,2,2); plot_section(rho_file, 0, xq, xl=xxl, yl=yl, clim=(34,35), rep='plm'); plt.title(r'b) $\rho$-coordinate S');
plt.subplot(2,2,3); plot_section(sigma_file, 0, xq, xl=xxl, yl=yl, clim=(34,35), rep='linear'); plt.title(r'c) $\sigma$-coordinate S');
plt.subplot(2,2,4); plot_section(z_file, 0, xq, xl=xxl, yl=yl, clim=(34,35), rep='pcm'); plt.title('d) z*-coordinate S');
Explanation: In the above plots, the vanished layers are reasonably hidden and the overall shading for salinity more similar between the plots.
The above method treats each cell as a trapezoid with corners shared between neighboring cells. It does not preserve the mean depth of the cell boundaries. To give a more faithful rendering of what the model can do, a tool is provided in MOM6-examples/tools/analysis/m6toolbox that returns arguments that can be passed straight to pcolormesh, consistent with various interpretations of the grid structure, e.g. pcm (piecewise constant thicknesses), plm (piecewise linear), linear (as described above).
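To see what these representations imply for the same data, one option is to redraw a single section with each choice. The short sketch below is only an illustration; it reuses the plot_section helper defined above, and the choice of the z* file and the zoom window is arbitrary:
plt.figure(figsize=(12,6))
for n, rep in enumerate(['pcm', 'plm', 'linear']):
    plt.subplot(2,2,n+1)
    plot_section(z_file, 0, xq, xl=xxl, yl=yl, clim=(34,35), rep=rep)
    plt.title('z* coordinate S, rep=%s' % rep)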
End of explanation |
10,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook, we use the Bluemix CLI tools to create a new IBM Analytics Engine instance that is configured to use IBM Cloud Object Storage (IBM COS).
Load utility library and set notebook width
To prevent this notebook from getting too cluttered, we use some python utilities. We load them below.
Step1: Let's set this notebook to use the full width of the browser using the utilities
Step2: Read Cloud Foundry endpoint properties
We can read some variables saved when we ran the notebook examples/CLI/CLI_Setup.ipynb to configure our chosen api, org and space
Step3: Save IBM Cloud Object Storage endpoint properties
Create a file ../secrets/cos_s3_endpoint.json with your COS credentials. The file format should be
Step4: Upload IAE bootstrap file to COS
Step5: Provision IAE instance
Before we can provision IAE, we need to login to Bluemix using the Bluemix CLI
Step6: There are a few ways to configure IAE to use IBM COS. Let's automate the process with a custom script.
NOTE
Step7: We can then attempt to create an IBM Analytics Engine Instance using the custom script file that we created in the previous step.
Step8: Note the output from above. If all went ok, the CLI should suggest running cf service myiaeinstance to check the provisioning status. Let's do that now.
NOTE
Step9: When the status is
Step10: Verify COS was successfully configured
Step12: Create hive table and query it
SSH onto the cluster (see ../secrets/iae_service_key.json for endpoints and credentials)
ssh clsadmin@<master node host from the service key>
beeline -u 'jdbc
Step13: Submit spark job with Livy API
Execute spark job
Step14: Ensure the bucket_name variable is still set.
Step15: Note that we saved the job id in the previous step. Let's use that to get the job state.
NOTE
Step16: Take a look at the spark job log using the job id
Step17: Debugging errors
This is the process I followed to debug an issue with my submitted job.
If there is an error we can debug by looking for the yarn application id in the log output, e.g.
17/09/23 06
Step18: Let's keep things tidy and remove our job output. Note that if you don't remove it, next time you run the spark job it will fail because the output file already exists.
Step19: Debugging
TODO | Python Code:
import sys
sys.path.append("./modules")
import iae_examples
Explanation: Introduction
In this notebook, we use the Bluemix CLI tools to create a new IBM Analytics Engine instance that is configured to use IBM Cloud Object Storage (IBM COS).
Load utility library and set notebook width
To prevent this notebook from getting too cluttered, we use some python utilities. We load them below.
End of explanation
iae_examples.set_notebook_full_width()
Explanation: Let's set this notebook to use the full width of the browser using the utilities
End of explanation
(CF_API, CF_ORG, CF_SPACE) = iae_examples.read_cf_target_endpoint_details('../secrets/cf_target_endpoint.json')
Explanation: Read Cloud Foundry endpoint properties
We can read some variables saved when we ran the notebook examples/CLI/CLI_Setup.ipynb to configure our chosen api, org and space
End of explanation
(S3_ACCESS_KEY, S3_PRIVATE_ENDPOINT, S3_PUBLIC_ENDPOINT, S3_SECRET_KEY) = \
iae_examples.read_cos_endpoint_details('../secrets/cos_s3_endpoint.json')
Explanation: Save IBM Cloud Object Storage endpoint properties
Create a file ../secrets/cos_s3_endpoint.json with your COS credentials. The file format should be:
{
"S3_ACCESS_KEY": "<AccessKey-changeme>",
"S3_PRIVATE_ENDPOINT": "<Private-EndPoint-changeme>",
"S3_PUBLIC_ENDPOINT": "<Public-EndPoint-changeme>",
"S3_SECRET_KEY": "<SecretKey-changeme>"
}
Now let's load the cos file into some variables that we will use later
End of explanation
url = 'https://raw.githubusercontent.com/snowch/IBM_Analytics_Engine_Examples/master/scripts/COS_S3.sh'
filename = 'COS_S3_bootstrap.sh'
bucket_name = 'temp-bucket'
iae_examples.save_url_to_cos(url, bucket_name, filename, S3_ACCESS_KEY, S3_SECRET_KEY, S3_PUBLIC_ENDPOINT)
Explanation: Upload IAE bootstrap file to COS
End of explanation
! bx login --apikey @../secrets/apiKeyPersonal.json -a {CF_API} -o {CF_ORG} -s {CF_SPACE}
Explanation: Provision IAE instance
Before we can provision IAE, we need to login to Bluemix using the Bluemix CLI
End of explanation
import json
custom_script = {
"num_compute_nodes": 1,
"hardware_config": "default",
"software_package": "ae-1.0-hadoop-spark",
"customization": [{
"name": "action1",
"type": "bootstrap",
"script": {
"source_type": "CosS3",
"source_props": {
"auth_endpoint": S3_PRIVATE_ENDPOINT,
"access_key_id": S3_ACCESS_KEY,
"secret_access_key": S3_SECRET_KEY
},
"script_path": bucket_name + "/COS_S3_bootstrap.sh"
},
"script_params": [S3_ACCESS_KEY, S3_PRIVATE_ENDPOINT, S3_SECRET_KEY]
}]
}
# write the script to a file in the local directory where we can access it in the next step using the Bluemix CLI
with open('../secrets/custom_script.json', 'w') as f:
f.write(json.dumps(custom_script))
Explanation: There are a few ways to configure IAE to use IBM COS. Let's automate the process with a custom script.
NOTE: These examples prefer automation to manual approaches for configuration. One key benefit of automation is that it supports creating environments in a repeatable and testable way.
End of explanation
! bx cf create-service IBMAnalyticsEngine lite 'myiaeinstance' -c ../secrets/custom_script.json
Explanation: We can then attempt to create an IBM Analytics Engine Instance using the custom script file that we created in the previous step.
End of explanation
! bx cf service myiaeinstance
Explanation: Note the output from above. If all went ok, the CLI should suggest running cf service myiaeinstance to check the provisioning status. Let's do that now.
NOTE: If there is an error output by the above step, jump to the section below on debugging.
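If you prefer not to re-run the cell by hand, a small polling loop is one option. This is only a sketch: it assumes the bx CLI is on the PATH and that the reported status line contains the words 'in progress' while provisioning is still running.
import time, subprocess
while True:
    status = subprocess.check_output(['bx', 'cf', 'service', 'myiaeinstance']).decode()
    print(status.strip().splitlines()[-1])   # show the last (status) line
    if 'in progress' not in status.lower():
        break                                # provisioning finished (succeeded or failed)
    time.sleep(30)                           # wait before polling again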
End of explanation
! bx cf create-service-key myiaeinstance myiaeinstance_servicekey
! bx cf service-keys myiaeinstance
! bx cf service-key myiaeinstance myiaeinstance_servicekey > ../secrets/iae_service_key.json
# unfortunately, the output of the above command contains some lines of text before the json
# lets remove the first four lines of output and save the raw json
iae_examples.strip_premable_from_service_key('../secrets/iae_service_key.json')
IAE_USER = iae_examples.iae_service_user('../secrets/iae_service_key.json')
IAE_PASSWORD = iae_examples.iae_service_password('../secrets/iae_service_key.json')
IAE_AMBARI_URL = iae_examples.iae_service_endpoint_ambari('../secrets/iae_service_key.json')
IAE_LIVY_URL = iae_examples.iae_service_endpoint_livy('../secrets/iae_service_key.json')
IAE_WEBHDFS_URL = iae_examples.iae_service_endpoint_webhdfs('../secrets/iae_service_key.json')
iae_examples.read_iae_service_keys('../secrets/iae_service_key.json')['cluster']['service_endpoints']['hive_jdbc']
Explanation: When the status is: create succeeded, move on to the next step.
Create service key
Here we create a service key which contains the cluster credentials.
We export the service key information to a file.
We can then read the service key details into python variables so we can use those variables later in this notebook.
End of explanation
# This is broken
# iae_examples.is_s3_access_key_set(IAE_AMBARI_URL, IAE_USER, IAE_PASSWORD, S3_ACCESS_KEY)
Explanation: Verify COS was successfully configured
End of explanation
file_contents = """
from __future__ import print_function
from datetime import datetime
from pyspark.sql import SparkSession
if __name__ == "__main__":
spark = SparkSession.builder.appName("PythonPi").getOrCreate()
output = "Hello World at %s" % (str(datetime.now()))
print(output)
output_rdd = spark.sparkContext.parallelize([output])
output_rdd.coalesce(1, True).saveAsTextFile('s3a://{0}/provision_iae_with_cos_spark_job_output.txt')
spark.stop()
""".format(bucket_name)
bucket_name = 'temp-bucket'
filename = 'PiEx.py'
(S3_ACCESS_KEY, S3_PRIVATE_ENDPOINT, S3_PUBLIC_ENDPOINT, S3_SECRET_KEY) = \
iae_examples.read_cos_endpoint_details('../secrets/cos_s3_endpoint.json')
iae_examples.save_string_to_cos(
file_contents.encode('ascii'), bucket_name, filename, S3_ACCESS_KEY, S3_SECRET_KEY, S3_PUBLIC_ENDPOINT
)
Explanation: Create hive table and query it
SSH onto the cluster (see ../secrets/iae_service_key.json for endpoints and credentials)
ssh clsadmin@<master node host from ../secrets/iae_service_key.json>
beeline -u 'jdbc:hive2://chs-xxxxx-mn001.bi.services.us-south.bluemix.net:8443/;ssl=true;transportMode=http;httpPath=gateway/default/hive' -n clsadmin -p **********
Next from the beeline session, create the hive table:
CREATE EXTERNAL TABLE avro_hive_table
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 's3a://temp-bucket/transactions/'
TBLPROPERTIES (
'avro.schema.literal'='{
"namespace": "transaction.avro",
"type": "record",
"name": "Transaction",
"fields": [
{"name": "InvoiceNo", "type": "int" },
{"name": "StockCode", "type": "string" },
{"name": "Description", "type": "string" },
{"name": "Quantity", "type": "int" },
{"name": "InvoiceDate", "type": "long" },
{"name": "UnitPrice", "type": "float" },
{"name": "CustomerID", "type": "int" },
{"name": "Country", "type": "string" },
{"name": "LineNo", "type": "int" },
{"name": "InvoiceTime", "type": "string" },
{"name": "StoreID", "type": "int" },
{"name": "TransactionID", "type": "string" }
]
}'
)
;
And query:
select count(*) from avro_hive_table;
Upload spark script to COS
First, let's create a pyspark script
End of explanation
IAE_USER = iae_examples.iae_service_user('../secrets/iae_service_key.json')
IAE_PASSWORD = iae_examples.iae_service_password('../secrets/iae_service_key.json')
IAE_LIVY_URL = iae_examples.iae_service_endpoint_livy('../secrets/iae_service_key.json')
IAE_WEBHDFS_URL = iae_examples.iae_service_endpoint_webhdfs('../secrets/iae_service_key.json')
Explanation: Submit spark job with Livy API
Execute spark job
End of explanation
print(bucket_name)
import requests, json
headers = {
'Content-Type': 'application/json',
'X-Requested-By': 'livy'
}
data = { "file":"s3a://{0}/PiEx.py".format(bucket_name) }
res = requests.post(IAE_LIVY_URL, auth=(IAE_USER, IAE_PASSWORD), headers=headers, data=json.dumps(data))
print(res.text)
id = res.json()['id']
Explanation: Ensure the bucket_name variable is still set.
End of explanation
headers = {
'Content-Type': 'application/json',
'X-Requested-By': 'livy'
}
url = '{0}/{1}'.format(IAE_LIVY_URL, id)
response = requests.get(url, auth=(IAE_USER, IAE_PASSWORD), headers=headers)
print(response.json()['state'])
Explanation: Note that we saved the job id in the previous step. Let's use that to get the job state.
NOTE: keep running the cell below until status is successful or it has failed.
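Rather than re-running the cell manually, you could poll until the job reaches a terminal state. The following is a sketch only; it reuses the IAE_LIVY_URL, credentials, headers and id variables from above, and it assumes Livy reports states such as 'success' or 'dead' once the job finishes.
import time
while True:
    r = requests.get('{0}/{1}'.format(IAE_LIVY_URL, id), auth=(IAE_USER, IAE_PASSWORD), headers=headers)
    state = r.json()['state']
    print(state)
    if state not in ('starting', 'running'):
        break          # terminal state reached (e.g. 'success' or 'dead')
    time.sleep(10)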
End of explanation
headers = {
'Content-Type': 'application/json',
'X-Requested-By': 'livy'
}
url = '{0}/{1}/log'.format(IAE_LIVY_URL, id)
response = requests.get(url, auth=(IAE_USER, IAE_PASSWORD), headers=headers)
print('\n'.join(response.json()['log']))
Explanation: Take a look at the spark job log using the job id:
End of explanation
data = iae_examples.get_file_content_from_cos(
bucket_name,
'provision_iae_with_cos_spark_job_output.txt/part-00000',
S3_ACCESS_KEY, S3_SECRET_KEY, S3_PUBLIC_ENDPOINT
)
print(data)
Explanation: Debugging errors
This is the process I followed to debug an issue with my submitted job.
If there is an error we can debug by looking for the yarn application id in the log output, e.g.
17/09/23 06:21:51 INFO Client: Application report for application_1506108548102_0002 (state: ACCEPTED)
Having SSH'd onto the cluster, I ran the following command (change for your application_xxxx value):
$ yarn logs -applicationId application_1506108548102_0002 | less
Buried in the yarn output, I noticed the following:
py4j.protocol.Py4JJavaError: An error occurred while calling o76.saveAsTextFile.
: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3a://temp-bucket/provision_iae_with_cos_spark_job_output.txt already exists
Print out the contents of the COS file created by Spark
Now let's take a look at the file contents. In the spark job, we coalesced the RDD. This causes spark to just save one output file, which will be part-00000:
output_rdd.coalesce(1, True).saveAsTextFile('s3a://{0}/provision_iae_with_cos_spark_job_output.txt')
End of explanation
iae_examples.recursively_delete_file_in_cos(
bucket_name,
'provision_iae_with_cos_spark_job_output.txt',
S3_ACCESS_KEY, S3_SECRET_KEY, S3_PUBLIC_ENDPOINT
)
Explanation: Let's keep things tidy and remove our job output. Note that if you don't remove it, next time you run the spark job it will fail because the output file already exists.
End of explanation
! bx cf space dev --guid
! bx cf services
! bx cf service-keys myiaeinstance
! bx cf delete-service-key myiaeinstance myiaeinstance_servicekey -f
! bx cf delete-service myiaeinstance -f
Explanation: Debugging
TODO
End of explanation |
10,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sea Level Rise in New York City
Data Bootcamp Final Project (Spring 2016)
by Daniel Jung ([email protected])
About This Project
The levels of ocean surfaces (henceforth referred to as the 'sea level') have been observed to be rising around the world. While sea levels are known to fluctuate due to natural processes, researchers have found a steady increase in the sea level during the past 100 years that is anomalous relative to historical natural fluctuations. This project will use historical monthly sea level data from the National Oceanic and Atmospheric Administration in order to estimate the rate of sea level rise. Because sea levels are known to vary throughout localities, this project will focus on the sea level around New York City. We will then use our calculated rate to estimate how long it will take for key areas around the City to be flooded.
About Sea Levels
The sea level refers to the current level at which the ocean surface lies, above which land elevations can be measured. This means that the sea level will always have an elevation of 0. Sea levels are generally measured in millimeters relative to the point of measurement. For example, if the sea level lies 2mm below the measurement gauge, the measurement will be -2mm. If the sea level lies 5mm above the measurement gauge, the measurement will be 5mm. A sea level of 0mm means that the sea is at the same level as where the measurement gauge sits.
Our measurements are measured from the NOAA's station at The Battery in New York City. Our dataset begins in the first month of 1850 and goes all the way through to the present day, recording measurements from the high confidence interval, the low confidence interval and the average trend, labeled ' Linear_Trend' in our dataset. Our dataset measures the sea level in meters relative to the station. The first measurement in January 1850 has an average trend of -0.406m, that is, -0.406m or 406mm below the station's measuring gauge.
About Elevation
Elevation is often measured in feet or meters. Our dataset is obtained from NYC Open Data and measures the elevation at certain points in New York City in feet. It is important to remember that elevation is measured relative to the sea level, so the sea level is always at 0ft.
Packages Used
For this project, we will be using the Python package, Pandas, for data manipulation. We will also be using NumPy for calculations and Matplotlib for our graphics. We will also use %matplotlib inline to ensure that our graphics appear in this notebook
Step1: Part 1
Step2: Cutting the data to graph results from 1950 to the present day
Step3: Both methods sufficiently show that, while there appears to be a steady rise in the sea level from the time the data collection began to the present day, there appears to be no significant acceleration or deceleration in the rate. It would therefore be appropriate to use a single rate when forecasting future sea level rise. Our second graphic attempts to zoom in on the past 50 years of data to see if a significant change in slope can be observed and there appears to be none.
We will calculate the single rate using the entire dataset from January 1850 to February 2016
Step4: We obtained a rate of 0.2362mm per month, or roughly 2.8344mm per year. Because our elevation data for Part 2 is expressed in feet, we must convert this rate into feet.
Doing this gives us a rate of 0.0007750 ft per month or 0.009300 ft per year.
Part 2
Step5: The elevations for our chosen seven coordinates are as follows | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: Sea Level Rise in New York City
Data Bootcamp Final Project (Spring 2016)
by Daniel Jung ([email protected])
About This Project
The levels of ocean surfaces (henceforth referred to as the 'sea level') have been observed to be rising around the world. While sea levels are known to fluctuate due to natural processes, researchers have found a steady increase in the sea level during the past 100 years that is anomalous relative to historical natural fluctuations. This project will use historical monthly sea level data from the National Oceanic and Atmospheric Administration in order to estimate the rate of sea level rise. Because sea levels are known to vary throughout localities, this project will focus on the sea level around New York City. We will then use our calculated rate to estimate how long it will take for key areas around the City to be flooded.
About Sea Levels
The sea level refers to the current level at which the ocean surface lies, above which land elevations can be measured. This means that the sea level will always have an elevation of 0. Sea levels are generally measured in millimeters relative to the point of measurement. For example, if the sea level lies 2mm below the measurement gauge, the measurement will be -2mm. If the sea level lies 5mm above the measurement gauge, the measurement will be 5mm. A sea level of 0mm means that the sea is at the same level as where the measurement gauge sits.
Our measurements are measured from the NOAA's station at The Battery in New York City. Our dataset begins in the first month of 1850 and goes all the way through to the present day, recording measurements from the high confidence interval, the low confidence interval and the average trend, labeled ' Linear_Trend' in our dataset. Our dataset measures the sea level in meters relative to the station. The first measurement in January 1850 has an average trend of -0.406m, that is, -0.406m or 406mm below the station's measuring gauge.
About Elevation
Elevation is often measured in feet or meters. Our dataset is obtained from NYC Open Data and measures the elevation at certain points in New York City in feet. It is important to remember that elevation is measured relative to the sea level, so the sea level is always at 0ft.
Packages Used
For this project, we will be using the Python package, Pandas, for data manipulation. We will also be using NumPy for calculations and Matplotlib for our graphics. We will also use %matplotlib inline to ensure that our graphics appear in this notebook:
End of explanation
# Importing the CSV link and assigning the dataset to the variable, 'clvl'
url1 = 'https://tidesandcurrents.noaa.gov/sltrends/downloadMeanSeaLevelTrendsCSV.ht'
url2 = 'm;jsessionid=D79899A1D9FCE54F6DC6A107F9439C5D?stnid=8518750'
url = url1 + url2
clvl = pd.read_csv(url)
# Creating the four sections by slicing the dataset
t1 = clvl[:612]
t2 = clvl[612:1212]
t3 = clvl[1212:1716]
t4 = clvl[1716:1994]
# Calculating the trends for each section: we take the difference in levels across each section and divide by the number of months it spans
# Note: In this dataset the column label Linear_Trend is preceded by a blank space so we must use ' Linear_Trend'
# when calling the data.
dif1 = clvl[' Linear_Trend'][611] - clvl[' Linear_Trend'][0]
dif2 = clvl[' Linear_Trend'][1211] - clvl[' Linear_Trend'][612]
dif3 = clvl[' Linear_Trend'][1715] - clvl[' Linear_Trend'][1212]
dif4 = clvl[' Linear_Trend'][1993] - clvl[' Linear_Trend'][1716]
trend1 = dif1 / len(t1[' Linear_Trend'])
trend2 = dif2 / len(t2[' Linear_Trend'])
trend3 = dif3 / len(t3[' Linear_Trend'])
trend4 = dif4 / len(t4[' Linear_Trend'])
# The results
print(trend1, trend2, trend3, trend4)
# Plotting Linear_Trend
clvl = clvl.set_index(['Year'])
plt.plot(clvl.index, clvl[' Linear_Trend'])
plt.suptitle('Sea level over time', fontsize = 12)
plt.ylabel('Sea Level (in meters from observer)', fontsize = 10)
plt.xlabel('Year', fontsize = 10)
Explanation: Part 1: Calculating Rate of Sea Level Rise
We obtained our CSV dataset on sea levels from the NOAA at the following link: https://tidesandcurrents.noaa.gov/sltrends/downloadMeanSeaLevelTrendsCSV.htm;jsessionid=2D46B64BEEFE3215B19644B8B15DD432?stnid=8518750
This dataset, as mentioned before, measures the sea level in meters from The Battery in Lower Manhattan every month beginning January 1850. Three measurement figures are given: The higher confidence interval, the lower confidence interval, and the linear trend, which is found by averaging the higher and lower confidence intervals. For this project, we are interested in the linear trend levels.
Before we calculate our rate of sea level rise, we want to see if the rate at which the sea level has increased has changed throughout history (that is, if the sea level rise is accelerating or decelerating). We can do this using two methods. First, we can simply plot the linear trend from the data and see if there is a curve in the line. We can also find acceleration manually by dividing our dataset into sections:
- Section 1: January 1850 - December 1900
- Section 2: January 1901 - December 1950
- Section 3: January 1951 - December 1992
- Section 4: January 1993 - February 2016
These sections have been determined according to what many climate scientists have found to be significantly different 'stages' of sea level rise. Section 1 represents pre-industrial levels, and Section 2 represents early post-industrial levels. 1992 is considered a significant year to climate scientists, as this is when the global consensus on climate change and sea level rise became mainstream, leading to a large number of climate-related studies being published as well as large-scale initiatives to combat climate change and its effects. Section 3 therefore runs from post-industrial levels to 1992, and Section 4 runs from 1992 to the present day.
End of explanation
#newclvl = clvl.set_index(['Year'])
newclvl = clvl[1200:1994]
plt.plot(newclvl.index, newclvl[' Linear_Trend'])
plt.suptitle('Sea level over time', fontsize = 12)
plt.ylabel('Sea Level (in meters from observer)', fontsize = 10)
plt.xlabel('Year', fontsize = 10)
Explanation: Cutting the data to graph results from 1950 to the present day:
End of explanation
clvl2 = clvl.reset_index()
total = clvl2[:1994]
diff = clvl2[' Linear_Trend'][1993] - clvl2[' Linear_Trend'][0]
rate = diff / len(total[' Linear_Trend'])
rate
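# Quick sanity check on the unit conversion quoted below (pure arithmetic, no new data):
# the monthly rate above is in metres, so convert it to millimetres and to feet.
rate_mm_per_month = rate * 1000
rate_ft_per_month = rate * 3.28084
print(rate_mm_per_month, rate_ft_per_month, rate_ft_per_month * 12)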
Explanation: Both methods sufficiently show that, while there appears to be a steady rise in the sea level from the time the data collection began to the present day, there appears to be no significant acceleration or deceleration in the rate. It would therefore be appropriate to use a single rate when forecasting future sea level rise. Our second graphic attempts to zoom in on the past 50 years of data to see if a significant change in slope can be observed and there appears to be none.
We will calculate the single rate using the entire dataset from January 1850 to February 2016:
End of explanation
# Importing the CSV link for our elevation data into a variable called 'elev'
eurl = 'https://data.cityofnewyork.us/api/views/9uxf-ng6q/rows.csv?accessType=DOWNLOAD'
elev = pd.read_csv(eurl)
# Creating a loop to search the dataframe for our coordinates
is_in = False
# Where 'x' is the rough coordinate found through a search engine
coord = 'x'
for idx, row in elev.iterrows():
    if coord in row.values:
        is_in = True
        print(row)  # show the matching row so we can read off its elevation (column layout depends on the NYC Open Data file)
Explanation: We obtained a rate of 0.2362mm per month, or roughly 2.8344mm per year. Because our elevation data for Part 2 is expressed in feet, we must convert this rate into feet.
Doing this gives us a rate of 0.0007750 ft per month or 0.009300 ft per year.
Part 2: How Long Until Flooding?
We obtained our CSV dataset on elevation from NYC Open Data at the following link:
https://data.cityofnewyork.us/api/views/9uxf-ng6q/rows.csv?accessType=DOWNLOAD
This dataset contains thousands of geographical locations, identified by geographic coordinates and their corresponding elevation, measured in feet.
Using the rate we calculated in Part 1, we will determine how long it will take for seven key areas of the City to flood. These seven key areas were chosen to be:
1. The Battery
2. New York Stock Exchange building
3. Times Square
4. Brooklyn Navy Yards
5. LaGuardia Airport
6. John F. Kennedy Airport
7. Coney Island
The above locations were chosen because they cover a large area of the City and are of cultural or economic importance.
First we want to identify the geographical coordinates on the CSV file that correspond to our chosen locations. We will do this by creating a loop to search through the file for coordinates that match or are very close to coordinates that we will find through a search engine. Then, having obtained these coordinates, we will take their corresponding elevation data to find how high the sea must rise in order to flood that location.
End of explanation
# Calculating how long it will take for the sea level to reach each elevation
# Using a simple mathematical script to convert a list of the seven elevations
elevations = [7.858, 17.609, 53.503, 28.370, 5.724, 7.200, 7.924]
for i in elevations:
timeinmonths = i / 0.0007750
print(timeinmonths)
Explanation: The elevations for our chosen seven coordinates are as follows:
1. The Battery - 7.858 ft
2. New York Stock Exchange building - 17.609 ft
3. Times Square - 53.503 ft
4. Brooklyn Navy Yards - 28.370 ft
5. LaGuardia Airport - 5.724 ft
6. John F. Kennedy Airport - 7.200 ft
7. Coney Island - 7.924
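To make the raw month counts below easier to read, one could pair each location with its elevation and report the result in years; a small sketch (using the same rate of 0.0007750 ft per month) is:
locations = ['The Battery', 'NYSE building', 'Times Square', 'Brooklyn Navy Yards',
             'LaGuardia Airport', 'JFK Airport', 'Coney Island']
elevations = [7.858, 17.609, 53.503, 28.370, 5.724, 7.200, 7.924]
for name, ft in zip(locations, elevations):
    print(name, round(ft / 0.0007750 / 12, 1), 'years')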
End of explanation |
10,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: Again, let's get the basic bird count data for June 2018. We read the data file to a dataframe, then check the columns' names and data types.
Step2: Notice the 'namedLocation' attribute/column in our dataframe. Let's look at some of the values.
Step3: Querying Locations
We can query the location values found in this column, for example 'TEAK_010.birdGrid.brd', using the locations/ endpoint in the NEON API. This will allow us to obtain detailed geospatial data on that particular location.
First we'll set up the request and read in the JSON data.
Step4: Similar to other JSON API responses, this contains a single 'data' element which is a dictionary. Let's also look at the keys of 'data'.
Step5: First off, the data includes some basic titles and categories for the location.
Step6: The actual spatial location data includes latitude, longitude, elevation, and Universal Transverse Mercator (UTM) Coordinates.
Step7: Let's take a closer look at the 'locationProperties' element. This is a list of dictionaries, where each dict has one 'locationPropertyName' element and a corresponding 'locationPropertyValue' element. These together provide a more detailed description of the properties of the location.
Step8: Finally there's the name and URLs for the locations' parent location and children locations, if any. The parent location, in this case 'TEAK', is a location of which the current location is a part.
Step9: The children locations are smaller areas within the current location; in our example these are points within the bird grid plot. Requesting data on the children locations through the locations/ endpoint allows us to get spatial data on a finer resolution. | Python Code:
import requests
import json
import pandas as pd
#Define API call components
SERVER = 'http://data.neonscience.org/api/v0/'
SITECODE = 'TEAK'
PRODUCTCODE = 'DP1.10003.001'
Explanation: syncID:
title: "Querying Location Data with NEON API and Python"
description: "Querying the 'locations/' NEON API endpoint with Python and navigating the response"
dateCreated: 2020-04-24
authors: Maxwell J. Burner
contributors:
estimatedTime:
packagesLibraries: requests, json
topics: api
languagesTool: python
dataProduct: DP1.10003.001
code1:
tutorialSeries: python-neon-api-series
urlTitle: neon_api_04_locations
In this tutorial we will learn about querying the 'locations/' endpoint of the NEON API using Python.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Query the locations endpoint of the NEON API for data on specific NEON locations
* Parse and navigate responses from the locations endpoint of the NEON API
* Get spatial and geolocation data about NEON sites and plots
* Navigate the parent-child relationships between NEON locations
### Install Python Packages
* **requests**
* **json**
* **pandas**
</div>
In this tutorial we will learn to use the locations/ endpoint of the NEON API to get geoLocation information about NEON sites. While the previous tutorials all covered getting information from the sites/, products/, and data/ endpoints, we will now begin exploring other endpoints of the NEON API.
Looking through some of the data we obtained in previous tutorials, you might have noticed that references to location tend to be vague; location names and labels are used, but they lack geospatial information such as the geographic coordinates or size of the location. For more detail, we can use the locations/ endpoint of the NEON API to get spatial information on specific locations used in NEON data collection.
Get a Named Location
The locations/ endpoint is usually used to provide context to locations referenced in NEON data products. Let's start by looking at the bird count data for Lower Teakettle (TEAK) again.
End of explanation
#Request data file list for 2018-06
data_request = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2018-06')
data_json = data_request.json()
for file in data_json['data']['files']:
if('count' in file['name']):
if('basic' in file['name']):
bird_url = file['url']
df_bird = pd.read_csv(bird_url)
print(df_bird.dtypes)
Explanation: Again, let's get the basic bird count data for June 2018. We read the data file to a dataframe, then check the columns' names and data types.
End of explanation
df_bird['namedLocation'][0:5]
Explanation: Notice the 'namedLocation' attribute/column in our dataframe. Let's look at some of the values.
End of explanation
loc_request = requests.get(SERVER+'locations/'+'TEAK_010.birdGrid.brd')
loc_json = loc_request.json()
Explanation: Querying Locations
We can query the location values found in this column, for example 'TEAK_010.birdGrid.brd', using the locations/ endpoint in the NEON API. This will allow us to obtain detailed geospatial data on that particular location.
First we'll set up the request and read in the JSON data.
End of explanation
for key in loc_json['data'].keys():
print(key)
Explanation: Similar to other JSON API responses, this contains a single 'data' element which is a dictionary. Let's also look at the keys of 'data'.
End of explanation
print('Description: ',loc_json['data']['locationDescription'])
print('Name: ',loc_json['data']['locationName'])
print('Type: ',loc_json['data']['locationType'])
print('Domain: ',loc_json['data']['domainCode'])
Explanation: First off, the data includes some basic titles and categories for the location.
End of explanation
print('Latitude: ',loc_json['data']['locationDecimalLatitude'])
print('Longitude: ',loc_json['data']['locationDecimalLongitude'])
print('Elevation: ',loc_json['data']['locationElevation'])
print('UTM Easting: ',loc_json['data']['locationUtmEasting'])
print('UTM Northing: ',loc_json['data']['locationUtmNorthing'])
print('Hemisphere: ', loc_json['data']['locationUtmHemisphere'])
print('UTM Zone: ', loc_json['data']['locationUtmZone'])
Explanation: The actual spatial location data includes latitude, longitude, elevation, and Universal Transverse Mercator (UTM) Coordinates.
End of explanation
#Print location property names and values
for locationProperty in loc_json['data']['locationProperties']:
print(locationProperty['locationPropertyName'][9:], #trim 'Value for ' off beginning of each locationPropertyName
': ',locationProperty['locationPropertyValue'])
Explanation: Let's take a closer look at the 'locationProperties' element. This is a list of dictionaries, where each dict has one 'locationPropertyName' element and a corresponding 'locationPropertyValue' element. These together provide a more detailed description of the properties of the location.
End of explanation
#Print name and API url of parent location
print(loc_json['data']['locationParent'], loc_json['data']['locationParentUrl'])
Explanation: Finally there's the name and URLs for the locations' parent location and children locations, if any. The parent location, in this case 'TEAK', is a location of which the current location is a part.
End of explanation
#Print names and API urls of child locations
for child in zip(loc_json['data']['locationChildren'], loc_json['data']['locationChildrenUrls']):
print(child[0], child[1])
Explanation: The children locations are smaller areas within the current location; in our example these are points within the bird grid plot. Requesting data on the children locations through the locations/ endpoint allows us to get spatial data on a finer resolution.
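As a short illustrative sketch (not run here), one could follow the first child's API URL and pull out its coordinates in exactly the same way we did for the parent plot:
#Query the first child location and print its coordinates
child_request = requests.get(loc_json['data']['locationChildrenUrls'][0])
child_json = child_request.json()
print('Name: ', child_json['data']['locationName'])
print('Latitude: ', child_json['data']['locationDecimalLatitude'])
print('Longitude: ', child_json['data']['locationDecimalLongitude'])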
End of explanation |
10,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 7
Step1: Part 1
Step2: Part 2 | Python Code:
with open("input/day7.txt", "r") as f:
inputLines = tuple(line.strip() for line in f)
import re
Explanation: Day 7: Internet Protocol Version 7
End of explanation
def isABBA(text):
# Use a negative lookahead assertion to avoid matching four equal characters.
return re.search(r"(.)(?!\1)(.)\2\1", text) is not None
assert isABBA("abba")
assert isABBA("xabba")
assert not isABBA("aaaa")
assert isABBA("abcoxxoxyz")
assert isABBA("aabba")
assert isABBA("aaabba")
assert isABBA("aaaabba")
def ipAddressSequences(ipAddress):
# We use a pattern for the hypernet sequences for splitting.
# Moreover, we capture the letters in the hypernet sequences, such that
# normal and hypernet sequences will be alternating in the result.
sequences = re.split(r"\[([^\]]+)\]", ipAddress)
normalSequences = tuple(sequences[::2])
hypernetSequences = tuple(sequences[1::2])
return normalSequences, hypernetSequences
assert ipAddressSequences("abba[mnop]qrst") == (("abba", "qrst"), ("mnop",))
assert ipAddressSequences("abcd[bddb]xyyx") == (("abcd", "xyyx"), ("bddb",))
assert ipAddressSequences("aaaa[qwer]tyui") == (("aaaa", "tyui"), ("qwer",))
assert ipAddressSequences("ioxxoj[asdfgh]zxcvbn") == (("ioxxoj", "zxcvbn"), ("asdfgh",))
assert ipAddressSequences("a[b]") == (("a", ""), ("b",))
assert ipAddressSequences("[b]a") == (("", "a"), ("b",))
assert ipAddressSequences("[b]") == (("", ""), ("b",))
def supportsTLS(ipAddress):
normal, hypernet = ipAddressSequences(ipAddress)
return any(isABBA(s) for s in normal) and not any(isABBA(s) for s in hypernet)
assert supportsTLS("abba[mnop]qrst")
assert not supportsTLS("abcd[bddb]xyyx")
assert not supportsTLS("aaaa[qwer]tyui")
assert supportsTLS("ioxxoj[asdfgh]zxcvbn")
sum(1 for ipAddress in inputLines if supportsTLS(ipAddress))
Explanation: Part 1: ABBA pattern in address, but not in hypernet sequences
End of explanation
def supportsSSL(ipAddress):
# The idea is that the ABA and the BAB patterns are separated by an odd number of brackets.
return re.search(# first the ABA pattern
r"([a-z])(?!\1)([a-z])\1"
# then an arbitrary number of letters
+ r"[a-z]*"
# then an opening or closing bracket
+ r"[\[\]]"
# then any number of blocks which contain letters, a bracket, more letters, and another bracket
+ r"([a-z]*[\[\]][a-z]*[\[\]]]*)*"
# then an arbitrary number of letters
+ r"[^\[\]]*"
# finally, the BAB pattern
+ r"\2\1\2",
ipAddress) is not None
assert supportsSSL("aba[bab]xyz")
assert not supportsSSL("xyx[xyx]xyx")
assert supportsSSL("aaa[kek]eke")
assert supportsSSL("zazbz[bzb]cdb")
sum(1 for ipAddress in inputLines if supportsSSL(ipAddress))
Explanation: Part 2: ABA and corresponding BAB pattern in normal and hypernet parts
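As a sketch of a non-regex cross-check (useful for convincing ourselves the single regular expression is right), one can collect the ABA triples from the normal sequences and look for the matching BAB in the hypernet sequences:
def supportsSSLBruteForce(ipAddress):
    normal, hypernet = ipAddressSequences(ipAddress)
    # Collect all ABA triples appearing outside brackets.
    abas = {s[i:i+3] for s in normal for i in range(len(s) - 2)
            if s[i] == s[i+2] and s[i] != s[i+1]}
    # Look for the corresponding BAB triple inside any hypernet sequence.
    return any(aba[1] + aba[0] + aba[1] in h for aba in abas for h in hypernet)

assert all(supportsSSLBruteForce(ip) == supportsSSL(ip) for ip in inputLines)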
End of explanation |
10,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Collection
Step1: Ordinal Genres
Below, we make the genres ordinal to fit in the random forest classifiers. We add a new column to our dataframe to do so, write a function to populate it, and run it across the dataframe.
Step2: We add in some boolean genre classifiers to make our analysis more fine-grained. Rather than saying "we predict this video is country with 50% confidence", we could say "we predict this video is not edm with 90% confidence" and so on.
Step3: Test and Train Sets
We create our training and test sets by splitting all_genres by genre, and making 10 of each genre train and 10 test. We aggregate by genre to make our full train and full test sets, each containing 50 records of various genres.
Step4: Generating Random Forest - Viewer Statistics
We start generating our random forests, and output a relative accuracy and a confusion matrix. In this first one, we simply factor in non-color variables (rating, likes, dislikes, length and viewcount), and run it across all records to predict an ordinal genre value.
Step5: As shown above, this method yields relatively poor results. This is because there's no distinct clusters being created by our random forest, and simple viewer statistics tell us nothing about what kind of video we're watching. However, we see that country, rap and pop are initially somewhat distinct (diagonal is the highest value), and rock and edm are getting mistaken for one another. Let's see if we can't make something of this.
Random Forest - Only Color Statistics
Below, we do the same random forest as above, but going strictly off of average frame color for the video.
We found the most commonly appearing color in each frame and called it the 'frame mode'. We then took all of the frame modes and found the 10 most common of them. Those became the 'color data' we use to analyze videos.
Step6: This actually yields worse results than just the viewer statistics, because the color of a video by itself does not determine the genre. If rappers only had red in their videos and rockers only had black this might be somewhat accurate, but that's just not the case. But, what if we pair these findings with our initial viewer statistics?
Random Forest - All Features
Step7: Singling Out Pop and Rap
Scores are expectedly low. It seems as if we're trying to make the classifier do way too much work, and are giving it very mediocre data to go off of. Recall that we're actually trying to determine WHICH genre a video is by the above code, not whether or not a video is of ONE specific genre. This brings back the binary classifiers that we created above, let's put those to use to see if we can improve these scores.
We try pop and rap first, since they seem to be the most distinct by what we've gathered above.
Step8: What we're seeing above is a confusion matrix that, based on our training data, predicts whether or not a video in the test set is a pop video or not. In the "predicted" row, 0 means it predicts it's not a pop video, and that the 1 is. Likewise with the actual, 0 shows that the video actually wasn't a pop video, and the 1 shows that it was.
The confusion matrix above is our first effort at utilizing these binary classifiers. Most of our videos aren't pop videos, and the model did a good job of picking out those that aren't pop. However, we could use some improvement in the realm of "false negatives", where the model classified a video as not pop when it actually was.
We do these tests 50 times for sake of average score.
Rather than hard-coding each time we wanted to run something for average, we wrote a function that does it for us. All we have to do is pass in the boolean classifier in quotes ("is_rock", etc.), and the number of iterations that we want. Results are displayed below.
Step9: The following creates several files that describe our classifiers. Our website will later
Step10: We ran the above test with all genres, and as shown in above analysis, our country and edm typically have very low accuracy. We've seen above that edm and rock videos are getting mixed up with one another, so we assume that something is characteristic of these 2 genres that's not of everything else. We take out the edm values from our training and test datasets, hoping to improve accuracy.
Step11: So, what does this tell us? Based on our training data, we have the best chance of accurately classifying something as pop or not pop (under these conditions).
We want to find out which 2 are the most distinct, so we can make build our model based on that classification.
Step12: Rock and EDM have suprisingly distinct classifiers. We should dive into the videos and see what this means.
Step13: Selecting Most Valuable Features per Genre - Rock | Python Code:
import pandas as pd
from os import path
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
import sklearn
# Edit path if need be (shouldn't need to b/c we all have the same folder structure)
CSV_PATH_1 = '../Videos/all_data'
CSV_PATH_2 = '../Videos2/all_data2'
FILE_EXTENSION = '_all.csv'
GENRES = ['country', 'edm', 'pop', 'rap', 'rock']
# Containers for the data frames
genre_dfs = {}
all_genres = None
# Read in the 5 genre's of CV's
for genre in GENRES:
genre_csv_path_1 = path.join(CSV_PATH_1, genre) + FILE_EXTENSION
genre_csv_path_2 = path.join(CSV_PATH_2, genre) + FILE_EXTENSION
df_1 = pd.read_csv(genre_csv_path_1)
df_2 = pd.read_csv(genre_csv_path_2)
df_1 = df_1.drop('Unnamed: 0',1)
df_2 = df_2.drop('Unnamed: 0',1)
df_combined = pd.concat([df_1,df_2],ignore_index=True)
genre_dfs[genre] = df_combined
all_genres = pd.concat(genre_dfs.values())
all_genres.head()
# genre_dfs is now a dictionary that contains the 5 different data frames
# all_genres is a dataframe that contains all of the data
Explanation: Data Collection
End of explanation
def genre_to_ordinal(genre_in):
if(genre_in == "country"):
return 0
elif(genre_in == "pop"):
return 1
elif(genre_in == "rock"):
return 2
elif(genre_in == "edm"):
return 3
elif(genre_in == "rap"):
return 4
else:
return genre_in
all_genres['genre_ordinal'] = all_genres.genre.apply(genre_to_ordinal)
Explanation: Ordinal Genres
Below, we make the genres ordinal to fit in the random forest classifiers. We add a new column to our dataframe to do so, write a function to populate it, and run it across the dataframe.
End of explanation
# Adding is_country flag
def is_country(genre_in):
if(genre_in == "country"):
return 1
else:
return 0
all_genres['is_country'] = all_genres.genre.apply(is_country)
# Adding is_country flag
def is_rock(genre_in):
if(genre_in == "rock"):
return 1
else:
return 0
all_genres['is_rock'] = all_genres.genre.apply(is_rock)
# Adding is_edm flag
def is_edm(genre_in):
if(genre_in == "edm"):
return 1
else:
return 0
all_genres['is_edm'] = all_genres.genre.apply(is_edm)
# Adding is_rap flag
def is_rap(genre_in):
if(genre_in == "rap"):
return 1
else:
return 0
all_genres['is_rap'] = all_genres.genre.apply(is_rap)
# Adding is_country flag
def is_pop(genre_in):
if(genre_in == "pop"):
return 1
else:
return 0
all_genres['is_pop'] = all_genres.genre.apply(is_pop)
Explanation: We add in some boolean genre classifiers to make our analysis more fine-grained. Rather than saying "we predict this video is country with 50% confidence", we could say "we predict this video is not edm with 90% confidence" and so on.
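A more compact way to build the same five flags (just a sketch of an equivalent formulation, not a change in behaviour) is to compare the genre column directly:
# Equivalent one-liner per genre: 1 where the genre matches, 0 elsewhere
for g in GENRES:
    all_genres['is_' + g] = (all_genres.genre == g).astype(int)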
End of explanation
# Subset all_genres to group by individual genres
country_records = all_genres[all_genres["genre"] == "country"]
rock_records = all_genres[all_genres["genre"] == "rock"]
pop_records = all_genres[all_genres["genre"] == "pop"]
edm_records = all_genres[all_genres["genre"] == "edm"]
rap_records = all_genres[all_genres["genre"] == "rap"]
# From the subsets above, create train and test sets from each
country_train = country_records.head(len(country_records) / 2)
country_test = country_records.tail(len(country_records) / 2)
rock_train = rock_records.head(len(rock_records) / 2)
rock_test = rock_records.tail(len(rock_records) / 2)
pop_train = pop_records.head(len(pop_records) / 2)
pop_test = pop_records.tail(len(pop_records) / 2)
edm_train = edm_records.head(len(edm_records) / 2)
edm_test = edm_records.tail(len(edm_records) / 2)
rap_train = rap_records.head(len(rap_records) / 2)
rap_test = rap_records.tail(len(rap_records) / 2)
# Create big training and big test set for analysis
training_set = pd.concat([country_train,rock_train,pop_train,edm_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,edm_test,rap_test])
training_set = training_set.fillna(0)
test_set = test_set.fillna(0)
print "Training Records:\t" , len(training_set)
print "Test Records:\t\t" , len(test_set)
# training_set.head()
Explanation: Test and Train Sets
We create our training and test sets by splitting all_genres by genre, and making 10 of each genre train and 10 test. We aggregate by genre to make our full train and full test sets, each containing 50 records of various genres.
End of explanation
# Predicting based solely on non-color features, using RF
clf = RandomForestClassifier(n_estimators=11)
meta_data_features = ['rating', 'likes','dislikes','length','viewcount']
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[meta_data_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[meta_data_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[meta_data_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: Generating Random Forest - Viewer Statistics
We start generating our random forests, and output a relative accuracy and a confusion matrix. In this first one, we simply factor in non-color variables (rating, likes, dislikes, length and viewcount), and run it across all records to predict an ordinal genre value.
End of explanation
def gen_new_headers(old_headers):
headers = ['colors_' + str(x+1) + '_' for x in range(10)]
h = []
for x in headers:
h.append(x + 'red')
h.append(x + 'blue')
h.append(x + 'green')
return old_headers + h + ['genre']
clf = RandomForestClassifier(n_estimators=11)
color_features = gen_new_headers([])[:-1]
# Predicting based solely on colors
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[color_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[color_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[color_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: As shown above, this method yields relatively poor results. This is because there are no distinct clusters being created by our random forest, and simple viewer statistics tell us nothing about what kind of video we're watching. However, we see that country, rap and pop are initially somewhat distinct (diagonal is the highest value), and rock and edm are getting mistaken for one another. Let's see if we can't make something of this.
Random Forest - Only Color Statistics
Below, we do the same random forest as above, but going strictly off of average frame color for the video.
We found the most commonly appearing color in each frame and called it the 'frame mode'. We then took all of the frame modes and found the 10 most common of them. Those became the 'color data' we use to analyze videos.
End of explanation
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: This actually yields worse results than just the viewer statistics, because the color of a video by itself does not determine the genre. If rappers only had red in their videos and rockers only had black this might be somewhat accurate, but that's just not the case. But, what if we pair these findings with our initial viewer statistics?
Random Forest - All Features
End of explanation
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
print all_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_pop'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_pop'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_pop, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_rap'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_rap'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_rap, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: Singling Out Pop and Rap
Scores are expectedly low. It seems as if we're trying to make the classifier do way too much work, and are giving it very mediocre data to go off of. Recall that we're actually trying to determine WHICH genre a video is by the above code, not whether or not a video is of ONE specific genre. This brings back the binary classifiers that we created above, let's put those to use to see if we can improve these scores.
We try pop and rap first, since they seem to be the most distinct by what we've gathered above.
End of explanation
def multi_RF_averages(is_genre,num_iterations):
clf = RandomForestClassifier(n_estimators=11)
loop_indices = range(0,num_iterations)
cumsum = 0
for i in loop_indices:
y, _ = pd.factorize(training_set[is_genre])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set[is_genre])
cumsum = cumsum + clf.score(test_set[all_features],z)
print "Average Score for",len(loop_indices),is_genre,"iterations:", cumsum/len(loop_indices)
return clf
pop_class = multi_RF_averages("is_pop",50)
rap_class = multi_RF_averages("is_rap",50)
rock_class = multi_RF_averages("is_rock",50)
edm_class = multi_RF_averages("is_edm",50)
country_class = multi_RF_averages("is_country",50)
Explanation: What we're seeing above is a confusion matrix that, based on our training data, predicts whether or not a video in the test set is a pop video or not. In the "predicted" row, 0 means it predicts it's not a pop video, and that the 1 is. Likewise with the actual, 0 shows that the video actually wasn't a pop video, and the 1 shows that it was.
The confusion matrix above is our first effort at utilizing these binary classifiers. Most of our videos aren't pop videos, and the model did a good job of picking out those that aren't pop. However, we could use some improvement in the realm of "false negatives", where the model classified a video as not pop when it actually was.
We do these tests 50 times for sake of average score.
Rather than hard-coding each time we wanted to run something for average, we wrote a function that does it for us. All we have to do is pass in the boolean classifier in quotes ("is_rock", etc.), and the number of iterations that we want. Results are displayed below.
End of explanation
from sklearn.externals import joblib
# only use these to generate pickle files for website
# joblib.dump(pop_class, 'classifiers/pop_class.pkl')
# joblib.dump(rap_class, 'classifiers/rap_class.pkl')
# joblib.dump(rock_class, 'classifiers/rock_class.pkl')
# joblib.dump(edm_class, 'classifiers/edm_class.pkl')
# joblib.dump(country_class, 'classifiers/country_class.pkl')
Explanation: The following creates several files that describe our classifiers. Our website will later load these pickled classifiers to make predictions.
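For completeness, the saved classifiers can be read back with joblib.load; a hedged sketch of how the website side might use one of them (the feature order must match all_features) is:
# loaded_pop = joblib.load('classifiers/pop_class.pkl')
# print loaded_pop.predict(test_set[all_features].head(1))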
End of explanation
# Removing EDM for better analysis - makes is_pop and is_rap much more accurate
training_set = pd.concat([country_train,rock_train,pop_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,rap_test])
multi_RF_averages("is_pop",50)
multi_RF_averages("is_rap",50)
multi_RF_averages("is_rock",50)
multi_RF_averages("is_edm",50)
multi_RF_averages("is_country",50)
Explanation: We ran the above test with all genres, and as shown in above analysis, our country and edm typically have very low accuracy. We've seen above that edm and rock videos are getting mixed up with one another, so we assume that something is characteristic of these 2 genres that's not of everything else. We take out the edm values from our training and test datasets, hoping to improve accuracy.
End of explanation
training_set = pd.concat([country_train,rock_train,edm_train,rap_train,pop_train])
test_set = pd.concat([rock_test])
multi_RF_averages("is_rock",50)
test_set = pd.concat([rap_test])
multi_RF_averages("is_rap",50)
test_set = pd.concat([country_test])
multi_RF_averages("is_country",50)
test_set = pd.concat([pop_test])
multi_RF_averages("is_pop",50)
test_set = pd.concat([edm_test])
multi_RF_averages("is_edm",50)
Explanation: So, what does this tell us? Based on our training data, we have the best chance of accurately classifying something as pop or not pop (under these conditions).
We want to find out which 2 are the most distinct, so we can build our model based on that classification.
End of explanation
test_set = pd.concat([edm_test,rock_test])
multi_RF_averages("is_edm",50)
multi_RF_averages("is_rock",50)
Explanation: Rock and EDM have surprisingly distinct classifiers. We should dive into the videos and see what this means.
End of explanation
model = ExtraTreesClassifier()
training_set = pd.concat([country_train,pop_train,rap_train,rock_train,edm_train])
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
# display the relative importance of each attribute
print model.feature_importances_
df = pd.DataFrame()
df['index'] = all_features
y, _ = pd.factorize(training_set['is_rap'])
model.fit(training_set[all_features], y)
df['rap'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
df['rock'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_country'])
model.fit(training_set[all_features], y)
df['country'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_edm'])
model.fit(training_set[all_features], y)
df['edm'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_pop'])
model.fit(training_set[all_features], y)
df['pop'] = model.feature_importances_
df = df.set_index('index')
df = df.transpose()
df.head()
import plotly.offline as py
import plotly.graph_objs as go
lol = df.values.tolist()
cols = []
for x in df.columns:
cols.append(x)
py.init_notebook_mode()
title = 'Feature Importance By Genre'
labels = ['rap','rock','country','edm','pop']
x_data = cols
y_data = df.values.tolist()
traces = []
for i in range(0, 5):
traces.append(go.Scatter(
x=x_data,
y=y_data[i],
mode='lines',
connectgaps=True,
name = labels[i]
))
layout = go.Layout(
yaxis=dict(
showgrid=False,
zeroline=False,
showline=False,
showticklabels=False,
),
autosize=False,
margin=dict(
autoexpand=True,
l=100,
r=20,
t=110,
),
showlegend=False,
)
layout = dict(title = 'Feature Importance by Genre',
xaxis = dict(title = 'Feature'),
yaxis = dict(title = 'Percent Importance (All Features Sum to 1.0)',
showgrid=False),
margin=go.Margin(
l=80,
r=50,
b=170,
t=100,
pad=8
),
)
fig = go.Figure(data=traces, layout=layout)
py.iplot(fig, filename='news-source')
Explanation: Selecting Most Valuable Features per Genre - Rock
End of explanation |
10,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back
Asynchronous Widgets
This notebook covers two scenarios where we'd like widget-related code to run without blocking the kernel from acting on other execution requests
Step1: We define a new function that returns a future for when a widget attribute changes.
Step2: And we finally get to our function where we will wait for widget changes. We'll do 10 units of work, and pause after each one until we observe a change in the widget. Notice that the widget's value is available to us, since it is what the wait_for_change future has as a result.
Run this function, and change the slider 10 times.
Step4: Generator approach
If you can't take advantage of the async/await syntax, or you don't want to modify the event loop, you can also do this with generator functions.
First, we define a decorator which hooks a generator function up to widget change events.
Step5: Then we set up our generator.
Step6: Modifications
The above two approaches both waited on widget change events, but can be modified to wait for other things, such as button event messages (as in a "Continue" button), etc.
Updating a widget in the background
Sometimes you'd like to update a widget in the background, allowing the kernel to also process other execute requests. We can do this with threads. In the example below, the progress bar will update in the background and will allow the main kernel to do other computations. | Python Code:
%gui asyncio
Explanation: Index - Back
Asynchronous Widgets
This notebook covers two scenarios where we'd like widget-related code to run without blocking the kernel from acting on other execution requests:
Pausing code to wait for user interaction with a widget in the frontend
Updating a widget in the background
Waiting for user interaction
You may want to pause your Python code to wait for some user interaction with a widget from the frontend. Typically this would be hard to do since running Python code blocks any widget messages from the frontend until the Python code is done.
We'll do this in two approaches: using the event loop integration, and using plain generator functions.
Event loop integration
If we take advantage of the event loop integration IPython offers, we can have a nice solution using the async/await syntax in Python 3.
First we invoke our asyncio event loop. This requires ipykernel 4.7 or later.
End of explanation
import asyncio
def wait_for_change(widget, value):
future = asyncio.Future()
def getvalue(change):
# make the new value available
future.set_result(change.new)
widget.unobserve(getvalue, value)
widget.observe(getvalue, value)
return future
Explanation: We define a new function that returns a future for when a widget attribute changes.
End of explanation
from ipywidgets import IntSlider, Output
slider = IntSlider()
out = Output()
async def f():
for i in range(10):
out.append_stdout('did work ' + str(i) + '\n')
x = await wait_for_change(slider, 'value')
out.append_stdout('async function continued with value ' + str(x) + '\n')
asyncio.ensure_future(f())
slider
out
Explanation: And we finally get to our function where we will wait for widget changes. We'll do 10 units of work, and pause after each one until we observe a change in the widget. Notice that the widget's value is available to us, since it is what the wait_for_change future has as a result.
Run this function, and change the slider 10 times.
End of explanation
from functools import wraps
def yield_for_change(widget, attribute):
    """Pause a generator to wait for a widget change event.

    This is a decorator for a generator function which pauses the generator on yield
    until the given widget attribute changes. The new value of the attribute is
    sent to the generator and is the value of the yield.
    """
def f(iterator):
@wraps(iterator)
def inner():
i = iterator()
def next_i(change):
try:
i.send(change.new)
except StopIteration as e:
widget.unobserve(next_i, attribute)
widget.observe(next_i, attribute)
# start the generator
next(i)
return inner
return f
Explanation: Generator approach
If you can't take advantage of the async/await syntax, or you don't want to modify the event loop, you can also do this with generator functions.
First, we define a decorator which hooks a generator function up to widget change events.
End of explanation
from ipywidgets import IntSlider, VBox, HTML
slider2=IntSlider()
@yield_for_change(slider2, 'value')
def f():
for i in range(10):
print('did work %s'%i)
x = yield
print('generator function continued with value %s'%x)
f()
slider2
Explanation: Then we set up our generator.
End of explanation
import threading
from IPython.display import display
import ipywidgets as widgets
import time
progress = widgets.FloatProgress(value=0.0, min=0.0, max=1.0)
def work(progress):
total = 100
for i in range(total):
time.sleep(0.2)
progress.value = float(i+1)/total
thread = threading.Thread(target=work, args=(progress,))
display(progress)
thread.start()
Explanation: Modifications
The above two approaches both waited on widget change events, but can be modified to wait for other things, such as button event messages (as in a "Continue" button), etc.
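For instance, a minimal sketch (an assumption, not code from the original notebook) of pausing until a Button is clicked, using the same Future pattern:
import asyncio
import ipywidgets as widgets
from IPython.display import display
def wait_for_click(button):
    # Resolve the future the first time the button fires a click event
    future = asyncio.Future()
    def on_click(btn):
        if not future.done():
            future.set_result(btn)
        button.on_click(on_click, remove=True)
    button.on_click(on_click)
    return future
continue_button = widgets.Button(description='Continue')
display(continue_button)
async def wait_then_continue():
    await wait_for_click(continue_button)
    print('continued after the button was clicked')
asyncio.ensure_future(wait_then_continue())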
Updating a widget in the background
Sometimes you'd like to update a widget in the background, allowing the kernel to also process other execute requests. We can do this with threads. In the example below, the progress bar will update in the background and will allow the main kernel to do other computations.
End of explanation |
10,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DICS for power mapping
In this tutorial, we're going to simulate two signals originating from two
locations on the cortex. These signals will be sine waves, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll be using dynamic imaging of coherent sources (DICS) [1]_ to map out
spectral power along the cortex. Let's see if we can find our two simulated
sources.
Step1: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
Step3: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
Step4: Let's simulate two timeseries and plot some basic information about them.
Step5: Now we put the signals at two locations on the cortex. We construct a
Step6: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
Step7: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
Step8: We create an
Step9: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM
Step10: We will now compute the cortical power map at 10 Hz. using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The | Python Code:
# Author: Marijn van Vliet <[email protected]>
#
# License: BSD (3-clause)
Explanation: DICS for power mapping
In this tutorial, we're going to simulate two signals originating from two
locations on the cortex. These signals will be sine waves, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll be using dynamic imaging of coherent sources (DICS) [1]_ to map out
spectral power along the cortex. Let's see if we can find our two simulated
sources.
End of explanation
import os.path as op
import numpy as np
from scipy.signal import welch, coherence
from mayavi import mlab
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# Suppress irrelevant output
mne.set_log_level('ERROR')
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
mri_path = op.join(subjects_dir, 'sample')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
trans_fname = op.join(meg_path, 'sample_audvis_raw-trans.fif')
src_fname = op.join(mri_path, 'bem/sample-oct-6-src.fif')
bem_fname = op.join(mri_path, 'bem/sample-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
# Seed for the random number generator
rand = np.random.RandomState(42)
Explanation: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
End of explanation
sfreq = 50. # Sampling frequency of the generated signal
times = np.arange(10. * sfreq) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
    """Generate an oscillating signal.

    Returns
    -------
    signal : ndarray
        The generated signal.
    """
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
Explanation: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
End of explanation
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
Explanation: Let's simulate two timeseries and plot some basic information about them.
End of explanation
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
source_vert1 = 146374
source_vert2 = 33830
# The timeseries at each vertex: one part signal, one part silence
timeseries1 = np.hstack([signal1, np.zeros_like(signal1)])
timeseries2 = np.hstack([signal2, np.zeros_like(signal2)])
# Construct a SourceEstimate object that describes the signal at the cortical
# level.
stc = mne.SourceEstimate(
np.vstack((timeseries1, timeseries2)), # The two timeseries
vertices=[[source_vert1], [source_vert2]], # Their locations
tmin=0,
tstep=1. / sfreq,
subject='sample', # We use the brain model of the MNE-Sample dataset
)
Explanation: Now we put the signals at two locations on the cortex. We construct a
:class:mne.SourceEstimate object to store them in.
The timeseries will have a part where the signal is active and a part where
it is not. The techniques we'll be using in this tutorial depend on being
able to contrast data that contains the signal of interest versus data that
does not (i.e. it contains only noise).
End of explanation
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
Explanation: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
End of explanation
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# This is the raw object that will be used as a template for the simulation.
raw = mne.io.RawArray(np.zeros((info['nchan'], len(stc.times))), info)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
raw = simulate_raw(raw, stc, trans_fname, src_fname, bem_fname, cov=cov,
random_state=rand, iir_filter=[4, -4, 0.8])
Explanation: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
End of explanation
t0 = raw.first_samp # First sample in the data
t1 = t0 + n_times - 1 # Sample just before the second trial
epochs = mne.Epochs(
raw,
events=np.array([[t0, 0, 1], [t1, 0, 2]]),
event_id=dict(signal=1, noise=2),
tmin=0, tmax=10,
preload=True,
)
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))
epochs.plot(picks=picks)
Explanation: We create an :class:mne.Epochs object containing two trials: one with
both noise and signal and one with just noise
End of explanation
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('MNE-dSPM inverse (RMS)', height=0.9)
Explanation: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM:
End of explanation
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=True,
inversion='single', weight_norm=None)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=False,
inversion='matrix', weight_norm='unit-noise-gain')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
# Plot the DICS power maps for both approaches.
for approach, power in enumerate([power_approach1, power_approach2], 1):
brain = power.plot('sample', subjects_dir=subjects_dir, hemi='both',
figure=approach + 1, size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('DICS power map, approach %d' % approach, height=0.9)
Explanation: We will now compute the cortical power map at 10 Hz. using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The :func:make_dics function has many switches that offer precise control
over the way the filter weights are computed. Currently, there is no clear
consensus regarding the best approach. This is why we will demonstrate two
approaches here:
The approach as described in [2]_, which first normalizes the forward
solution and computes a vector beamformer.
The scalar beamforming approach based on [3]_, which uses weight
normalization instead of normalizing the forward solution.
End of explanation |
10,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing Clustering on Dataframe
We load the Dataframe using code found in stackoverflow("http
Step1: We can see that only 6 of the clusters have significant occupancies. So we are probably better off with a different number of clusters. Also, the cluster centers for the 4 clusters with very low occupancies have completely nonsensical coordinates, which can be seen by the diverging spikes on the above plot.
Testing Kmeans with different number of clusters
Step2: Let's see what is going on with 10 clusters
Step3: Some of the clusters always end up almost empty. Basically the Kmeans algorithm is failing. Still let us look at the cluster centers of the well populated clusters
Let's see what the spaceGroup numbers are of the cluster center points.
Step4: Let's plot the cluster centers in stoichiometric space. We don't plot clusters with low occupancy as those have garbage values. | Python Code:
import scipy.sparse
import numpy as np
import sklearn as skl
import pylab as plt
%matplotlib inline
def load_sparse_csr(filename):
loader = np.load(filename)
return scipy.sparse.csr_matrix(( loader['data'], loader['indices'], loader['indptr']),
shape = loader['shape'])
Dataframe=load_sparse_csr("Dataframe.npz")
from sklearn.cross_validation import train_test_split
train_Dat,test_Dat=train_test_split(Dataframe,test_size=0.2,random_state=42)
from sklearn.cluster import KMeans
clust=KMeans(n_clusters=10)
Dat=train_Dat.toarray()
clusters=clust.fit_predict(Dat)
print clusters[0:10]
cluster_freq=np.zeros(10,dtype=float)
for i in clusters:
cluster_freq[i]+=1
print map(int,cluster_freq)
plt.figure(figsize=(8,8))
for i in range(10):
plt.plot(np.arange(len(Dat[0])),clust.cluster_centers_[i],label="cluster"+str(i))
plt.xlabel("Feature Number")
plt.ylabel("value of feature for cluster center")
plt.legend()
Explanation: Testing Clustering on Dataframe
We load the Dataframe using code found in stackoverflow("http://stackoverflow.com/questions/8955448/save-load-scipy-sparse-csr-matrix-in-portable-data-format") and apply clustering algorithms on them
Testing Kmeans with 10 clusters
End of explanation
Dat=Dataframe.toarray()
print "Number of clusters Number of points in each cluster Inertia"
for nclt in range(2,20):
clust2=KMeans(n_clusters=nclt)
clusters2=clust2.fit_predict(Dat)
cluster_freq=np.zeros(nclt,dtype=float)
for i in clusters2:
cluster_freq[i]+=1
print nclt,map(int,cluster_freq),clust2.inertia_
Explanation: We can see that only 6 of the clusters have significant occupancies. So we are probably better off with a different number of clusters. Also, the cluster centers for the 4 clusters with very low occupancies have completely nonsensical coordinates, which can be seen by the diverging spikes on the above plot.
Testing Kmeans with different number of clusters
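A quick way to visualize this search (a sketch only, reusing the same data and libraries as the code above) is to collect each inertia value and draw an elbow plot:
inertias = []
cluster_range = range(2, 20)
for nclt in cluster_range:
    km = KMeans(n_clusters=nclt)
    km.fit(Dat)
    inertias.append(km.inertia_)
plt.figure(figsize=(8, 8))
plt.plot(list(cluster_range), inertias, marker='o')
plt.xlabel("Number of clusters")
plt.ylabel("Inertia")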
End of explanation
nclt=10
clust2=KMeans(n_clusters=nclt,n_init=50,random_state=42)
clusters2=clust2.fit_predict(Dat)
cluster_freq=np.zeros(nclt,dtype=float)
for i in clusters2:
cluster_freq[i]+=1
print nclt,map(int,cluster_freq),clust2.inertia_
Explanation: Let's see what is going on with 10 clusters
End of explanation
print clust2.cluster_centers_[:,105]
Explanation: Some of the clusters always end up almost empty. Basically the Kmeans algorithm is failing. Still let us look at the cluster centers of the well populated clusters
Let's see what the spaceGroup numbers are of the cluster center points.
End of explanation
plt.figure(figsize=(8,8))
num_x=104
for i in range(10):
if i not in [2,4,5,8]:
plt.plot(np.arange(num_x),clust2.cluster_centers_[i][0:num_x],label="cluster"+str(i))
plt.xlabel("Feature Number")
plt.ylabel("value of feature for cluster center")
plt.legend()
nclt=10
clust3=KMeans(n_clusters=nclt,n_init=50,random_state=42)
X_new=clust3.fit_transform(Dat)
print X_new[0]
#min_dist=zeros(len(X_new))
plt.figure(figsize=(10,10))
min_dist=np.amin(X_new,axis=1)
plt.plot(np.arange(len(X_new)),min_dist)
nclt=50
clust4=KMeans(n_clusters=nclt,n_init=10,init='random',random_state=42)
clusters4=clust4.fit_predict(Dat)
cluster_freq=np.zeros(nclt,dtype=float)
for i in clusters4:
cluster_freq[i]+=1
print nclt,map(int,cluster_freq),clust4.inertia_
print(clust4.cluster_centers_[:,105])
Explanation: Let's plot the cluster centers in stoichiometric space. We don't plot clusters with low occupancy as those have garbage values.
End of explanation |
10,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
High-performance Simulation with Kubernetes
This tutorial will describe how to set up high-performance simulation using a
TFF runtime running on Kubernetes. The model is the same as in the previous
tutorial, High-performance simulations with TFF. The only difference is that
here we use a worker pool instead of a local executor.
This tutorial refers to Google Cloud's GKE to create the Kubernetes cluster,
but all the steps after the cluster is created can be used with any Kubernetes
installation.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Defining the model to train
Step2: Setting up the remote executor
By default, TFF executes all computations locally. In this step we tell TFF to connect to the Kubernetes service we set up above. Be sure to copy the IP address of the service.
Step3: Running the training | Python Code:
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
Explanation: High-performance Simulation with Kubernetes
This tutorial will describe how to set up high-performance simulation using a
TFF runtime running on Kubernetes. The model is the same as in the previous
tutorial, High-performance simulations with TFF. The only difference is that
here we use a worker pool instead of a local executor.
This tutorial refers to Google Cloud's GKE to create the Kubernetes cluster,
but all the steps after the cluster is created can be used with any Kubernetes
installation.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/high_performance_simulation_with_kubernetes"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Launching the TFF workers on GKE
Note: This guide assumes you have an existing GCP project.
Creating a Kubernetes cluster
The following step only needs to be performed once. The cluster can be re-used for future workloads.
Follow the GKE instructions to create a container cluster. The rest of this guide assumes the cluster is named tff-cluster, but the actual name isn't important. Stop following the instructions when you get to "Step 5: Deploy your application".
Deploying the TFF worker application
The commands that interact with GCP can be run locally or in the Google Cloud Shell. We recommend the Google Cloud Shell since it doesn't require additional setup.
Run the following command to launch the Kubernetes application.
$ kubectl create deployment tff-workers --image=gcr.io/tensorflow-federated/remote-executor-service:latest
Add a load balancer for the application.
$ kubectl expose deployment tff-workers --type=LoadBalancer --port 80 --target-port 8000
Note: This exposes your deployment to the internet and is for demo purposes only. For production use, a firewall and authentication are strongly recommended.
Look up the IP address of the load balancer in the Google Cloud Console. You will need it later to connect the training loop to the worker app.
(Alternatively) Launch a Docker container locally
$ docker run --rm -p 8000:8000 gcr.io/tensorflow-federated/remote-executor-service:latest
Setting up the TFF environment
End of explanation
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
input_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=input_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for round in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('Round {}: loss {}, round time {}'.format(round, metrics.loss, t2 - t1))
Explanation: Defining the model to train
End of explanation
import grpc
ip_address = '0.0.0.0' #@param {type:"string"}
port = 80 #@param {type:"integer"}
channels = [grpc.insecure_channel(f'{ip_address}:{port}') for _ in range(10)]
tff.backends.native.set_remote_execution_context(channels)
Explanation: Setting up the remote executor
By default, TFF executes all computations locally. In this step we tell TFF to connect to the Kubernetes service we set up above. Be sure to copy the IP address of the service.
End of explanation
evaluate()
Explanation: Running the training
End of explanation |
10,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Analytics Demo
In this demo we will showcase the powerful capabilities of the Data Ninja services by constructing a Text Analytics pipeline from scratch. By combining open source tools and packages with Data Ninja we will show you how the semantic content from unstructured data can be easily obtained and leveraged in your analytics pipeline.
We will walk through the following steps
Step1: Collect and print the set of links
Step2: NoSQL integration
Next we show how to store the results from Data Ninja in a MongoDB collection
Step3: Accessing the Data Ninja services
Please sign-up at https
Step4: Fetch Smart Content
The Smart Content service analyzes the text to produce concepts, categories, keywords and sentiments in JSON output format. Here is an example
Step5: Store Smart Content output in MongoDB
Using the PyMongo client created earlier, we will show how to save the Data Ninja
output to a MongoDB collection.
Step6: Article Text Extraction
Built into our Smart Content Service is the ability to extract the main text from a web page using machine learning techniques. Here is an example
Step7: Fetch Smart Content for a set of links
Now we will extract the Smart Content for the list of URLs we obtained from Google News earlier. We will specifically prepare a list of extracted text for topic clustering in the next step.
Step8: Apache Spark Integration
We will take advantage of the many data formats Apache Spark natively supports. Recall that we stored the Concepts and Categories returned from Data Ninja in HDFS in the previous step. Here, we will show how to use PySpark to read the data into a Spark Dataframe and perform some simple aggregations to find the top-n Concepts and Categories.
Step9: Visualizing the results
We will convert the final results from Spark Dataframe to a Pandas Dataframe and visualize the trending Concepts and Categories. | Python Code:
from bs4 import BeautifulSoup
import requests
# Sites to exclude from our trending news URL collection
exclusions = ['google.com','youtube.com','wikipedia.org','blogspot.com']
prefix = 'http://'
def include_url(url):
for excl in exclusions:
if url.find(excl) > 0:
return False
return True
# Fetch the page content and extract the links
def fetch_links(url):
response = requests.get(prefix + url)
page_content = response.text
soup = BeautifulSoup(page_content, "lxml")
links = soup.find_all('a')
return links
Explanation: Text Analytics Demo
In this demo we will showcase the powerful capabilities of the Data Ninja services by constructing a Text Analytics pipeline from scratch. By combining open source tools and packages with Data Ninja we will show you how the semantic content from unstructured data can be easily obtained and leveraged in your analytics pipeline.
We will walk through the following steps:
1. Fetch trending URLs
First we will scrape a news aggregation website (in this case Google News, but the idea can be extended to other news sites as well) and obtain a list of URLs that point to valid news articles.
2. Extract article text from URLs
We will show how the Data Ninja text extraction service can be used to identify and extract the main text from an HTML page removing all the boilerplate content (such as running headers/footers, menus, ads).
3. Extract semantic content from article text
Once the text has been extracted from a webpage, we will then use another Data Ninja service to tag the article with entities and sentiment. Our content tagging system is capable of identifying the broader context of an article as we will show.
4. Text Analytics at Scale
We will use Apache Spark to show how to perform Text Analytics at scale. We will show how to store the Data Ninja JSON output to Hadoop HDFS and use Spark to perform some quick aggregations.
5. Visualization
The insights obtained from previous stages can be communicated using a graphical visualization library. We will show a new Data Ninja app called Newsbot Ninja that brings many of these ideas together: https://newsbot.dataninja.net/
6. Clustering to find topics (optional)
Semantic content extracted from the text can be utilized as features, and Machine Learning techniques can be used to derive insights from unstructured data. One simple example is to use a common text clustering technique like LDA to identify the topics from a collection of articles, as sketched below.
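A minimal sketch of that optional clustering step (an assumption, not code from this demo; it uses scikit-learn on the documents list of extracted article texts assembled later in the notebook):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
vectorizer = CountVectorizer(stop_words='english', max_features=2000)
doc_term = vectorizer.fit_transform(documents)
# Older scikit-learn releases name the n_components parameter n_topics
lda = LatentDirichletAllocation(n_components=5, random_state=42)
lda.fit(doc_term)
terms = vectorizer.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[:-11:-1]]
    print 'Topic %d: %s' % (topic_idx, ', '.join(top_terms))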
Let's get started!
Demo
We will scrape a news aggregator site (Google News) to collect a list of URLs that point to trending news articles. We need to remove links that are not likely to be news articles from popular media sites (such as Wikipedia or Youtube).
Helper methods for harvesting the links
End of explanation
import os
hdfs_location = '/Users/Projects/Current/Notebook'
linkset = set()
links = fetch_links('news.google.com')
# Collect the article links applying the URL filters
for link in links:
href = link.get('href')
if str(href).startswith(prefix) and include_url(str(href)):
linkset.add(link.get('href').strip())
# print str(href)
# Store links in HDFS
outfile = open(hdfs_location + os.path.sep + 'links' + os.path.sep + 'links.txt', "wb")
outfile.write("\n".join(linkset))
print 'Links harvested: ', str(len(linkset))
# Take 100 links for the demo
links100 = list(linkset)[:100]
Explanation: Collect and print the set of links
End of explanation
# Code to store data in MongoDB using PyMongo client
from pymongo import MongoClient
from bson import json_util
def connect_to_db():
client = MongoClient('mongodb://localhost:27017/')
db = client.dndemo
return db.dailynews
Explanation: NoSQL integration
Next we show how to store the results from Data Ninja in a MongoDB collection
End of explanation
import json
with open('mashape_key.txt', 'r') as keyfile:
mashape_key = keyfile.read().rstrip()
# Please add your own Data Ninja API Mashape key here -->
# mashape_key = <your-mashape-key>
smartcontent_url = 'https://smartcontent.dataninja.net/smartcontent/tag'
headers = {'Content-Type': 'application/json',
'Accept': 'application/json',
'X-Mashape-User': 'Newsbot',
'X-Mashape-Key': mashape_key}
# If you are using AWS API Gateway, please add the X-API-Key: <your-AWS-key>
# in place of 'X-Mashape-Key': mashape_key and use the following link to access
# the service: https://api.dataninja.net/smartcontent/tag
def fetch_smartcontent(link):
payload = {'url': link, 'max_size': 10}
response = requests.post(smartcontent_url, headers=headers, data=json.dumps(payload))
return response.json()
Explanation: Accessing the Data Ninja services
Please sign-up at https://market.mashape.com/dataninja/smart-content and obtain your free Data Ninja API key. We will access the Smart Content service to analyze the semantic content of each article obtained in the previous step. The Smart Content service is based on our pre-built knowledge graph database.
Alternatively, you can use the Amazon Web Services API Gateway to access our services (using your AWS account): https://auth.dataninja.net/cart
End of explanation
data = fetch_smartcontent('http://www.macrumors.com/roundup/macbook-pro/')
# Display the JSON output from Smart Content
print json.dumps(data, indent=4)
Explanation: Fetch Smart Content
The Smart Content service analyzes the text to produce concepts, categories, keywords and sentiments in JSON output format. Here is an example:
http://www.macrumors.com/roundup/macbook-pro/
End of explanation
def write_to_db(data, db):
return db.insert_one(json_util.loads(data)).inserted_id
def write_to_hdfs(data, location, filename):
outname = location + os.path.sep + filename
outfile = open(outname, 'w')
outfile.write(data)
outfile.close()
# return hdfs.write(data, location)
Explanation: Store Smart Content output in MongoDB
Using the PyMongo client created earlier, we will show how to save the Data Ninja
output to a MongoDB collection.
End of explanation
# Display the extracted text from Smart Content
print data['text']
Explanation: Article Text Extraction
Built into our Smart Content Service is the ability to extract the main text from a web page using machine learning techniques. Here is an example:
End of explanation
import json
# Call the Smart Content service and collect the article text into a list
documents = []
# Create a MongoDB connection
db = connect_to_db()
con_index = 0
cat_index = 0
for link in linkset:
data = fetch_smartcontent(link)
if 'text' in data and len(data['text']) > 100:
documents.append(data['text'])
doc_id = write_to_db(json.dumps(data), db)
if 'concept_list' in data:
for concept in data['concept_list']:
write_to_hdfs(json.dumps(concept), hdfs_location + os.path.sep + 'concepts',
'concept_' + str(con_index) + '.json')
con_index += 1
if 'category_list' in data:
for category in data['category_list']:
write_to_hdfs(json.dumps(category), hdfs_location + os.path.sep + 'categories',
'category_' + str(cat_index) + '.json')
cat_index += 1
print 'Documents in collection: ', str(len(documents))
Explanation: Fetch Smart Content for a set of links
Now we will extract the Smart Content for the list of URLs we obtained from Google News earlier. We will specifically prepare a list of extracted text for topic clustering in the next step.
End of explanation
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
conf = SparkConf().setAppName('dataninja-pyspark')
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
def dndemo_spark():
# A JSON dataset in HDFS.
# The path can be either a single text file or a directory storing text files.
concepts = sqlContext.read.json(hdfs_location + os.path.sep + 'concepts')
concepts.printSchema()
concepts.registerTempTable('concepts')
categories = sqlContext.read.json(hdfs_location + os.path.sep + 'categories')
categories.printSchema()
categories.registerTempTable('categories')
# Run this only once!
# You can only have once SparkContext and SqlContext
dndemo_spark()
count = sqlContext.sql('SELECT count(*) as num_concepts FROM concepts')
print count.show()
count = sqlContext.sql('SELECT count(*) as num_categories FROM categories')
print count.show()
trending_con = sqlContext.sql('SELECT concept_title, sum(score) as total_score ' +
'FROM concepts GROUP BY concept_title ORDER BY total_score desc')
trending_cat = sqlContext.sql('SELECT category_title, sum(score) as total_score ' +
'FROM categories GROUP BY category_title ORDER BY total_score desc')
# print trending.show()
Explanation: Apache Spark Integration
We will take advantage of the many data formats Apache Spark natively supports. Recall that we stored the Concepts and Categories returned from Data Ninja in HDFS in the previous step. Here, we will show how to use PySpark to read the data into a Spark Dataframe and perform some simple aggregations to find the top-n Concepts and Categories.
End of explanation
from IPython.display import display, HTML
from tabulate import tabulate
import pandas as pd
display(trending_con)
display(trending_cat)
df_con = trending_con.toPandas().head(n=40)
df_cat = trending_cat.toPandas().head(n=40)
print 'Top 40 trending concepts:'
print tabulate(df_con)
print 'Top 40 trending categories'
print tabulate(df_cat)
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
%matplotlib inline
df_con.plot(x='concept_title', y='total_score', kind='bar', title='Trending Concepts', color='green', figsize=(20,10))
df_cat.plot(x='category_title', y='total_score', kind='bar', title='Trending Categories', color='orange', figsize=(20,10))
Explanation: Visualizing the results
We will convert the final results from Spark Dataframe to a Pandas Dataframe and visualize the trending Concepts and Categories.
End of explanation |
10,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """Model with one feature:
        - Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append(passenger.Sex=='female')
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
        - Predict a passenger survived if they are female.
        - Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
if passenger.Sex=='female':
predictions.append(1)
elif passenger.Age < 10: #passed first if mean it is male (do not need to explicit)
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Embarked', [ "Sex == 'female'", 'Pclass == 3','Age < 20'])
Explanation: Answer: Predictions have an accuracy of 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
if passenger.Pclass ==3:
if passenger.Sex=='female' and passenger.Age<20 and passenger.Embarked!='S':
predictions.append(1)
else:
predictions.append(0)
elif passenger.Sex=='female':
predictions.append(1)
elif passenger.Age < 10:
if passenger.SibSp >= 3:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
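As an optional cross-check (a sketch only, not part of the project solution), a shallow decision tree fit on a few hand-encoded features can suggest similar splits automatically:
from sklearn.tree import DecisionTreeClassifier
# Encode a few features numerically; the feature choice here is illustrative only
encoded = pd.DataFrame({
    'is_female': (data['Sex'] == 'female').astype(int),
    'age': data['Age'].fillna(data['Age'].median()),
    'pclass': data['Pclass'],
    'sibsp': data['SibSp']})
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(encoded, outcomes)
# Evaluated on the same data it was fit on, so treat this only as a rough comparison
print accuracy_score(outcomes, pd.Series(tree.predict(encoded)))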
End of explanation |
10,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Raw data
Step1: The visualization module (
Step2: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once. To see all the interactive features, hit ? or click
help in the lower left corner of the browser window.
The channels are sorted by channel type by default. You can use the order
parameter of
Step3: We read the events from a file and passed it as a parameter when calling the
method. The events are plotted as vertical lines so you can see how they
align with the raw data.
We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
Step4: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of
Step5: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
Step6: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See
Step7: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data. | Python Code:
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'),
add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
Explanation: Visualize Raw data
End of explanation
raw.plot(block=True)
Explanation: The visualization module (:mod:mne.viz) contains all the plotting functions
that work in combination with MNE data structures. Usually the easiest way to
use them is to call a method of the data container. All of the plotting
method names start with plot. If you're using the IPython console, you can
just write raw.plot and ask the interpreter for suggestions with a
tab key.
To visually inspect your raw data, you can use the python equivalent of
mne_browse_raw.
End of explanation
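As a small aside — a hedged sketch, not part of the original tutorial — outside of IPython you can get a similar overview of the plotting helpers by introspecting the object:
# Sketch (assumption): list the plot* methods available on the Raw object.
print([name for name in dir(raw) if name.startswith('plot')])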
raw.plot(order='selection')
Explanation: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once. To see all the interactive features, hit ? or click
help in the lower left corner of the browser window.
The channels are sorted by channel type by default. You can use the order
parameter of :func:raw.plot <mne.io.Raw.plot> to group the channels in a
different way. order='selection' uses the same channel groups as MNE-C's
mne_browse_raw (see CACCJEJD). The selections are defined in
mne-python/mne/data/mne_analyze.sel and by modifying the channels there,
you can define your own selection groups. Notice that this also affects the
selections returned by :func:mne.read_selection. By default the selections
only work for Neuromag data, but order='position' tries to mimic this
behavior for any data with sensor positions available. The channels are
grouped by sensor positions to 8 evenly sized regions. Notice that for this
to work effectively, all the data channels in the channel array must be
present. The order parameter can also be passed as an array of ints
(picks) to plot the channels in the given order.
End of explanation
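To make the last point concrete — a hedged sketch with illustrative channel indices, not taken from the original — the order argument can also be given explicit picks:
# Sketch (assumption): plot six hypothetical channels in a fixed order; adjust the indices to taste.
picks = [0, 1, 2, 10, 11, 12]
raw.plot(order=picks, n_channels=len(picks))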
raw.plot_sensors(kind='3d', ch_type='mag', ch_groups='position')
Explanation: We read the events from a file and passed it as a parameter when calling the
method. The events are plotted as vertical lines so you can see how they
align with the raw data.
We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
End of explanation
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()
Explanation: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of
:func:raw.plot <mne.io.Raw.plot>. You can also pass a list of picks to
color any channel group with different colors.
Now let's add some ssp projectors to the raw data. Here we read them from a
file and plot them.
End of explanation
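For example — a hedged sketch with made-up indices — explicit channel groups can be passed instead of the 'position' heuristic:
# Sketch (assumption): color two hand-picked, equally sized channel groups; indices are illustrative.
custom_groups = [[0, 1, 2, 3], [4, 5, 6, 7]]
raw.plot_sensors(ch_groups=custom_groups)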
raw.plot()
Explanation: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
End of explanation
raw.plot_psd()
Explanation: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See :func:mne.io.Raw.del_proj to actually remove the
projectors.
Raw container also lets us easily plot the power spectra over the raw data.
See the API documentation for more info.
End of explanation
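If you do want the projections baked into the data — a hedged sketch, not part of the original flow — work on a copy and apply them explicitly:
# Sketch (assumption): permanently apply the active projectors to a copy of the data.
raw_applied = raw.copy().apply_proj()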
layout = mne.channels.read_layout('Vectorview-mag')
layout.plot()
raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
Explanation: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data.
End of explanation |
10,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example, we cluster our alanine dipeptide trajectory using the RMSD distance metric and hierarchical clustering.
Step1: Let's load up our trajectory. This is the trajectory that we generated in the "Running a simulation in OpenMM and analyzing the results with mdtraj" example. The first step is to build the rmsd cache, which precalculates some values for the RMSD computation.
Step2: Lets compute all pairwise rmsds between conformations.
Step3: scipy.cluster implements the average linkage algorithm (among others)
Step4: Lets plot the resulting dendrogram. | Python Code:
from __future__ import print_function
%matplotlib inline
import mdtraj as md
import numpy as np
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy
from scipy.spatial.distance import squareform
Explanation: In this example, we cluster our alanine dipeptide trajectory using the RMSD distance metric and hierarchical clustering.
End of explanation
traj = md.load('ala2.h5')
Explanation: Let's load up our trajectory. This is the trajectory that we generated in the "Running a simulation in OpenMM and analyzing the results with mdtraj" example. The first step is to build the rmsd cache, which precalculates some values for the RMSD computation.
End of explanation
distances = np.empty((traj.n_frames, traj.n_frames))
for i in range(traj.n_frames):
distances[i] = md.rmsd(traj, traj, i)
print('Max pairwise rmsd: %f nm' % np.max(distances))
Explanation: Let's compute all pairwise RMSDs between conformations.
End of explanation
# Clustering only accepts reduced form. Squareform's checks are too stringent
assert np.all(distances - distances.T < 1e-6)
reduced_distances = squareform(distances, checks=False)
linkage = scipy.cluster.hierarchy.linkage(reduced_distances, method='average')
Explanation: scipy.cluster implements the average linkage algorithm (among others)
End of explanation
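Before plotting, it is worth noting — a hedged aside that goes slightly beyond the original analysis — that the same linkage can also be cut into flat cluster labels:
# Sketch (assumption): extract 3 flat clusters from the average-linkage tree; the cluster count is illustrative.
labels = scipy.cluster.hierarchy.fcluster(linkage, t=3, criterion='maxclust')
print(labels[:10])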
plt.title('RMSD Average linkage hierarchical clustering')
_ = scipy.cluster.hierarchy.dendrogram(linkage, no_labels=True, count_sort='descendent')
Explanation: Let's plot the resulting dendrogram.
End of explanation |
10,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
===========================================================
Plot single trial activity, grouped by ROI and sorted by RT
===========================================================
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file, which contains an experiment with button press
responses to simple visual stimuli, is read in and response times are
calculated.
Regions of Interest are determined by the channel types (in 10/20 channel
notation, even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorting by response time.
Step1: Load EEGLAB example data (a small EEG dataset)
Step2: Create Epochs
Step3: Plot using
Step4: Plot using median | Python Code:
# Authors: Jona Sassenhagen <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.event import define_target_events
from mne.channels import make_1020_channel_selections
print(__doc__)
Explanation: ===========================================================
Plot single trial activity, grouped by ROI and sorted by RT
===========================================================
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file, which contains an experiment with button press
responses to simple visual stimuli, is read in and response times are
calculated.
Regions of Interest are determined by the channel types (in 10/20 channel
notation, even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorting by response time.
End of explanation
data_path = mne.datasets.testing.data_path()
fname = data_path + "/EEGLAB/test_raw.set"
event_id = {"rt": 1, "square": 2} # must be specified for str events
raw = mne.io.read_raw_eeglab(fname)
mapping = {
'EEG 000': 'Fpz', 'EEG 001': 'EOG1', 'EEG 002': 'F3', 'EEG 003': 'Fz',
'EEG 004': 'F4', 'EEG 005': 'EOG2', 'EEG 006': 'FC5', 'EEG 007': 'FC1',
'EEG 008': 'FC2', 'EEG 009': 'FC6', 'EEG 010': 'T7', 'EEG 011': 'C3',
'EEG 012': 'C4', 'EEG 013': 'Cz', 'EEG 014': 'T8', 'EEG 015': 'CP5',
'EEG 016': 'CP1', 'EEG 017': 'CP2', 'EEG 018': 'CP6', 'EEG 019': 'P7',
'EEG 020': 'P3', 'EEG 021': 'Pz', 'EEG 022': 'P4', 'EEG 023': 'P8',
'EEG 024': 'PO7', 'EEG 025': 'PO3', 'EEG 026': 'POz', 'EEG 027': 'PO4',
'EEG 028': 'PO8', 'EEG 029': 'O1', 'EEG 030': 'Oz', 'EEG 031': 'O2'
}
raw.rename_channels(mapping)
raw.set_channel_types({"EOG1": 'eog', "EOG2": 'eog'})
raw.set_montage('standard_1020')
events = mne.events_from_annotations(raw, event_id)[0]
Explanation: Load EEGLAB example data (a small EEG dataset)
End of explanation
# define target events:
# 1. find response times: distance between "square" and "rt" events
# 2. extract A. "square" events B. followed by a button press within 700 msec
tmax = .7
sfreq = raw.info["sfreq"]
reference_id, target_id = 2, 1
new_events, rts = define_target_events(events, reference_id, target_id, sfreq,
tmin=0., tmax=tmax, new_id=2)
epochs = mne.Epochs(raw, events=new_events, tmax=tmax + .1,
event_id={"square": 2})
Explanation: Create Epochs
End of explanation
# Parameters for plotting
order = rts.argsort() # sorting from fast to slow trials
selections = make_1020_channel_selections(epochs.info, midline="12z")
# The actual plots (GFP)
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='gfp',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
Explanation: Plot using :term:Global Field Power <GFP>
End of explanation
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='median',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
Explanation: Plot using median
End of explanation |
10,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is intended to show how to use pandas, and sql alchemy to upload data into DB2-switch and create geospatial coordinate and indexes.
Install using pip or any other package manager pandas, sqlalchemy and pg8000. The later one is the driver to connect to the db.
Step1: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function, you just need to fill in with the username, password and the dbname.
It allows you to create different engines to connect to serveral dbs.
Step2: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
If you are using csv files just change the commands to pd.read_csv() in this link you can find the documentation.
Before doing this I already checked that the data is properly organized, crate new cells to explore the data beforehand if needed
excel_file = 'substations_table.xlsx'
tab_name = 'sheet1'
schema_for_upload = 'geographic_data'
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=100)
Step3: Once the data is updated, it is possible to run the SQL commands to properly create geom columns in the tables, this can be done as follows. The ojective is to run an SQL querie like this
Step4: The function created the geom column, the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db, sometimes an index is not really neded and might need to be dropped.
Step5: The reason why we use postgis is to improve geospatial queries and provide a better data structure for geospatial operations. Many of the ST_ functions have improved performance when a geospatial index is created. The process implemented here comes from this workshop. This re-creates the process using python functions so that it can be easily replicated for many tables.
The query to create a geospatial index is as follows | Python Code:
import pandas as pd
from sqlalchemy import create_engine
Explanation: This notebook is intended to show how to use pandas and SQLAlchemy to upload data into DB2-switch and create geospatial coordinates and indexes.
Install pandas, sqlalchemy and pg8000 using pip or any other package manager. The latter is the driver used to connect to the db.
End of explanation
def connection(user, passwd, dbname, echo_i=False):
    # Build the connection string from the function arguments
    # (the original used the global variable `passw` instead of the `passwd` argument)
    str1 = ('postgresql+pg8000://' + user + ':' + passwd + '@switch-db2.erg.berkeley.edu:5432/'
            + dbname + '?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory')
    engine = create_engine(str1, echo=echo_i, isolation_level='AUTOCOMMIT')
    return engine
user = 'jdlara'
passw = 'Amadeus-2010'
dbname = 'apl_cec'
engine_db= connection(user,passw,dbname)
Explanation: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to build a connection string from the username and password. The code is a function, so you just need to fill in the username, password and the dbname.
It allows you to create different engines to connect to several dbs.
End of explanation
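As a quick sanity check — a hedged sketch, assuming the engine above was created successfully — a trivial query confirms the connection before any upload:
# Sketch (assumption): round-trip a trivial query through the new engine.
print(pd.read_sql_query('SELECT 1 AS ok;', engine_db))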
#excel_file = 'substations_table.xlsx'
#tab_name = 'sheet1'
csv_name = ['LEMMA_ADS_AllSpp_2016_Turbo_01252016.csv']
schema_for_upload = 'lemma2016'
for name in csv_name:
pd_data = pd.read_csv(name, encoding='UTF-8')
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=1000)
Explanation: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data is in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
If you are using csv files just change the commands to pd.read_csv(); in this link you can find the documentation.
Before doing this I already checked that the data is properly organized; create new cells to explore the data beforehand if needed
excel_file = 'substations_table.xlsx'
tab_name = 'sheet1'
schema_for_upload = 'geographic_data'
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=100)
End of explanation
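To confirm the upload landed — a hedged sketch; the schema and table name follow the loop above — you can read the row count back:
# Sketch (assumption): the table name is the csv file name used in the loop, quoted because it contains a dot.
check = pd.read_sql_query('SELECT count(*) FROM lemma2016."LEMMA_ADS_AllSpp_2016_Turbo_01252016.csv";', engine_db)
print(check)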
def create_geom(table, schema, engine, projection=4326):
    k = engine.connect()
    query = ('set search_path = "' + schema + '", public;')
    print(query)
    k.execute(query)
    query = ('alter table ' + table + ' drop column if exists geom;')
    print(query)
    k.execute(query)
    # Use the projection argument instead of a hardcoded SRID
    query = ('SELECT AddGeometryColumn (\'' + schema + '\',\'' + table + '\',\'geom\',' +
             str(projection) + ',\'POINT\',2);')
    print(query)
    k.execute(query)
    query = ('UPDATE ' + table + ' set geom = ST_SetSRID(st_makepoint(' + table + '.lon, ' +
             table + '.lat),' + str(projection) + ')::geometry;')
    k.execute(query)
    print(query)
    engine.dispose()
    return 'geom column added with SRID ' + str(projection)
table = 'substation_table'
schema = 'geographic_data'
create_geom(table,schema,engine_db)
Explanation: Once the data is uploaded, it is possible to run the SQL commands to properly create geom columns in the tables; this can be done as follows. The objective is to run an SQL query like this:
PGSQL
set search_path = SCHEMA, public;
alter table vTABLE drop column if exists geom;
SELECT AddGeometryColumn ('SCHEMA','vTABLE','geom',4326,'POINT',2);
UPDATE vTABLE set geom = ST_SetSRID(st_makepoint(vTABLE.lon, vTABLE.lat), 4326)::geometry;
where SCHEMA and vTABLE are the variable portions. Also note, that this query assumes that your columns with latitude and longitude are named lat and lon respectively; moreover, it also assumes that the coordinates are in the 4326 projection.
The following function runs the query for you, considering again that the data is clean and nice.
End of explanation
def create_pk(table, schema, column, engine):
    k = engine.connect()
    query = ('set search_path = "' + schema + '", public;')
    print(query)
    k.execute(query)
    query = ('alter table ' + table + ' ADD CONSTRAINT ' + table + '_pk PRIMARY KEY (' + column + ');')
    print(query)
    k.execute(query)
    engine.dispose()
    return 'Primary key created with column ' + column
col = ''
create_pk(table,schema,col,engine_db)
Explanation: The function created the geom column; the next step is to define a function to create the primary key in the db. Remember that the index from the data frame is included as an index in the db; sometimes an index is not really needed and might need to be dropped.
End of explanation
def create_gidx(table, schema, engine, column='geom'):
    k = engine.connect()
    query = ('set search_path = "' + schema + '", public;')
    k.execute(query)
    print(query)
    query = ('CREATE INDEX ' + table + '_gix ON ' + table + ' USING GIST (' + column + ');')
    k.execute(query)
    print(query)
    query = ('VACUUM ' + table + ';')
    k.execute(query)
    print(query)
    query = ('CLUSTER ' + table + ' USING ' + table + '_gix;')
    k.execute(query)
    print(query)
    query = ('ANALYZE ' + table + ';')
    k.execute(query)
    print(query)
    engine.dispose()
    return 'Spatial index ' + table + '_gix created and table clustered'
create_gidx(table,schema,engine_db)
Explanation: The reason why we use postgis is to improve geospatial queries and provide a better data structure for geospatial operations. Many of the ST_ functions have improved performance when a geospatial index is created. The process implemented here comes from this workshop. This re-creates the process using python functions so that it can be easily replicated for many tables.
The query to create a geospatial index is as follows:
SQL
set search_path = SCHEMA, public;
CREATE INDEX vTABLE_gix ON vTABLE USING GIST (geom);
This assumes that the column name with the geometry is named geom. If the process follows from the previous code, it will work ok.
The following step is to run a VACUUM; creating an index is not enough to allow PostgreSQL to use it effectively. VACUUMing must be performed whenever a new index is created or after a large number of UPDATEs, INSERTs or DELETEs are issued against a table.
SQL
VACUUM ANALYZE vTABLE;
The final step corresponds to CLUSTERING; this process re-orders the table according to the geospatial index we created. This ensures that records with similar attributes have a high likelihood of being found in the same page, reducing the number of pages that must be read into memory for some types of queries. When a query looks for nearest neighbors or for objects within a certain area, geometries that are near each other in space are near each other on disk. The query to perform this clustering is as follows:
CLUSTER vTABLE USING vTABLE_gix;
ANALYZE vTABLE;
End of explanation |
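To illustrate the payoff — a hedged sketch with a made-up reference point, not part of the original notebook — a typical query that benefits from the GiST index looks for geometries near a coordinate:
# Sketch (assumption): the point and 0.1-degree radius are illustrative; SRID 4326 means distances are in degrees.
query = ('SELECT * FROM geographic_data.substation_table '
         'WHERE ST_DWithin(geom, ST_SetSRID(ST_MakePoint(-122.27, 37.87), 4326), 0.1);')
nearby = pd.read_sql_query(query, engine_db)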
10,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transformada de Fourier con Sympy
En este notebook usaremos el módulo de matemática simbólica Sympy, que incluye la función fourier_transform.
Primero cargamos el módulo y activamos la opción para que los resultados se despliegen en forma más amigable en el notebook
Step1: A continuación, definimos algunas variables que usaremos, algunas de ellas definidas como reales, y otras como reales positivos
Step2: Gaussiana
Como primer ejemplo, definimos una gaussiana
Step3: La implementación de la función fourier_transform requiere ingresar como argumento la función a transformar, la variable respecto a la cual se calcula la transformada, y el valor de la frecuencia correspondiente. Esto significa que si queremos transformar la función $f(t)$ debemos ingresar $(f,t,\omega/2\pi)$ como argumento
Step4: Sinusoidal amorgtiuada con función gaussiana
Step5: Función escalonada
Step6: Esta forma debería ser equivalente a la calculada en clases. Podemos simplificarla un poco
Step7: Delta de Dirac | Python Code:
from sympy import *
init_printing()
Explanation: Transformada de Fourier con Sympy
En este notebook usaremos el módulo de matemática simbólica Sympy, que incluye la función fourier_transform.
Primero cargamos el módulo y activamos la opción para que los resultados se despliegen en forma más amigable en el notebook:
End of explanation
x, k, xi, t, omega = symbols('x, k, xi, t, omega', real=True)
a, k0, alpha = symbols('a, k_0, alpha', positive=True)
Explanation: A continuación, definimos algunas variables que usaremos, algunas de ellas definidas como reales, y otras como reales positivos
End of explanation
f1 = exp(-a*t**2)
f1
Explanation: Gaussiana
Como primer ejemplo, definimos una gaussiana
End of explanation
fourier_transform(f1,t,omega/(2*pi))
Explanation: La implementación de la función fourier_transform requiere ingresar como argumento la función a transformar, la variable respecto a la cual se calcula la transformada, y el valor de la frecuencia correspondiente. Esto significa que si queremos transformar la función $f(t)$ debemos ingresar $(f,t,\omega/2\pi)$ como argumento:
End of explanation
f2 = exp(-a*x**2)*cos(k0*x)
f2
fourier_transform(f2,x,k/(2*pi))
Explanation: Sinusoidal amorgtiuada con función gaussiana
End of explanation
f3 = Heaviside(x+a)*Heaviside(a-x)
f3
Tf3 = fourier_transform(f3,x,k/(2*pi))
Tf3
Explanation: Función escalonada
End of explanation
simplify(Tf3.as_real_imag())[0]
Explanation: Esta forma debería ser equivalente a la calculada en clases. Podemos simplificarla un poco:
End of explanation
f4 = DiracDelta(x-xi)
f4
fourier_transform(f4,x,k/(2*pi))
Explanation: Delta de Dirac
End of explanation |
10,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Manifold Learning
Step2: Let's call the function and visualize the resulting data
Step3: The output is two dimensional, and consists of points drawn in the shape of the word, "HELLO".
This data form will help us to see visually what these algorithms are doing.
Multidimensional Scaling (MDS)
Looking at data like this, we can see that the particular choice of x and y values of the dataset are not the most fundamental description of the data
Step4: This tells us that the x and y values are not necessarily fundamental to the relationships in the data.
What is fundamental, in this case, is the distance between each point and the other points in the dataset.
A common way to represent this is to use a distance matrix
Step5: As promised, for our N=1,000 points, we obtain a 1000×1000 matrix, which can be visualized as shown here
Step6: If we similarly construct a distance matrix for our rotated and translated data, we see that it is the same
Step7: This distance matrix gives us a representation of our data that is invariant to rotations and translations, but the visualization of the matrix above is not entirely intuitive.
In the representation shown in this figure, we have lost any visible sign of the interesting structure in the data
Step8: The MDS algorithm recovers one of the possible two-dimensional coordinate representations of our data, using only the $N\times N$ distance matrix describing the relationship between the data points.
MDS as Manifold Learning
The usefulness of this becomes more apparent when we consider the fact that distance matrices can be computed from data in any dimension.
So, for example, instead of simply rotating the data in the two-dimensional plane, we can project it into three dimensions using the following function (essentially a three-dimensional generalization of the rotation matrix used earlier)
Step9: Let's visualize these points to see what we're working with
Step10: We can now ask the MDS estimator to input this three-dimensional data, compute the distance matrix, and then determine the optimal two-dimensional embedding for this distance matrix.
The result recovers a representation of the original data
Step11: This is essentially the goal of a manifold learning estimator
Step12: This is again three-dimensional data, but we can see that the embedding is much more complicated
Step13: The fundamental relationships between the data points are still there, but this time the data has been transformed in a nonlinear way
Step14: The best two-dimensional linear embeding does not unwrap the S-curve, but instead throws out the original y-axis.
Nonlinear Manifolds
Step15: The result remains somewhat distorted compared to our original manifold, but captures the essential relationships in the data!
Some Thoughts on Manifold Methods
Though this story and motivation is compelling, in practice manifold learning techniques tend to be finicky enough that they are rarely used for anything more than simple qualitative visualization of high-dimensional data.
The following are some of the particular challenges of manifold learning, which all contrast poorly with PCA
Step16: We have 2,370 images, each with 2,914 pixels.
In other words, the images can be thought of as data points in a 2,914-dimensional space!
Let's quickly visualize several of these images to see what we're working with
Step17: We would like to plot a low-dimensional embedding of the 2,914-dimensional data to learn the fundamental relationships between the images.
One useful way to start is to compute a PCA, and examine the explained variance ratio, which will give us an idea of how many linear features are required to describe the data
Step18: We see that for this data, nearly 100 components are required to preserve 90% of the variance
Step19: The output is a two-dimensional projection of all the input images.
To get a better idea of what the projection tells us, let's define a function that will output image thumbnails at the locations of the projections
Step20: Calling this function now, we see the result
Step21: The result is interesting
Step22: This consists of 70,000 images, each with 784 pixels (i.e. the images are 28×28).
As before, we can take a look at the first few images
Step23: This gives us an idea of the variety of handwriting styles in the dataset.
Let's compute a manifold learning projection across the data.
For speed here, we'll only use 1/30 of the data, which is about ~2000 points
(because of the relatively poor scaling of manifold learning, I find that a few thousand samples is a good number to start with for relatively quick exploration before moving to a full calculation)
Step24: The resulting scatter plot shows some of the relationships between the data points, but is a bit crowded.
We can gain more insight by looking at just a single number at a time | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
<!--NAVIGATION-->
< In Depth: Principal Component Analysis | Contents | In Depth: k-Means Clustering >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.10-Manifold-Learning.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
In-Depth: Manifold Learning
We have seen how principal component analysis (PCA) can be used in the dimensionality reduction task—reducing the number of features of a dataset while maintaining the essential relationships between the points.
While PCA is flexible, fast, and easily interpretable, it does not perform so well when there are nonlinear relationships within the data; we will see some examples of these below.
To address this deficiency, we can turn to a class of methods known as manifold learning—a class of unsupervised estimators that seeks to describe datasets as low-dimensional manifolds embedded in high-dimensional spaces.
When you think of a manifold, I'd suggest imagining a sheet of paper: this is a two-dimensional object that lives in our familiar three-dimensional world, and can be bent or rolled in that two dimensions.
In the parlance of manifold learning, we can think of this sheet as a two-dimensional manifold embedded in three-dimensional space.
Rotating, re-orienting, or stretching the piece of paper in three-dimensional space doesn't change the flat geometry of the paper: such operations are akin to linear embeddings.
If you bend, curl, or crumple the paper, it is still a two-dimensional manifold, but the embedding into the three-dimensional space is no longer linear.
Manifold learning algorithms would seek to learn about the fundamental two-dimensional nature of the paper, even as it is contorted to fill the three-dimensional space.
Here we will demonstrate a number of manifold methods, going most deeply into a couple techniques: multidimensional scaling (MDS), locally linear embedding (LLE), and isometric mapping (IsoMap).
We begin with the standard imports:
End of explanation
def make_hello(N=1000, rseed=42):
# Make a plot with "HELLO" text; save as PNG
fig, ax = plt.subplots(figsize=(4, 1))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
ax.axis('off')
ax.text(0.5, 0.4, 'HELLO', va='center', ha='center', weight='bold', size=85)
fig.savefig('hello.png')
plt.close(fig)
# Open this PNG and draw random points from it
from matplotlib.image import imread
data = imread('hello.png')[::-1, :, 0].T
rng = np.random.RandomState(rseed)
X = rng.rand(4 * N, 2)
i, j = (X * data.shape).astype(int).T
mask = (data[i, j] < 1)
X = X[mask]
X[:, 0] *= (data.shape[0] / data.shape[1])
X = X[:N]
return X[np.argsort(X[:, 0])]
Explanation: Manifold Learning: "HELLO"
To make these concepts more clear, let's start by generating some two-dimensional data that we can use to define a manifold.
Here is a function that will create data in the shape of the word "HELLO":
End of explanation
X = make_hello(1000)
colorize = dict(c=X[:, 0], cmap=plt.cm.get_cmap('rainbow', 5))
plt.scatter(X[:, 0], X[:, 1], **colorize)
plt.axis('equal');
Explanation: Let's call the function and visualize the resulting data:
End of explanation
def rotate(X, angle):
theta = np.deg2rad(angle)
R = [[np.cos(theta), np.sin(theta)],
[-np.sin(theta), np.cos(theta)]]
return np.dot(X, R)
X2 = rotate(X, 20) + 5
plt.scatter(X2[:, 0], X2[:, 1], **colorize)
plt.axis('equal');
Explanation: The output is two dimensional, and consists of points drawn in the shape of the word, "HELLO".
This data form will help us to see visually what these algorithms are doing.
Multidimensional Scaling (MDS)
Looking at data like this, we can see that the particular choice of x and y values of the dataset are not the most fundamental description of the data: we can scale, shrink, or rotate the data, and the "HELLO" will still be apparent.
For example, if we use a rotation matrix to rotate the data, the x and y values change, but the data is still fundamentally the same:
End of explanation
from sklearn.metrics import pairwise_distances
D = pairwise_distances(X)
D.shape
Explanation: This tells us that the x and y values are not necessarily fundamental to the relationships in the data.
What is fundamental, in this case, is the distance between each point and the other points in the dataset.
A common way to represent this is to use a distance matrix: for $N$ points, we construct an $N \times N$ array such that entry $(i, j)$ contains the distance between point $i$ and point $j$.
Let's use Scikit-Learn's efficient pairwise_distances function to do this for our original data:
End of explanation
plt.imshow(D, zorder=2, cmap='Blues', interpolation='nearest')
plt.colorbar();
Explanation: As promised, for our N=1,000 points, we obtain a 1000×1000 matrix, which can be visualized as shown here:
End of explanation
D2 = pairwise_distances(X2)
np.allclose(D, D2)
Explanation: If we similarly construct a distance matrix for our rotated and translated data, we see that it is the same:
End of explanation
from sklearn.manifold import MDS
model = MDS(n_components=2, dissimilarity='precomputed', random_state=1)
out = model.fit_transform(D)
plt.scatter(out[:, 0], out[:, 1], **colorize)
plt.axis('equal');
Explanation: This distance matrix gives us a representation of our data that is invariant to rotations and translations, but the visualization of the matrix above is not entirely intuitive.
In the representation shown in this figure, we have lost any visible sign of the interesting structure in the data: the "HELLO" that we saw before.
Further, while computing this distance matrix from the (x, y) coordinates is straightforward, transforming the distances back into x and y coordinates is rather difficult.
This is exactly what the multidimensional scaling algorithm aims to do: given a distance matrix between points, it recovers a $D$-dimensional coordinate representation of the data.
Let's see how it works for our distance matrix, using the precomputed dissimilarity to specify that we are passing a distance matrix:
End of explanation
def random_projection(X, dimension=3, rseed=42):
assert dimension >= X.shape[1]
rng = np.random.RandomState(rseed)
C = rng.randn(dimension, dimension)
e, V = np.linalg.eigh(np.dot(C, C.T))
return np.dot(X, V[:X.shape[1]])
X3 = random_projection(X, 3)
X3.shape
Explanation: The MDS algorithm recovers one of the possible two-dimensional coordinate representations of our data, using only the $N\times N$ distance matrix describing the relationship between the data points.
MDS as Manifold Learning
The usefulness of this becomes more apparent when we consider the fact that distance matrices can be computed from data in any dimension.
So, for example, instead of simply rotating the data in the two-dimensional plane, we can project it into three dimensions using the following function (essentially a three-dimensional generalization of the rotation matrix used earlier):
End of explanation
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.scatter3D(X3[:, 0], X3[:, 1], X3[:, 2],
**colorize)
ax.view_init(azim=70, elev=50)
Explanation: Let's visualize these points to see what we're working with:
End of explanation
model = MDS(n_components=2, random_state=1)
out3 = model.fit_transform(X3)
plt.scatter(out3[:, 0], out3[:, 1], **colorize)
plt.axis('equal');
Explanation: We can now ask the MDS estimator to input this three-dimensional data, compute the distance matrix, and then determine the optimal two-dimensional embedding for this distance matrix.
The result recovers a representation of the original data:
End of explanation
def make_hello_s_curve(X):
t = (X[:, 0] - 2) * 0.75 * np.pi
x = np.sin(t)
y = X[:, 1]
z = np.sign(t) * (np.cos(t) - 1)
return np.vstack((x, y, z)).T
XS = make_hello_s_curve(X)
Explanation: This is essentially the goal of a manifold learning estimator: given high-dimensional embedded data, it seeks a low-dimensional representation of the data that preserves certain relationships within the data.
In the case of MDS, the quantity preserved is the distance between every pair of points.
Nonlinear Embeddings: Where MDS Fails
Our discussion thus far has considered linear embeddings, which essentially consist of rotations, translations, and scalings of data into higher-dimensional spaces.
Where MDS breaks down is when the embedding is nonlinear—that is, when it goes beyond this simple set of operations.
Consider the following embedding, which takes the input and contorts it into an "S" shape in three dimensions:
End of explanation
from mpl_toolkits import mplot3d
ax = plt.axes(projection='3d')
ax.scatter3D(XS[:, 0], XS[:, 1], XS[:, 2],
**colorize);
Explanation: This is again three-dimensional data, but we can see that the embedding is much more complicated:
End of explanation
from sklearn.manifold import MDS
model = MDS(n_components=2, random_state=2)
outS = model.fit_transform(XS)
plt.scatter(outS[:, 0], outS[:, 1], **colorize)
plt.axis('equal');
Explanation: The fundamental relationships between the data points are still there, but this time the data has been transformed in a nonlinear way: it has been wrapped-up into the shape of an "S."
If we try a simple MDS algorithm on this data, it is not able to "unwrap" this nonlinear embedding, and we lose track of the fundamental relationships in the embedded manifold:
End of explanation
from sklearn.manifold import LocallyLinearEmbedding
model = LocallyLinearEmbedding(n_neighbors=100, n_components=2, method='modified',
eigen_solver='dense')
out = model.fit_transform(XS)
fig, ax = plt.subplots()
ax.scatter(out[:, 0], out[:, 1], **colorize)
ax.set_ylim(0.15, -0.15);
Explanation: The best two-dimensional linear embeding does not unwrap the S-curve, but instead throws out the original y-axis.
Nonlinear Manifolds: Locally Linear Embedding
How can we move forward here? Stepping back, we can see that the source of the problem is that MDS tries to preserve distances between faraway points when constructing the embedding.
But what if we instead modified the algorithm such that it only preserves distances between nearby points?
The resulting embedding would be closer to what we want.
Visually, we can think of it as illustrated in this figure:
figure source in Appendix
Here each faint line represents a distance that should be preserved in the embedding.
On the left is a representation of the model used by MDS: it tries to preserve the distances between each pair of points in the dataset.
On the right is a representation of the model used by a manifold learning algorithm called locally linear embedding (LLE): rather than preserving all distances, it instead tries to preserve only the distances between neighboring points: in this case, the nearest 100 neighbors of each point.
Thinking about the left panel, we can see why MDS fails: there is no way to flatten this data while adequately preserving the length of every line drawn between the two points.
For the right panel, on the other hand, things look a bit more optimistic. We could imagine unrolling the data in a way that keeps the lengths of the lines approximately the same.
This is precisely what LLE does, through a global optimization of a cost function reflecting this logic.
LLE comes in a number of flavors; here we will use the modified LLE algorithm to recover the embedded two-dimensional manifold.
In general, modified LLE does better than other flavors of the algorithm at recovering well-defined manifolds with very little distortion:
End of explanation
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=30)
faces.data.shape
Explanation: The result remains somewhat distorted compared to our original manifold, but captures the essential relationships in the data!
Some Thoughts on Manifold Methods
Though this story and motivation is compelling, in practice manifold learning techniques tend to be finicky enough that they are rarely used for anything more than simple qualitative visualization of high-dimensional data.
The following are some of the particular challenges of manifold learning, which all contrast poorly with PCA:
In manifold learning, there is no good framework for handling missing data. In contrast, there are straightforward iterative approaches for missing data in PCA.
In manifold learning, the presence of noise in the data can "short-circuit" the manifold and drastically change the embedding. In contrast, PCA naturally filters noise from the most important components.
The manifold embedding result is generally highly dependent on the number of neighbors chosen, and there is generally no solid quantitative way to choose an optimal number of neighbors. In contrast, PCA does not involve such a choice.
In manifold learning, the globally optimal number of output dimensions is difficult to determine. In contrast, PCA lets you find the output dimension based on the explained variance.
In manifold learning, the meaning of the embedded dimensions is not always clear. In PCA, the principal components have a very clear meaning.
In manifold learning the computational expense of manifold methods scales as O[N^2] or O[N^3]. For PCA, there exist randomized approaches that are generally much faster (though see the megaman package for some more scalable implementations of manifold learning).
With all that on the table, the only clear advantage of manifold learning methods over PCA is their ability to preserve nonlinear relationships in the data; for that reason I tend to explore data with manifold methods only after first exploring them with PCA.
Scikit-Learn implements several common variants of manifold learning beyond Isomap and LLE: the Scikit-Learn documentation has a nice discussion and comparison of them.
Based on my own experience, I would give the following recommendations:
For toy problems such as the S-curve we saw before, locally linear embedding (LLE) and its variants (especially modified LLE), perform very well. This is implemented in sklearn.manifold.LocallyLinearEmbedding.
For high-dimensional data from real-world sources, LLE often produces poor results, and isometric mapping (IsoMap) seems to generally lead to more meaningful embeddings. This is implemented in sklearn.manifold.Isomap
For data that is highly clustered, t-distributed stochastic neighbor embedding (t-SNE) seems to work very well, though can be very slow compared to other methods. This is implemented in sklearn.manifold.TSNE.
If you're interested in getting a feel for how these work, I'd suggest running each of the methods on the data in this section.
Example: Isomap on Faces
One place manifold learning is often used is in understanding the relationship between high-dimensional data points.
A common case of high-dimensional data is images: for example, a set of images with 1,000 pixels each can be thought of as a collection of points in 1,000 dimensions – the brightness of each pixel in each image defines the coordinate in that dimension.
Here let's apply Isomap on some faces data.
We will use the Labeled Faces in the Wild dataset, which we previously saw in In-Depth: Support Vector Machines and In Depth: Principal Component Analysis.
Running this command will download the data and cache it in your home directory for later use:
End of explanation
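Acting on that suggestion — a hedged sketch reusing the S-curve data XS from earlier, not part of the book text — the three recommended estimators can be swapped behind a common fit_transform call:
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE
# Sketch (assumption): compare embeddings of the S-curve XS defined above; parameters are illustrative.
models = {'modified LLE': LocallyLinearEmbedding(n_neighbors=100, n_components=2,
                                                 method='modified', eigen_solver='dense'),
          'Isomap': Isomap(n_components=2),
          't-SNE': TSNE(n_components=2, init='random')}
embeddings = {name: m.fit_transform(XS) for name, m in models.items()}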
fig, ax = plt.subplots(4, 8, subplot_kw=dict(xticks=[], yticks=[]))
for i, axi in enumerate(ax.flat):
axi.imshow(faces.images[i], cmap='gray')
Explanation: We have 2,370 images, each with 2,914 pixels.
In other words, the images can be thought of as data points in a 2,914-dimensional space!
Let's quickly visualize several of these images to see what we're working with:
End of explanation
# from sklearn.decomposition import RandomizedPCA
from sklearn.decomposition import PCA as RandomizedPCA
model = RandomizedPCA(100).fit(faces.data)
plt.plot(np.cumsum(model.explained_variance_ratio_))
plt.xlabel('n components')
plt.ylabel('cumulative variance');
Explanation: We would like to plot a low-dimensional embedding of the 2,914-dimensional data to learn the fundamental relationships between the images.
One useful way to start is to compute a PCA, and examine the explained variance ratio, which will give us an idea of how many linear features are required to describe the data:
End of explanation
from sklearn.manifold import Isomap
model = Isomap(n_components=2)
proj = model.fit_transform(faces.data)
proj.shape
Explanation: We see that for this data, nearly 100 components are required to preserve 90% of the variance: this tells us that the data is intrinsically very high dimensional—it can't be described linearly with just a few components.
When this is the case, nonlinear manifold embeddings like LLE and Isomap can be helpful.
We can compute an Isomap embedding on these faces using the same pattern shown before:
End of explanation
from matplotlib import offsetbox
def plot_components(data, model, images=None, ax=None,
thumb_frac=0.05, cmap='gray'):
ax = ax or plt.gca()
proj = model.fit_transform(data)
ax.plot(proj[:, 0], proj[:, 1], '.k')
if images is not None:
min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2
shown_images = np.array([2 * proj.max(0)])
for i in range(data.shape[0]):
dist = np.sum((proj[i] - shown_images) ** 2, 1)
if np.min(dist) < min_dist_2:
# don't show points that are too close
continue
shown_images = np.vstack([shown_images, proj[i]])
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(images[i], cmap=cmap),
proj[i])
ax.add_artist(imagebox)
Explanation: The output is a two-dimensional projection of all the input images.
To get a better idea of what the projection tells us, let's define a function that will output image thumbnails at the locations of the projections:
End of explanation
fig, ax = plt.subplots(figsize=(10, 10))
plot_components(faces.data,
model=Isomap(n_components=2),
images=faces.images[:, ::2, ::2])
Explanation: Calling this function now, we see the result:
End of explanation
# DEPRECATED
# from sklearn.datasets import fetch_mldata
# mnist = fetch_mldata('MNIST original')
# MLDATA SERVER IS DOWN
# mldata.org seems to still be down
# DOES NOT WORK SOMETIMES
# this step might fail based on permssions and network access
# if in Docker, specify --network=host
# if in docker-compose specify version 3.4 and build -> network: host
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784')
mdata=mnist.data.astype(int)
mtarget=mnist.target.astype(int)
# ALTERNATIVE SOLUTION
# https://stackoverflow.com/a/54986237/2080425
# from scipy.io import loadmat
# mnist = loadmat('data/mnist-original.mat')
# mdata=mnist['data'].T
# mtarget=mnist['label'].T
# mdata.shape
Explanation: The result is interesting: the first two Isomap dimensions seem to describe global image features: the overall darkness or lightness of the image from left to right, and the general orientation of the face from bottom to top.
This gives us a nice visual indication of some of the fundamental features in our data.
We could then go on to classify this data (perhaps using manifold features as inputs to the classification algorithm) as we did in In-Depth: Support Vector Machines.
Example: Visualizing Structure in Digits
As another example of using manifold learning for visualization, let's take a look at the MNIST handwritten digits set.
This data is similar to the digits we saw in In-Depth: Decision Trees and Random Forests, but with many more pixels per image.
It can be downloaded from http://mldata.org/ with the Scikit-Learn utility:
End of explanation
fig, ax = plt.subplots(6, 8, subplot_kw=dict(xticks=[], yticks=[]))
for i, axi in enumerate(ax.flat):
axi.imshow(mdata[1250 * i].reshape(28, 28), cmap='gray_r')
Explanation: This consists of 70,000 images, each with 784 pixels (i.e. the images are 28×28).
As before, we can take a look at the first few images:
End of explanation
# use only 1/30 of the data: full dataset takes a long time!
data = mdata[::30]
target = mtarget[::30]
model = Isomap(n_components=2)
proj = model.fit_transform(data)
plt.scatter(proj[:, 0], proj[:, 1], c=target, cmap=plt.cm.get_cmap('jet', 10))
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5);
Explanation: This gives us an idea of the variety of handwriting styles in the dataset.
Let's compute a manifold learning projection across the data.
For speed here, we'll only use 1/30 of the data, which is about ~2000 points
(because of the relatively poor scaling of manifold learning, I find that a few thousand samples is a good number to start with for relatively quick exploration before moving to a full calculation):
End of explanation
from sklearn.manifold import Isomap
# Choose 1/4 of the "1" digits to project
# data = mdata[[mtarget == 1]][::4]
data = mdata[np.array([mtarget == 1]).flatten()][::4]
fig, ax = plt.subplots(figsize=(10, 10))
model = Isomap(n_neighbors=5, n_components=2, eigen_solver='dense')
plot_components(data, model, images=data.reshape((-1, 28, 28)),
ax=ax, thumb_frac=0.05, cmap='gray_r')
Explanation: The resulting scatter plot shows some of the relationships between the data points, but is a bit crowded.
We can gain more insight by looking at just a single number at a time:
End of explanation |
10,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corriger une position en fonction du couple mesuré sur un moteur
Compétences visées par cette activité
Step1: Using primitives
Step2: Building a primitive to record the motor data over time
Step3: Testing this primitive to monitor the present_load
Step4: The present_load in the simulator is very noisy, so we will build an object that computes a moving average of it
Step5: The Load_primitive object records the smoothed present_load
Step6: We can test our corrected present_load
Step7: The filtered present_load can therefore be used to correct positions, since it is sufficiently stable over time
Step8: The Load_primitive_correction object corrects the motor position based on the filtered present_load
Step9: The correction keeps the position vertical. A fast response time is not possible because smoothing the raw present_load takes about half a second, which delays the reaction to any change of position by the same amount.
Step10: On the real robot
Step11: The present_load is obviously quite different from the simulator's. The problem now lies in the stability of the measurement. Even with no change, the present_load makes relatively large jumps. Moreover, the sensitivity is poor, since a large change in torque is needed to make the reading vary. As with the simulator, it is possible to average the signal to filter out the variations, but this costs reaction time.
Step12: The robot reacts well, correcting its position to reach vertical, but the PID values must be kept very low, which makes the reaction time even longer. On the real robot, the motor's own movement produces large torque variations in the same direction as the correction. If the correction is too strong the robot enters growing oscillations (over-correcting increases the torque in the opposite direction, producing an ever stronger correction until the robot locks up at a limit angle). | Python Code:
from poppy.creatures import Poppy4dofArmMini
mini_dof = Poppy4dofArmMini(simulator='vrep')
import time
%pylab inline
Explanation: Correct a position based on the torque measured on a motor
Skills targeted by this activity:
Set up a PID control loop driven by the torque measured on a motor. By trying to reach the minimum torque, we are in fact trying to keep a vertical position.
Use Pypot primitives.
More information about the activity (video, results, comments...):
http://www.poppy-prof.fr/?page_id=4&id=76
On the robot in the V-REP simulator:
Instantiation of the robot and notebook setup for plotting
End of explanation
from pypot.primitive import Primitive
Explanation: Using primitives:
End of explanation
class Graph_primitive(Primitive):
def __init__(self,robot,motors_name):
self.robot = robot
Primitive.__init__(self, robot)
self.fake_motors={}
for name in motors_name:
self.fake_motors[name] = getattr(self.robot, name)
self.position={}
self.load={}
self.speed={}
def setup(self):
for m in self.fake_motors.keys():
self.position[m] = []
self.speed[m] = []
self.load[m] = []
self.python_time=[]
self.pypot_time=[]
def run(self):
t0 = time.time()
while not self.should_stop():
for m in self.fake_motors.keys():
self.position[m].append(self.fake_motors[m].present_position)
self.load[m].append(self.fake_motors[m].present_load)
self.speed[m].append(self.fake_motors[m].present_speed)
self.python_time.append(time.time()-t0)
self.pypot_time.append(self.elapsed_time)
time.sleep(0.02)
Explanation: Building a primitive to record the motor data over time:
End of explanation
graph = Graph_primitive(mini_dof,['m3',])
graph.start()
mini_dof.m2.goto_position(90,2,wait=True)
mini_dof.m2.goto_position(-90,4,wait=True)
mini_dof.m2.goto_position(90,4,wait=True)
mini_dof.m2.goto_position(0,2,wait=True)
graph.stop()
figure(1)
plot(graph.pypot_time,graph.load['m3'])
xlabel('elapsed time seconds')
ylabel('load')
title ('Record load function of elapsed time')
Explanation: Testing this primitive to monitor the present_load:
End of explanation
class Filter_PID:
def __init__(self,nb_record=10,goal=0):
self.nb_record = nb_record
self.goal = goal
self.record_pos=[0 for i in range(nb_record)]
self.filter_load=[[0,0] for i in range(nb_record*10)]
def add(self,l,t):
self.record_pos.append(l-self.goal)
del self.record_pos[0]
self.filter_load.append([t,sum(self.record_pos)/len(self.record_pos)])
del self.filter_load[0]
def integrate(self,nb_values=10):
x=[i[0] for i in self.filter_load]
y=[i[1] for i in self.filter_load]
return np.trapz(y[-nb_values-1:-1],x[-nb_values-1:-1])
def derivative(self):
if self.filter_load[-1][0] != 0:
return ((self.filter_load[-1][1]+self.filter_load[-2][1])/2-(self.filter_load[-10][1]+self.filter_load[-9][1])/2)/((self.filter_load[-1][0]+self.filter_load[-2][0])/2-(self.filter_load[-10][0]+self.filter_load[-9][0])/2)
else :
return 0
def last(self):
return self.filter_load[-1][1]
Explanation: The present_load in the simulator is very noisy. We therefore build an object that computes a sliding average of it:
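For example, here is a quick sanity check of the sliding average outside of the robot loop (the load values and timestamps below are made up for illustration):
f = Filter_PID(nb_record=3)
for t, load_value in enumerate([0.9, 1.1, 1.0, 1.2]):
    f.add(load_value, t)
print(f.last())        # mean of the last nb_record samples
print(f.derivative())  # slope estimated on the smoothed signal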
End of explanation
class Load_primitive(Primitive):
def __init__(self,robot,motors_name):
self.robot = robot
Primitive.__init__(self, robot)
self.fake_motors = getattr(self.robot, motors_name)
def setup(self):
self.load=Filter_PID(30)
self.filter_load=[]
self.python_time=[]
self.pypot_time=[]
def run(self):
t0 = time.time()
while not self.should_stop():
self.load.add(self.fake_motors.present_load,self.elapsed_time)
self.filter_load.append(self.load.last())
self.python_time.append(time.time()-t0)
self.pypot_time.append(self.elapsed_time)
time.sleep(0.02)
Explanation: The Load_primitive object records the smoothed present_load:
End of explanation
load = Load_primitive(mini_dof,'m3')
load.start()
mini_dof.m2.goto_position(90,2,wait=True)
mini_dof.m2.goto_position(-90,4,wait=True)
mini_dof.m2.goto_position(90,4,wait=True)
mini_dof.m2.goto_position(0,2,wait=True)
load.stop()
figure(1)
plot(load.pypot_time,load.filter_load)
xlabel('elapsed time seconds')
ylabel('Filter load')
title ('Record filter load function of elapsed time')
Explanation: We can now test our corrected (smoothed) present_load:
End of explanation
load.start()
mini_dof.m2.goto_position(90,2,wait=True)
time.sleep(2)
mini_dof.m3.goto_position(-90,2,wait=True)
time.sleep(2)
load.stop()
mini_dof.m2.goto_position(0,2)
mini_dof.m3.goto_position(0,2,wait=True)
figure(1)
plot(load.pypot_time,load.filter_load)
xlabel('elapsed time seconds')
ylabel('Filter load')
title ('Record filter load function of elapsed time')
Explanation: The filtered present_load can therefore be used to correct positions, since it is sufficiently stable over time:
End of explanation
class Load_primitive_correction(Primitive):
def __init__(self,robot,motors_name):
self.robot = robot
Primitive.__init__(self, robot)
self.fake_motors = getattr(self.robot, motors_name)
self.P=4
self.I=1
self.D=1
def setup(self):
self.load=Filter_PID(40)
self.filter_load=[]
self.python_time=[]
self.pypot_time=[]
self.P_record=[]
self.I_record=[]
self.D_record=[]
self.correction=[]
self.angle=self.fake_motors.present_position
self.angle_record=[]
def run(self):
t0 = time.time()
while not self.should_stop():
            # add the present_load to the Filter_PID object
self.load.add(self.fake_motors.present_load,self.elapsed_time)
            # append the sliding average computed by Filter_PID
self.filter_load.append(self.load.last())
            # record the elapsed time
self.python_time.append(time.time()-t0)
self.pypot_time.append(self.elapsed_time)
            # compute the correction from the deviation relative to the target
P = self.P*self.load.last()
I = self.I*self.load.integrate(30)
D = self.D*self.load.derivative()
self.P_record.append(P)
self.I_record.append(I)
self.D_record.append(D)
correction_value = P + D + I
self.correction.append(correction_value)
self.angle_record.append(self.angle)
self.angle = self.fake_motors.present_position - correction_value
self.fake_motors.goal_position = self.angle
            # pause that sets the loop frequency
t1 = self.elapsed_time
while self.elapsed_time-t1<0.02:
time.sleep(0.001)
load = Load_primitive_correction(mini_dof,'m3')
load.start()
mini_dof.m2.goto_position(90,4,wait=True)
mini_dof.m2.goto_position(0,4,wait=True)
time.sleep(1)
load.stop()
mini_dof.m3.goto_position(0,2)
figure(1)
plot(load.pypot_time,load.P_record,'b-')
plot(load.pypot_time,load.I_record,'r-')
plot(load.pypot_time,load.D_record,'g-')
plot(load.pypot_time,load.correction,'c-')
twinx()
plot(load.pypot_time,load.angle_record,'b-')
xlabel('elapsed time seconds')
ylabel('Correction')
title ('Record filter load function of elapsed time')
figure(2)
plot(load.pypot_time,load.correction,'c-')
plot(load.pypot_time,load.filter_load,'g-')
xlabel('elapsed time seconds')
ylabel('Filter load')
title ('Record filter load function of elapsed time')
Explanation: The Load_primitive_correction object corrects the motor position according to the filtered present_load:
End of explanation
mini_dof.reset_simulation()
Explanation: The correction keeps the motor in the vertical position. A fast response time is not possible, because smoothing the raw present_load takes about half a second, which delays the reaction by as much whenever the position changes.
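A rough order-of-magnitude check of that delay (assuming the 0.02 s loop period and the 40-sample window used by the primitive above):
loop_period = 0.02   # seconds between two load samples
window_size = 40     # nb_record passed to Filter_PID
print('approximate smoothing delay: %.1f s' % (window_size * loop_period / 2))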
End of explanation
mini_dof = Poppy4dofArmMini()
for m in mini_dof.motors:
m.compliant = False
m.goto_position(0,1)
graph = Graph_primitive(mini_dof,['m3',])
graph.start()
time.sleep(1)
mini_dof.m2.goto_position(90,1,wait=True)
time.sleep(1)
mini_dof.m2.goto_position(0,1,wait=True)
time.sleep(1)
graph.stop()
figure(1)
plot(graph.pypot_time,graph.load['m3'])
xlabel('elapsed time seconds')
ylabel('load')
title ('Record load function of elapsed time')
Explanation: On the real robot
End of explanation
class Load_primitive_correction(Primitive):
def __init__(self,robot,motors_name):
self.robot = robot
Primitive.__init__(self, robot)
self.fake_motors = getattr(self.robot, motors_name)
self.P=0.5
self.I=0.3
self.D=0
def setup(self):
self.load=Filter_PID(10)
self.filter_load=[]
self.python_time=[]
self.pypot_time=[]
self.P_record=[]
self.I_record=[]
self.D_record=[]
self.correction=[]
self.angle=self.fake_motors.present_position
self.angle_record=[]
def run(self):
t0 = time.time()
while not self.should_stop():
            # add the present_load to the Filter_PID object
self.load.add(self.fake_motors.present_load,self.elapsed_time)
            # append the sliding average computed by Filter_PID
self.filter_load.append(self.load.last())
            # record the elapsed time
self.python_time.append(time.time()-t0)
self.pypot_time.append(self.elapsed_time)
            # compute the correction from the deviation relative to the target
P = self.P*self.load.last()
I = self.I*self.load.integrate(30)
D = self.D*self.load.derivative()
self.P_record.append(P)
self.I_record.append(I)
self.D_record.append(D)
correction_value = P + D + I
self.correction.append(correction_value)
self.angle_record.append(self.angle)
self.angle = self.fake_motors.present_position - correction_value
self.fake_motors.goal_position = self.angle
            # pause that sets the loop frequency
t1 = self.elapsed_time
while self.elapsed_time-t1<0.02:
time.sleep(0.001)
load = Load_primitive_correction(mini_dof,'m3')
load.start()
load.stop()
time.sleep(3)
load.start()
mini_dof.m2.goto_position(90,3,wait=True)
time.sleep(4)
mini_dof.m2.goto_position(0,3,wait=True)
time.sleep(4)
load.stop()
mini_dof.m3.goto_position(0,2)
figure(1)
plot(load.pypot_time,load.P_record,'b-')
plot(load.pypot_time,load.I_record,'r-')
plot(load.pypot_time,load.D_record,'g-')
plot(load.pypot_time,load.correction,'c-')
twinx()
plot(load.pypot_time,load.angle_record,'b-')
xlabel('elapsed time seconds')
ylabel('Correction')
title ('Record filter load function of elapsed time')
figure(2)
plot(load.pypot_time,load.correction,'c-')
plot(load.pypot_time,load.filter_load,'g-')
xlabel('elapsed time seconds')
ylabel('Filter load')
title ('Record filter load function of elapsed time')
Explanation: The present_load is obviously quite different from the one in the simulator. The problem now lies in the stability of the measurement. Even when nothing changes, the present_load makes relatively large jumps. Moreover, the sensitivity is poor: a significant change in torque is needed before the measurement varies at all. As with the simulator, a moving average can filter out these variations, but it costs reaction time.
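One way to put a number on those jumps is to look at the spread of the raw recording made just above (this assumes the Graph_primitive run on the real robot has been executed):
raw_load = np.array(graph.load['m3'])
print('mean raw load: %.3f' % raw_load.mean())
print('std of raw load: %.3f' % raw_load.std())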
End of explanation
for m in mini_dof.motors :
m.goto_position(0,1)
mini_dof.close()
Explanation: The robot reacts correctly, adjusting its position to reach the vertical, but the PID gains must be kept very low, which makes the reaction time even longer. Indeed, on the real robot the motion of the motor itself causes large torque variations in the same direction as the correction. If the correction is too strong, the robot enters growing oscillations (over-correcting increases the torque in the opposite direction, which triggers an ever stronger correction until the robot locks itself at a limit angle).
End of explanation |
10,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the the following features
Step1: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.
Before proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.
Introductory Notation
The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. Note
Step2: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step3: With the nuclides we defined, we will now create a material for the homogeneous medium.
Step4: With our material, we can now create a Materials object that can be exported to an actual XML file.
Step5: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
Step6: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
Step7: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
Step8: We now must create a geometry that is assigned a root universe and export it to XML.
Step9: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step10: Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step11: We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class
Step12: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.
Step13: The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
Step14: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step15: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
Step16: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.
The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
Let's first inspect our total cross section by printing it to the screen.
Step18: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
Step19: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
Step20: The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.
Step21: Comparing MGXS with Tally Arithmetic
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.
Step22: Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.
Step23: Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity. | Python Code:
from IPython.display import Image
Image(filename='images/mgxs.png', width=350)
Explanation: This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:
General equations for scalar-flux averaged multi-group cross sections
Creation of multi-group cross sections for an infinite homogeneous medium
Use of tally arithmetic to manipulate multi-group cross sections
Introduction to Multi-Group Cross Sections (MGXS)
Many Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
Explanation: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.
Before proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.
Introductory Notation
The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.
Spatial and Energy Discretization
The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies, from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation discretizes this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in {1, 2, ..., G}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.
Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.
General Scalar-Flux Weighted MGXS
The multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\sigma_{n,x,k,g}$ as follows:
$$\sigma_{n,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.
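As a purely illustrative sketch (not part of the original notebook), the ratio behind a single flux-weighted cross section could be assembled by hand from two tallies that share the same cell and energy filters; the filter class names below assume a reasonably recent OpenMC Python API:
import openmc
energy_filter = openmc.EnergyFilter([0.0, 0.625, 20.0e6])  # a 2-group structure
cell_filter = openmc.CellFilter([1])                       # spatial zone k = cell 1
rxn_rate = openmc.Tally(name='absorption rate')            # numerator: reaction rate
rxn_rate.filters = [cell_filter, energy_filter]
rxn_rate.scores = ['absorption']
flux = openmc.Tally(name='flux')                           # denominator: scalar flux
flux.filters = [cell_filter, energy_filter]
flux.scores = ['flux']
# After a run, the group-wise cross section is the derived tally rxn_rate / flux,
# which is exactly the kind of ratio the openmc.mgxs module automates.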
Multi-Group Scattering Matrices
The general multi-group cross section $\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes.
We denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\sigma_{n,s}(\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\sigma_{n,s,k,g \to g'}$ as follows:
$$\sigma_{n,s,k,g\rightarrow g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,s}(\mathbf{r},E'\rightarrow E'')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.
Multi-Group Fission Spectrum
The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$.
Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n}(\mathbf{r},E)$. The multi-group fission spectrum $\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$.
Similar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\chi_{n,k,g}$ as follows:
$$\chi_{n,k,g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n}(\mathbf{r},E'\rightarrow E'')\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$
The fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.
This concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.
Generate Input Files
End of explanation
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.028999667)
inf_medium.add_nuclide(o16, 0.01450188)
inf_medium.add_nuclide(u235, 0.000114142)
inf_medium.add_nuclide(u238, 0.006886019)
inf_medium.add_nuclide(zr90, 0.002116053)
Explanation: With the nuclides we defined, we will now create a material for the homogeneous medium.
End of explanation
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.export_to_xml()
Explanation: With our material, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
Explanation: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
End of explanation
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
Explanation: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
End of explanation
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
End of explanation
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches (50 batches in total, matching the batches value set above), each with 2500 particles.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
Explanation: Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
# Instantiate a few different sections
total = mgxs.TotalXS(domain=cell, groups=groups)
absorption = mgxs.AbsorptionXS(domain=cell, groups=groups)
scattering = mgxs.ScatterXS(domain=cell, groups=groups)
Explanation: We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:
TotalXS
TransportXS
NuTransportXS
AbsorptionXS
CaptureXS
FissionXS
NuFissionXS
KappaFissionXS
ScatterXS
NuScatterXS
ScatterMatrixXS
NuScatterMatrixXS
Chi
ChiPrompt
InverseVelocity
PromptNuFissionXS
These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.
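For instance, any of the other subclasses listed above is constructed with the same call signature (a sketch; these extra objects are not used further in this notebook):
chi = mgxs.Chi(domain=cell, groups=groups)
nu_fission = mgxs.NuFissionXS(domain=cell, groups=groups)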
End of explanation
absorption.tallies
Explanation: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.
End of explanation
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Add total tallies to the tallies file
tallies_file += total.tallies.values()
# Add absorption tallies to the tallies file
tallies_file += absorption.tallies.values()
# Add scattering tallies to the tallies file
tallies_file += scattering.tallies.values()
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
# Load the tallies from the statepoint into each MGXS object
total.load_from_statepoint(sp)
absorption.load_from_statepoint(sp)
scattering.load_from_statepoint(sp)
Explanation: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.
The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
total.print_xs()
Explanation: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
Let's first inspect our total cross section by printing it to the screen.
End of explanation
df = scattering.get_pandas_dataframe()
df.head(10)
Explanation: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
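Besides the DataFrame view, the underlying group-wise values can also be pulled out directly as a NumPy array with the get_xs method of the same MGXS API (shown here with its default arguments):
total.get_xs()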
End of explanation
absorption.export_xs_data(filename='absorption-xs', format='excel')
Explanation: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
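The same method can target the other formats mentioned above, for example (a sketch using the same export_xs_data call):
total.export_xs_data(filename='total-xs', format='csv')
scattering.export_xs_data(filename='scatter-xs', format='latex')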
End of explanation
total.build_hdf5_store(filename='mgxs', append=True)
absorption.build_hdf5_store(filename='mgxs', append=True)
scattering.build_hdf5_store(filename='mgxs', append=True)
Explanation: The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.
End of explanation
# Use tally arithmetic to compute the difference between the total, absorption and scattering
difference = total.xs_tally - absorption.xs_tally - scattering.xs_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
difference.get_pandas_dataframe()
Explanation: Comparing MGXS with Tally Arithmetic
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.
End of explanation
# Use tally arithmetic to compute the absorption-to-total MGXS ratio
absorption_to_total = absorption.xs_tally / total.xs_tally
# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
absorption_to_total.get_pandas_dataframe()
# Use tally arithmetic to compute the scattering-to-total MGXS ratio
scattering_to_total = scattering.xs_tally / total.xs_tally
# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
scattering_to_total.get_pandas_dataframe()
Explanation: Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.
End of explanation
# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity
sum_ratio = absorption_to_total + scattering_to_total
# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
sum_ratio.get_pandas_dataframe()
Explanation: Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.
End of explanation |
10,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seaborn crash course
<img src='https
Step1: Load sample dataset
Seaborn comes with a number of example dataset. Let us load the restaurant tipping dataset
Step2: Distribution plots
One of the first things to do is to examine the distribution of the data.
Step3: It is often useful to overlay the mean and SD with the histograms, below is one way to do it.
Step4: You can change things like bin, kde flags to customize the plot
Step5: Plotting dist of 2 variables
Seaborn can very easily attach a histogram to a scatter plot to show the data distribution
Step6: You can use the kind argument to change the scatter to hex, reg etc
Annotating correlation coefficient and p value if unavailable
<blockquote>
<b>Note
Step7: Plotting dist of all variables
You can get a quick overview of the pairwise relationships between your columns using pairplot. Specifying a categorical variable to the hue argument will shade it accordingly
Step8: Plotting data frequency
Histograms provide data frequency. The distplot gives histograms. Another way to viz this is using rugplot. Rug plots are similar to the trading frequency bars we see in stock ticker time series datasets. | Python Code:
import seaborn as sns
%matplotlib inline
Explanation: Seaborn crash course
<img src='https://seaborn.pydata.org/_images/hexbin_marginals.png' height="150" width="150">
Seaborn is an amazing data and statistical visualization library that is built using matplotlib. It has good defaults and is very easy to use.
ToC
- load sample dataset
- Distribution plots
- Plotting dist of 2 variables
- annotating with correlation coefficient if unavailable
- Plotting dist of all variables
- Plotting data frequency
End of explanation
tips = sns.load_dataset('tips')
tips.head(5)
Explanation: Load sample dataset
Seaborn comes with a number of example datasets. Let us load the restaurant tipping dataset.
End of explanation
#find dist of total bills
sns.distplot(tips['total_bill'])
Explanation: Distribution plots
One of the first things to do is to examine the distribution of the data.
End of explanation
tips.total_bill.mean()
tips_mean = tips.total_bill.mean()
tips_sd = tips.total_bill.std()
ax = sns.distplot(tips['total_bill'])
# plot mean in black
ax.axvline(x=tips_mean, color='black', linestyle='dashed')
# plot mean +- 1SD in red, dotted
ax.axvline(x=tips_mean + tips_sd, color='red', linestyle='dotted')
ax.axvline(x=tips_mean - tips_sd, color='red', linestyle='dotted')
# title
ax.set_title('$\mu = {}$ | $\sigma = {}$'.format(round(tips_mean, 2), round(tips_sd, 2)))
Explanation: It is often useful to overlay the mean and SD with the histograms, below is one way to do it.
End of explanation
sns.distplot(tips['total_bill'], kde=False, bins=35)
Explanation: You can change things like bin, kde flags to customize the plot
End of explanation
sns.jointplot(x=tips['total_bill'], y=tips['tip'])
Explanation: Plotting dist of 2 variables
Seaborn can very easily attach a histogram to a scatter plot to show the data distribution
End of explanation
# Example from a different project: hurricanes_ipl is an external DataFrame, not the tips data
from scipy import stats
jgrid = sns.jointplot(x='min_season', y='max_wind_merged', data=hurricanes_ipl,
                      kind='reg', joint_kws={'line_kws':{'color':'green'}}, height=7, space=0.5)
j = jgrid.annotate(stats.pearsonr)
j = jgrid.ax_joint.set_title('Does hurricane wind speed increase over time?')
sns.jointplot(x=tips['total_bill'], y=tips['tip'], kind='hex')
sns.jointplot(x=tips['total_bill'], y=tips['tip'], kind='reg') #regression
Explanation: You can use the kind argument to change the scatter to hex, reg etc
Annotating the correlation coefficient and p-value when they are not shown by default
<blockquote>
<b>Note:</b> In recent versions, seaborn does not print the correlation coefficient and its p-value. To get this, use annotation as shown below:
</blockquote>
End of explanation
sns.pairplot(tips, hue='sex')
Explanation: Plotting dist of all variables
You can get a quick overview of the pairwise relationships between your columns using pairplot. Specifying a categorical variable to the hue argument will shade it accordingly
End of explanation
sns.rugplot(tips['total_bill'])
Explanation: Plotting data frequency
Histograms provide data frequency. The distplot gives histograms. Another way to visualize this is with a rugplot. Rug plots are similar to the trading frequency bars we see in stock ticker time series datasets.
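A rug can also be combined with a kernel density estimate on the same axes to relate the two views (a small sketch using the same tips data):
sns.kdeplot(tips['total_bill'])
sns.rugplot(tips['total_bill'])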
End of explanation |
10,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart kernel from Kernel -> Restart Kernel, or running the following
Step2: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Authenticate your Google Cloud account
If you are using Vertex AI Workbench notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step5: Import libraries and define constants
Step6: Terminology and Concept
Featurestore Data model
Vertex AI Feature Store organizes data with the following 3 important hierarchical concepts
Step7: Use the function call below to retrieve a Featurestore and check that it has been created.
Step8: Create Entity Type
Entity types can be created within the Featurestore class. Below, create the Users entity type and Movies entity type. A process log will be printed out.
Step9: To retrieve an entity type or check that it has been created use the get_entity_type or list_entity_types methods on the Featurestore object.
Step10: Create Feature
Features can be created within each entity type. Add defining features to the Users entity type and Movies entity type by using the create_feature method.
Step11: Use the list_features method to list all the features of a given entity type.
Step12: Search created features
While the list_features method allows you to easily view all features of a single
entity type, the search method in the Feature class searches across all featurestores and entity types in a given location (such as us-central1), and returns a list of features. This can help you discover features that were created by someone else.
You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels. Some search examples are shown below.
Search for all features within a featurestore with the code snippet below.
Step13: Now, narrow down the search to features that are of type DOUBLE.
Step14: Or, limit the search results to features with specific keywords in their ID and type.
Step15: Import Feature Values
You need to import feature values before you can use them for online/offline serving. In this step, you learn how to import feature values by ingesting the values from GCS (Google Cloud Storage). You can also import feature values from BigQuery or a Pandas dataframe.
Source Data Format and Layout
BigQuery table/Avro/CSV are supported as input data types. No matter what format you are using, each imported entity must have an ID; also, each entity can optionally have a timestamp, specifying when the feature values are generated. This notebook uses Avro as an input, located at this public bucket. The Avro schemas are as follows
Step16: Import feature values for Movies entity type
Similarly, import feature values for the Movies entity type into the featurestore.
Step17: Get online predictions from your model
Online serving
lets you serve feature values for small batches of entities. It's designed for latency-sensitive service, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch.
Read one entity per request
With the Python SDK, it is easy to read feature values of one entity. By default, the SDK will return the latest value of each feature, meaning the feature values with the most recent timestamp.
To read feature values, specify the entity type ID and features to read. By default all the features of an entity type will be selected. The response will output and display the selected entity type ID and the selected feature values as a Pandas dataframe.
Step18: Read multiple entities per request
To read feature values from multiple entities, specify the different entity type IDs. By default all the features of an entity type will be selected. Note that fetching only a small number of entities is recommended when using this SDK due to its latency-sensitive nature.
Step19: Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases.
Get batch predictions from your model
Batch serving is used to fetch a large batch of feature values for high-throughput, and is typically used for training a model or batch prediction. In this section, you learn how to prepare for training examples by using the Featurestore's batch serve function.
Use case
The task is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input
Step20: Batch Read Feature Values
Assemble the request which specify the following info
Step21: After the LRO finishes, you should be able to see the result in the BigQuery console, as a new table under the BigQuery dataset created earlier.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore and the BigQuery dataset by running the code below | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/feature_store/sdk-feature-store.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/feature_store/sdk-feature-store.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
This notebook introduces Vertex AI Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale.
This notebook assumes that you understand basic Google Cloud concepts such as Project, Storage and Vertex AI. Some machine learning knowledge is also helpful but not required.
Dataset
This notebook uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online.
Objective
In this notebook, you will learn how to:
* Create featurestore, entity type, and feature resources.
* Import your features into Vertex AI Feature Store.
* Serve online prediction requests using the imported features.
* Access imported features in offline jobs, such as training jobs.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Cloud BigQuery
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Vertex AI Workbench notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook dashboard.
Before you begin
Install additional packages
For this notebook, you need the Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart kernel from Kernel -> Restart Kernel, or running the following:
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
print("Project ID: ", PROJECT_ID)
Explanation: Otherwise, set your project ID here.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
from google.cloud import aiplatform
from google.cloud.aiplatform import Feature, Featurestore
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
FEATURESTORE_ID = "movie_prediction"
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movie_prediction.csv"
ONLINE_STORE_FIXED_NODE_COUNT = 1
aiplatform.init(project=PROJECT_ID, location=REGION)
Explanation: Import libraries and define constants
End of explanation
fs = Featurestore.create(
featurestore_id=FEATURESTORE_ID,
online_store_fixed_node_count=ONLINE_STORE_FIXED_NODE_COUNT,
project=PROJECT_ID,
location=REGION,
sync=True,
)
Explanation: Terminology and Concept
Featurestore Data model
Vertex AI Feature Store organizes data with the following 3 important hierarchical concepts:
Featurestore -> Entity type -> Feature
* Featurestore: the place to store your features
* Entity type: under a Featurestore, an Entity type describes an object to be modeled, real one or virtual one.
* Feature: under an Entity type, a Feature describes an attribute of the Entity type
In the movie prediction example, you will create a featurestore called movie_prediction. This store has 2 entity types: users and movies. The users entity type has the age, gender, and liked_genres features. The movies entity type has the titles, genres, and average rating features.
Create Featurestore and Define Schemas
Create Featurestore
The method to create a Featurestore returns a
long-running operation (LRO). An LRO starts an asynchronous job. LROs are returned for other API
methods too, such as updating or deleting a featurestore. Running the code cell will create a featurestore and print the process log.
End of explanation
fs = Featurestore(
featurestore_name=FEATURESTORE_ID,
project=PROJECT_ID,
location=REGION,
)
print(fs.gca_resource)
Explanation: Use the function call below to retrieve a Featurestore and check that it has been created.
End of explanation
# Create users entity type
users_entity_type = fs.create_entity_type(
entity_type_id="users",
description="Users entity",
)
# Create movies entity type
movies_entity_type = fs.create_entity_type(
entity_type_id="movies",
description="Movies entity",
)
Explanation: Create Entity Type
Entity types can be created within the Featurestore class. Below, create the Users entity type and Movies entity type. A process log will be printed out.
End of explanation
users_entity_type = fs.get_entity_type(entity_type_id="users")
movies_entity_type = fs.get_entity_type(entity_type_id="movies")
print(users_entity_type)
print(movies_entity_type)
fs.list_entity_types()
Explanation: To retrieve an entity type or check that it has been created use the get_entity_type or list_entity_types methods on the Featurestore object.
End of explanation
# to create features one at a time use
users_feature_age = users_entity_type.create_feature(
feature_id="age",
value_type="INT64",
description="User age",
)
users_feature_gender = users_entity_type.create_feature(
feature_id="gender",
value_type="STRING",
description="User gender",
)
users_feature_liked_genres = users_entity_type.create_feature(
feature_id="liked_genres",
value_type="STRING_ARRAY",
description="An array of genres this user liked",
)
Explanation: Create Feature
Features can be created within each entity type. Add defining features to the Users entity type and Movies entity type by using the create_feature method.
End of explanation
users_entity_type.list_features()
movies_feature_configs = {
"title": {
"value_type": "STRING",
"description": "The title of the movie",
},
"genres": {
"value_type": "STRING",
"description": "The genre of the movie",
},
"average_rating": {
"value_type": "DOUBLE",
"description": "The average rating for the movie, range is [1.0-5.0]",
},
}
movie_features = movies_entity_type.batch_create_features(
feature_configs=movies_feature_configs,
)
Explanation: Use the list_features method to list all the features of a given entity type.
End of explanation
my_features = Feature.search(query="featurestore_id={}".format(FEATURESTORE_ID))
my_features
Explanation: Search created features
While the list_features method allows you to easily view all features of a single
entity type, the search method in the Feature class searches across all featurestores and entity types in a given location (such as us-central1), and returns a list of features. This can help you discover features that were created by someone else.
You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels. Some search examples are shown below.
Search for all features within a featurestore with the code snippet below.
End of explanation
double_features = Feature.search(
query="value_type=DOUBLE AND featurestore_id={}".format(FEATURESTORE_ID)
)
double_features[0].gca_resource
Explanation: Now, narrow down the search to features that are of type DOUBLE.
End of explanation
title_features = Feature.search(
query="feature_id:title AND value_type=STRING AND featurestore_id={}".format(
FEATURESTORE_ID
)
)
title_features[0].gca_resource
Explanation: Or, limit the search results to features with specific keywords in their ID and type.
End of explanation
USERS_FEATURES_IDS = [feature.name for feature in users_entity_type.list_features()]
USERS_FEATURE_TIME = "update_time"
USERS_ENTITY_ID_FIELD = "user_id"
USERS_GCS_SOURCE_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/users.avro"
)
GCS_SOURCE_TYPE = "avro"
WORKER_COUNT = 1
print(USERS_FEATURES_IDS)
users_entity_type.ingest_from_gcs(
feature_ids=USERS_FEATURES_IDS,
feature_time=USERS_FEATURE_TIME,
entity_id_field=USERS_ENTITY_ID_FIELD,
gcs_source_uris=USERS_GCS_SOURCE_URI,
gcs_source_type=GCS_SOURCE_TYPE,
worker_count=WORKER_COUNT,
sync=False,
)
Explanation: Import Feature Values
You need to import feature values before you can use them for online/offline serving. In this step, you learn how to import feature values by ingesting the values from GCS (Google Cloud Storage). You can also import feature values from BigQuery or a Pandas dataframe.
Source Data Format and Layout
BigQuery table/Avro/CSV are supported as input data types. No matter what format you are using, each imported entity must have an ID; also, each entity can optionally have a timestamp, specifying when the feature values are generated. This notebook uses Avro as an input, located at this public bucket. The Avro schemas are as follows:
For the Users entity:
schema = {
"type": "record",
"name": "User",
"fields": [
{
"name":"user_id",
"type":["null","string"]
},
{
"name":"age",
"type":["null","long"]
},
{
"name":"gender",
"type":["null","string"]
},
{
"name":"liked_genres",
"type":{"type":"array","items":"string"}
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
For the Movies entity:
schema = {
"type": "record",
"name": "Movie",
"fields": [
{
"name":"movie_id",
"type":["null","string"]
},
{
"name":"average_rating",
"type":["null","double"]
},
{
"name":"title",
"type":["null","string"]
},
{
"name":"genres",
"type":["null","string"]
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
Import feature values for Users entity type
When importing, specify the following in your request:
IDs of the features to import
Data source URI
Data source format: BigQuery Table/Avro/CSV
End of explanation
MOVIES_FEATURES_IDS = [feature.name for feature in movies_entity_type.list_features()]
MOVIES_FEATURE_TIME = "update_time"
MOVIES_ENTITY_ID_FIELD = "movie_id"
MOVIES_GCS_SOURCE_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movies.avro"
)
GCS_SOURCE_TYPE = "avro"
WORKER_COUNT = 1
print(MOVIES_FEATURES_IDS)
movies_entity_type.ingest_from_gcs(
feature_ids=MOVIES_FEATURES_IDS,
feature_time=MOVIES_FEATURE_TIME,
entity_id_field=MOVIES_ENTITY_ID_FIELD,
gcs_source_uris=MOVIES_GCS_SOURCE_URI,
gcs_source_type=GCS_SOURCE_TYPE,
worker_count=WORKER_COUNT,
sync=False,
)
Explanation: Import feature values for Movies entity type
Similarly, import feature values for the Movies entity type into the featurestore.
End of explanation
users_entity_type.read(entity_ids="bob")
movies_entity_type.read(entity_ids="movie_01", feature_ids="title")
Explanation: Get online predictions from your model
Online serving
lets you serve feature values for small batches of entities. It's designed for latency-sensitive service, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch.
Read one entity per request
With the Python SDK, it is easy to read feature values of one entity. By default, the SDK will return the latest value of each feature, meaning the feature values with the most recent timestamp.
To read feature values, specify the entity type ID and features to read. By default all the features of an entity type will be selected. The response will output and display the selected entity type ID and the selected feature values as a Pandas dataframe.
End of explanation
users_entity_type.read(entity_ids=["bob", "alice"])
movies_entity_type.read(
entity_ids=["movie_02", "movie_03", "movie_04"], feature_ids=["title, genres"]
)
Explanation: Read multiple entities per request
To read feature values for multiple entities, pass a list of entity IDs. By default, all the features of an entity type will be selected. Note that fetching only a small number of entities is recommended when using this SDK, since online reads are latency-sensitive.
End of explanation
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
client = bigquery.Client(project=PROJECT_ID)
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
Explanation: Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases.
Get batch predictions from your model
Batch serving is used to fetch a large batch of feature values for high-throughput, and is typically used for training a model or batch prediction. In this section, you learn how to prepare for training examples by using the Featurestore's batch serve function.
Use case
The task is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:
Features: you already imported into the featurestore.
Labels: the ground-truth data recorded that user X has watched movie Y.
To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Vertex AI Feature Store according to the entity IDs and timestamps in Table 1. In this example, the age, gender and liked_genres features from users and
the title, genres and average_rating features from movies are chosen to train the model. Note that only positive examples are shown in these two tables, i.e., you can imagine there is a label column whose values are all True.
batch_serve_to_bq takes Table 1 as
input, joins all required feature values from the featurestore, and returns Table 2 for training.
<h4 align="center">Table 1. Ground-truth data</h4>
users | movies | timestamp
----- | -------- | --------------------
alice | Cinema Paradiso | 2019-11-01T00:00:00Z
bob | The Shining | 2019-11-15T18:09:43Z
... | ... | ...
<h4 align="center">Table 2. Expected training data generated by using batch serve</h4>
timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | title | genres | average_rating
-------------------- | ----------------- | --------------- | ---------------- | -------------------- | - | -------- | --------- | -----
2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | movie_02 | The Shining | Horror | 4.8
2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | movie_03 | Cinema Paradiso | Romance | 4.5
... | ... | ... | ... | ... | ... | ... | ... | ...
Why timestamp?
Note that there is a timestamp column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency.
For example, the 2nd row of Table 2 indicates that user alice watched movie Cinema Paradiso on 2019-11-01T00:00:00Z. The featurestore keeps feature values for all timestamps but fetches feature values only at the given timestamp during batch serving. On that day, Alice might have been 54 years old, but now Alice might be 56; featurestore returns age=54 as Alice's age, instead of age=56, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres.
Create BigQuery dataset for output
You need a BigQuery dataset to host the output data in us-central1. Input the name of the dataset you want to create and specify the name of the table you want to store the output created later. These will be used in the next section.
Make sure that the table name does NOT already exist.
End of explanation
SERVING_FEATURE_IDS = {
    # to choose all the features of an entity type, use "entity_type_id": ["*"]
"users": ["age", "gender", "liked_genres"],
"movies": ["title", "average_rating", "genres"],
}
fs.batch_serve_to_bq(
bq_destination_output_uri=DESTINATION_TABLE_URI,
serving_feature_ids=SERVING_FEATURE_IDS,
read_instances_uri=INPUT_CSV_FILE,
)
Explanation: Batch Read Feature Values
Assemble the request, which specifies the following info:
Where is the label data, i.e., Table 1.
Which features are read, i.e., the column names in Table 2.
The output is stored in the BigQuery table.
End of explanation
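An optional check (a sketch, not part of the original notebook): once the batch serve operation finishes, you can preview a few rows of the generated training table with the BigQuery client created earlier.
preview_sql = "SELECT * FROM `{}.{}.{}` LIMIT 5".format(
    PROJECT_ID, DESTINATION_DATA_SET, DESTINATION_TABLE_NAME
)
for row in client.query(preview_sql).result():  # uses the existing `client` object
    print(dict(row))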
# Delete Featurestore
fs.delete(force=True)
# Delete BigQuery dataset
client = bigquery.Client(project=PROJECT_ID)
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
Explanation: After the LRO finishes, you should be able to see the result in the BigQuery console, as a new table under the BigQuery dataset created earlier.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore and the BigQuery dataset by running the code below:
End of explanation |
10,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving for current in R-L-C circuit
Step1: RLC circuit is governed by the following formulas
Step2: RLC circuit fed with dc voltage
For the dc case the supply voltage is constant, so its derivative over time is equal to zero
Step3: RLC Circuit with sinusoidal voltage
Now the voltage and its time derivative (no longer zero in this case) are given as
#importing all required modules
#important otherwise pop-up window may not work
%matplotlib inline
import numpy as np
import scipy as sp
from scipy.integrate import odeint, ode, romb, cumtrapz
import matplotlib as mpl
import matplotlib.pyplot as plt
from math import *
import seaborn
from IPython.display import Image
#bokeh
from bokeh.plotting import figure, output_file, output_notebook, show
Explanation: Solving for current in R-L-C circuit
End of explanation
# RMS value of voltage
u = 230
#time vector
t = np.linspace(0,0.4, 1000)
#frequency & angular frequency
f = 50
omega = 2 * pi * f
#Resistance (values to consider: 5 and 10 Ohms)
R = 5
#Inductance
L = 0.1
XL = 2*pi*f*L
#Capacitance (worth considering 0.01 - overdamped, two-inertia response - or 0.001 - oscillatory)
C = 0.001
XC = 1/(omega*C)
#Phase angle
phi=atan((XL-XC)/R)
#closing angle [rad]
alpha = 0
XL, XC
Explanation: The RLC circuit is governed by the following formulas:
<img src="formula_1.png">
Rearranging the last equation gives the second-order ODE that is integrated below (in terms of the current: L*d²i/dt² + R*di/dt + i/C = du/dt, which is what the di(y, t) functions implement):
<img src="formula_2.png">
This is the starting point for the analysis, which covers two cases:
an RLC circuit fed with a dc voltage
an RLC circuit fed with an ac voltage
First we need to define the auxiliary variables
End of explanation
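An illustrative aside (a sketch, not in the original notebook): the undamped natural frequency and damping ratio of the series RLC branch, which help interpret the transients plotted below.
f0 = 1/(2*pi*sqrt(L*C))   # undamped natural frequency [Hz]
zeta = (R/2)*sqrt(C/L)    # damping ratio (also recomputed after the dc case below)
f0, zeta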
ua = [u for k in t]
#definition of the function dp/dt
def di(y,t):
#x = i, p = di/dt
x, p = y[0], y[1]
dx = p
dp = 1/L*(-R*p-(1/C)*x)
return [dx, dp]
#initial state
#initial capacitor voltage
uc0 = 0
y0 = [0.0, 1/L*(u-uc0)]
y0
I = odeint(di, y0, t)
ia = I[:,0]
# Capacitor voltage: duc/dt = i/C, so uc is obtained by integrating ia/C over time
duc = ia/C
uc = cumtrapz(duc, dx=t[1]-t[0], initial=0)
# with initial=0, cumtrapz already returns an array of the same length as t, so no padding is needed
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(8,8))
ax[0].plot(t,ia, label="Current")
ax[0].set_ylabel("Current [A]")
ax[0].set_xlabel("Time [s]")
ax[0].set_title("Current in R-L-C circuit during switch-on")
ax[0].legend()
ax[1].plot(t,ua, label="Supply voltage", color="green")
ax[1].plot(t,uc, label="Capacitor voltage", color="orange")
ax[1].set_ylabel("Voltage [V]")
ax[1].set_xlabel("Time [s]")
ax[1].set_title("Supply voltage")
ax[1].legend()
fig.tight_layout()
#checking damping factor: if below 1 - underdamped, if above 1 - overdamped
damp = (R/2)*sqrt(C/L)
damp
Explanation: RLC circuit fed with dc voltage
For the dc case the supply voltage is constant, so its derivative over time is equal to zero:
<img src="formula_4.png">
Consequently our equation becomes:
<img src="formula_5.png">
The supply voltage is given as:
End of explanation
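An optional cross-check (sketch): for the underdamped dc case, the analytically predicted ringing frequency, which should match the oscillation visible in the current plot above.
alpha_d = R/(2*L)                        # decay (neper) frequency [1/s]
fd = sqrt(1/(L*C) - alpha_d**2)/(2*pi)   # damped oscillation frequency [Hz]
fd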
ub = [sqrt(2)*u*sin(omega*k + alpha) for k in t]
# definition of the function dp/dt
def di(y,t):
#x = i, p = di/dt
x, p = y[0], y[1]
dx = p
dp = 1/L*(omega*sqrt(2)*u*cos(omega*t + alpha)-R*p-(1/C)*x)
return [dx, dp]
#initial state
#initial capacitor voltage
uc0 = 0
y0 = [0.0, 1/L*(ub[0]-uc0)]  # use the sinusoidal source value at t=0 (ub), not the dc vector ua
I = odeint(di, y0, t)
ib = I[:,0]
#Capacitor voltage derivative
duc2 = ib/C
uc2 = cumtrapz(duc2, dx=t[1]-t[0], initial=0)
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(8,8))
ax[0].plot(t,ib, label="Current")
ax[0].set_ylabel("Current [A]")
ax[0].set_xlabel("Time [s]")
ax[0].set_title("Current in R-L-C circuit during switch-on")
ax[0].legend()
ax[1].plot(t,ub, label="Line voltage", color="green")
ax[1].plot(t,uc2, label="Capacitor voltage", color="orange")
ax[1].set_ylabel("Voltage [V]")
ax[1].set_xlabel("Time [s]")
ax[1].set_title("Supply voltage")
ax[1].legend()
fig.tight_layout()
#checking the amplitude value in steady state
Im = sqrt(2)*u/(sqrt(R**2+(XL-XC)**2))
Im
Explanation: RLC Circuit with sinusoidal voltage
Now the voltage and its time derivative (no longer zero in this case) are given as:
<img src="formula_3.png">
End of explanation |
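A final sanity check (sketch): the simulated current should settle to an amplitude close to the steady-state value Im computed above.
np.abs(ib[len(ib)//2:]).max()   # peak current over the second half of the simulation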
10,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2-Room Spatial Navigation Analyses
This document contains a demonstration of how to analyse and visualize the 2-Room Spatial Navigation data.
Note that this contains parsing and analysis code, and it is a very flexible format. It is NOT recommended that you use the cogrecon.core.data_flexing_spatial_navigation package for the 2-Room version of the Spatial Navigation Task. There are several critical differences between the tasks which would result in incorrect analysis using that library. Instead, we'll just do it all in this notebook. One exception to this recommendation is when searching for files.
First, we'll list some key parameters of our task - namely, the location of the data, the names of the items which were used, and the minimum number of expected trials.
Step1: This next block of code finds and sorts the data files into "individuals".
Step2: Next, the summary files can be read individually, the results aggregated into the 'test_results' object, and the data saved to the iPosition data format.
Step3: Finally, we'll get the iPosition output for the converted files.
Step4: Visualizing a Participant
Next, we can visualize an individual participant.
Step5: Context Boundary Analysis
Next, we'll look at whether or not context boundary effects were present in the data.
Step6: The Context Boundary Effect (CBE) is calculated as the average normalized distance of the across-context pairs minus the average normalized distance of the within-context pairs in the specified triples.
Step7: This data can be quickly plotted to get means and standard error for each trial as well as for collapsed across trials.
Step8: We can save/label the data in a DataFrame and then save it to file.
Step9: Visualize a Path
Finally, we may want to visualize an exploration path during the task. This can be done using the spatial_navigation_2room visualizer. It can and will take a LONG TIME to run because of the amount of data involved. | Python Code:
data_path = r'Z:\Kelsey\2017 Summer RetLu\Virtual_Navigation_Task\v5_2\NavigationTask_Data\Logged_Data'
study_labels = ['PurseCube', 'CrownCube', 'BasketballCube', 'BootCube', 'CloverCube', 'GuitarCube', 'HammerCube', 'LemonCube', 'IceCubeCube', 'BottleCube']
locations = [[8, -8], [-2, -23], [8, -38], [-14, -13], [15, -18], [-14, 7], [-14, 27], [-6, 18], [8, 22], [11, 5]]
correct_locations = {l: p for l, p in zip(study_labels, locations)}
min_num_trials = 4
iposition_directory = './saved_data/2-room-iposition'
Explanation: 2-Room Spatial Navigation Analyses
This document contains a demonstration of how to analyse and visualize the 2-Room Spatial Navigation data.
Note that this contains parsing and analysis code, and it is a very flexible format. It is NOT recommended that you use the cogrecon.core.data_flexing_spatial_navigation package for the 2-Room version of the Spatial Navigation Task. There are several critical differences between the tasks which would result in incorrect analysis using that library. Instead, we'll just do it all in this notebook. One exception to this recommendation is when searching for files.
First, we'll list some key parameters of our task - namely, the location of the data, the names of the items which were used, and the minimum number of expected trials.
End of explanation
from cogrecon.core.data_flexing.spatial_navigation.spatial_navigation_parser import catalog_files
import os
files = []
for walk_root, walk_dirs, walk_files in os.walk(data_path):
for f in walk_files:
files.append(os.path.join(walk_root, f))
individuals, excluded, non_matching = catalog_files(files, min_num_trials)
print('{0} individuals found.'.format(len(individuals)))
Explanation: This next block of code finds and sorts the data files into "individuals".
End of explanation
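A quick data-quality check (a sketch; it assumes the excluded and non-matching values returned above are list-like collections, which the unpacking suggests).
print('{0} excluded and {1} non-matching file groups.'.format(len(excluded), len(non_matching)))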
import numpy as np
import logging
import os
if not os.path.exists(iposition_directory):
os.makedirs(iposition_directory)
count = 1
num_items = len(study_labels)
test_results = {}
for individual in individuals:
individual_result = []
logging.info("Parsing Individual %s (%d/%d)." % (individual.subject_id, count, len(individuals)))
count += 1
trial_count = 1
out_lines = []
for trial in individual.trials:
item_locations = {l: [0.0, 0.0] for l in study_labels}
items_found = {l: False for l in study_labels}
if trial.test_vr is None:
continue
with open(trial.test_vr, 'rb') as fp:
lines = fp.readlines()
lines.reverse()
for line in lines:
decoded_line = line.decode('ascii')
if decoded_line.startswith('Object_Identity_Set'):
split_line = decoded_line.split(',')
name, location_string = decoded_line[20:].strip().split(' : ')
x, y, z = [float(a) for a in location_string[1:-1].split(',')]
if not items_found[name]:
items_found[name] = True
item_locations[name] = [x, z]
item_location_list = list(np.array([item_locations[l] for l in study_labels]).flatten())
out_lines.append('\t'.join([str(a) for a in item_location_list]))
individual_result.append(item_locations)
with open(os.path.join(iposition_directory, '{0}position_data_coordinates.txt'.format(individual.subject_id)), 'w') as fp:
for line in out_lines:
fp.write(line + '\n')
test_results[individual.subject_id] = individual_result
# Save actual_coordinates.txt
item_location_list = list(np.array(locations).flatten())
out_line = '\t'.join([str(a) for a in item_location_list])
with open(os.path.join(iposition_directory, 'actual_coordinates.txt'), 'w') as fp:
for _ in range(0, min_num_trials):
fp.write(out_line + '\n')
Explanation: Next, the summary files can be read individually, the results aggregated into the 'test_results' object, and the data saved to the iPosition data format.
End of explanation
from cogrecon.core.batch_pipeline import batch_pipeline
import datetime
import logging
batch_pipeline(iposition_directory,
datetime.datetime.now().strftime("Holodeck 2-Room Spatial Navigation - %Y-%m-%d_%H-%M-%S.csv"),
trial_by_trial_accuracy=False, collapse_trials=False, actual_coordinate_prefixes=False)
Explanation: Finally, we'll get the iPosition output for the converted files.
End of explanation
from cogrecon.core.data_structures import ParticipantData, AnalysisConfiguration
from cogrecon.core.full_pipeline import full_pipeline
import os
subid = '135'
full_pipeline(ParticipantData.load_from_file(os.path.join(iposition_directory, 'actual_coordinates.txt'),
os.path.join(iposition_directory, '{0}position_data_coordinates.txt'.format(subid)),
None),
AnalysisConfiguration(trial_by_trial_accuracy=False),
visualize=True)
Explanation: Visualizing a Participant
Next, we can visualize an individual participant.
End of explanation
triples_labels = ["red->blue", "blue->red"]
across_triples = [('IceCubeCube', 'PurseCube'), ('BootCube', 'GuitarCube')]
within_triples = [('PurseCube', 'BasketballCube'), ('GuitarCube', 'HammerCube')]
Explanation: Context Boundary Analysis
Next, we'll look at whether or not context boundary effects were present in the data.
End of explanation
import scipy.spatial.distance as distance
cbe_results = {}
for sid in test_results:
subject_results = []
for trial in test_results[sid]:
dists = []
for triple in across_triples:
dist = distance.euclidean(trial[triple[0]], trial[triple[1]])
actual_dist = distance.euclidean(correct_locations[triple[0]], correct_locations[triple[1]])
dists.append(dist/actual_dist)
average_across = np.mean(dists)
dists = []
for triple in within_triples:
dist = distance.euclidean(trial[triple[0]], trial[triple[1]])
actual_dist = distance.euclidean(correct_locations[triple[0]], correct_locations[triple[1]])
dists.append(dist/actual_dist)
average_within = np.mean(dists)
cbe = average_across - average_within
subject_results.append(cbe)
cbe_results[sid] = subject_results
Explanation: The Context Boundary Effect (CBE) is calculated as the average normalized distance of the across-context pairs minus the average normalized distance of the within-context pairs in the specified triples.
End of explanation
import matplotlib.pyplot as plt
data = [cbe_results[k] for k in cbe_results]
means = np.mean(data, axis=0)
stds = np.std(data, axis=0)
stes = [s/np.sqrt(len(data)) for s in stds]
plt.figure()
plt.bar(range(0, len(means)), means, yerr=stes)
plt.figure()
plt.bar([0], np.mean(data), yerr=np.std(data)/np.sqrt(len(data)))
plt.show()
Explanation: This data can be quickly plotted to get means and standard errors for each trial, as well as collapsed across trials.
End of explanation
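An optional follow-up (a sketch, not part of the original analysis): a one-sample t-test of whether the per-participant CBE, collapsed across trials, differs from zero.
from scipy import stats
collapsed_cbe = np.mean(data, axis=1)    # one value per participant
print(stats.ttest_1samp(collapsed_cbe, 0.0))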
import pandas
df = pandas.DataFrame(cbe_results).transpose()
df.columns = ['Trial 1', 'Trial 2', 'Trial 3', 'Trial 4']
df
df.to_csv('context_boundary_effect.csv', index=False)
Explanation: We can save/label the data in a DataFrame and then save it to file.
End of explanation
from cogrecon.core.visualization.vis_spatial_navigation_2room import visualize
import os.path
from matplotlib import rc
rc('animation', html='html5')
%matplotlib inline
sub_directory = os.path.join(data_path, '2RoomTestAnonymous', '124')
raw_filepath = os.path.join(sub_directory, 'RawLog_Sub124_Trial1_13_15_57_30-05-2017.csv')
summary_filepath = os.path.join(sub_directory, 'SummaryLog_Sub124_Trial1_13_15_57_30-05-2017.csv')
%%capture
anim = visualize(raw_filepath, summary_filepath)
anim
Explanation: Visualize a Path
Finally, we may want to visualize an exploration path during the task. This can be done using the spatial_navigation_2room visualizer. It can and will take a LONG TIME to run because of the amount of data involved.
End of explanation |
10,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the nscore transformation table
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: The nscore transformation table function
Step3: Important.
You may run ns_ttable in order to obtain transin, transout
Get the transformation table
Step4: Get the normal score transformation
Note that the declustering is applied on the transformation tables
Step5: Normal score transformation using rank | Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
from matplotlib.patches import Ellipse
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
Explanation: Testing the nscore transformation table
End of explanation
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat')
# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code
# so, we are adding constant elevation = 0 and a dummy BHID = 1
mydata['Zlocation']=0
mydata['bhid']=1
# printing to verify results
print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5))
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
print (pygslib.gslib.__dist_transf.nscore.__doc__)
Explanation: The nscore transformation table function
End of explanation
transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])
print ('was there any error?: ', error!=0)
Explanation: Important.
You may run ns_ttable in order to obtain transin, transout
Get the transformation table
End of explanation
mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False)
mydata['NS_Primary'].hist(bins=30)
Explanation: Get the normal score transformation
Note that the declustering is applied on the transformation tables
End of explanation
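A quick check (illustrative): a normal score transform should produce values with mean close to 0 and standard deviation close to 1.
print(mydata['NS_Primary'].mean(), mydata['NS_Primary'].std())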
mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=True)
mydata['NS_Primary'].hist(bins=30)
Explanation: Normal score transformation using rank
End of explanation |
10,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structures like these are encoded in "PDB" files
How can we parse a complicated file like this one?
Step1: We can do better by manually parsing the file.
Our test file
Predict what this will print
Step2: Predict what this will print
Step3: Predict what this will print
Step4: Basic file reading operations
Step5: Predict what the following program will do
Step6: Splitting strings
SOME_STRING.split(CHAR_TO_SPLIT_ON) allows you to split strings into a list.
If CHAR_TO_SPLIT_ON is not defined, it will split on all whitespace (" ","\t","\n","\r")
"\t" is TAB, "\n" is NEWLINE, "\r" is CARRIAGE_RETURN.
Predict what the following will do
Step7: Predict what will happen
Step8: value is a string of "1.5". You can't do math on it yet.
The solution is to cast it into a float
Step9: Cast calls
Step10: Write a program that grabs the "1" from the first line in the file and multiplies it by 75.
What about writing to files?
Basic file writing operations
Step11: Predict what this code will do
Step12: Predict what this code will do
Step13: Predict what this code will do
Step14: format lets you make pretty strings | Python Code:
import pandas as pd
pd.read_table("data/1stn.pdb")
Explanation: Structures like these are encoded in "PDB" files
How can we parse a complicated file like this one?
End of explanation
f = open("test-file.txt")
print(f.readlines())
f.close()
Explanation: We can do better by manually parsing the file.
Our test file
Predict what this will print
End of explanation
f = open("test-file.txt")
for line in f.readlines():
print(line)
f.close()
Explanation: Predict what this will print
End of explanation
f = open("test-file.txt")
for line in f.readlines():
print(line,end="")
f.close()
Explanation: Predict what this will print
End of explanation
f = open("test-file.txt")
for line in f.readlines():
print(line.split())
f.close()
Explanation: Basic file reading operations:
Open a file for reading: f = open(SOME_FILE_NAME)
Read lines of file sequentially: f.readlines()
Read one line from the file: f.readline()
Read the whole file into a string: f.read()
Close the file: f.close()
Now what do we do with each line?
Predict what the following program will do
End of explanation
f = open("test-file.txt")
for line in f.readlines():
print(line.split("1"))
f.close()
Explanation: Predict what the following program will do
End of explanation
f = open("test-file.txt")
lines = f.readlines()
f.close()
line_of_interest = lines[-1]
value = line_of_interest.split()[0]
print(value)
Explanation: Splitting strings
SOME_STRING.split(CHAR_TO_SPLIT_ON) allows you to split strings into a list.
If CHAR_TO_SPLIT_ON is not defined, it will split on all whitespace (" ","\t","\n","\r")
"\t" is TAB, "\n" is NEWLINE, "\r" is CARRIAGE_RETURN.
Predict what the following will do
End of explanation
print(value*5)
Explanation: Predict what will happen:
End of explanation
value_as_float = float(value)
print(value_as_float*5)
Explanation: value is a string of "1.5". You can't do math on it yet.
The solution is to cast it into a float
End of explanation
list("1.5")
Explanation: Cast calls:
float, int, str, list, tuple
End of explanation
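A couple more illustrative casts (not in the original notes):
print(int("42") + 1)          # string -> int, so arithmetic works
print(str(3.14) + " is pi")   # float -> string, so concatenation works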
def file_printer(file_name):
f = open(file_name)
for line in f.readlines():
print(line,end="")
f.close()
Explanation: Write a program that grabs the "1" from the first line in the file and multiplies it by 75.
What about writing to files?
Basic file writing operations:
Open a file for writing: f = open(SOME_FILE_NAME,'w') will wipe out file immediately!
Open a file to append: f = open(SOME_FILE_NAME,'a')
Write a string to a file: f.write(SOME_STRING)
Write a list of strings: f.writelines([STRING1,STRING2,...])
Close the file: f.close()
End of explanation
a_list = ["a","b","c"]
f = open("another-file.txt","w")
for a in a_list:
f.write(a)
f.close()
file_printer("another-file.txt")
Explanation: Predict what this code will do
End of explanation
a_list = ["a","b","c"]
f = open("another-file.txt","w")
for a in a_list:
f.write(a)
f.write("\n")
f.close()
file_printer("another-file.txt")
Explanation: Predict what this code will do
End of explanation
a_list = ["a","b","ccat"]
f = open("another-file.txt","w")
for a in a_list:
f.write("A test {{}} {}\n".format(a))
f.close()
file_printer("another-file.txt")
Explanation: Predict what this code will do
End of explanation
print("The value is: {:}".format(10.35151))
print("The value is: {:.2f}".format(10.35151))
print("The value is: {:20.2f}".format(10.35151))
print("The value is: {:}".format(10))
print("The value is: {:20d}".format(10))
Explanation: format lets you make pretty strings
End of explanation |
10,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Short demo of using ipython_memory_usage to diagnose numpy and Pandas RAM usage
The author, Ian Ozsvald, uses this tool in his Higher Performance Python training (https://ianozsvald.com/training/)
Step1: Importing packages uses some RAM
Step2: Making a large array uses a predictable amount of RAM
Step3: Making a big random array takes RAM + time
Step4: Intermediate calculations can cost additional temporary RAM
NOTE: this section may work differently if you're on Windows (if so, please report back to Ian by raising a bug and noting the difference).
On some platforms, e.g. Linux as used here, temporary intermediates can be reused in-place reducing the overall memory allocation
Step5: Pandas DataFrames can be costly on RAM
Example with deleting columns
Props to Jamie Brunning for this example
Step6: Diagnostics
%xdel my_df will delete all references of my_df from the namespace, including those in the Out[] history buffer; this does more cleaning than just using del my_df.
%reset will reset all variables and imported modules; it is like starting a new kernel.
import ipython_memory_usage
help(ipython_memory_usage) # or ipython_memory_usage?
%ipython_memory_usage_start
Explanation: Short demo of using ipython_memory_usage to diagnose numpy and Pandas RAM usage
The author, Ian Ozsvald, uses this tool in his Higher Performance Python training (https://ianozsvald.com/training/) and it is mentioned in his High Performance Python (2nd ed., O'Reilly) book.
We can use it to understand how much RAM we're currently using and which of several alternative ways of solving a problem might be the most RAM-efficient.
total RAM usage is the current RAM usage at the end of that cell's execution
used shows the difference between the last total RAM usage and this one
peaked shows any during-execution peak above the resulting total RAM usage (i.e. hidden RAM usage that might catch you out)
End of explanation
import numpy as np # note that importing a package will increase total RAM usage a little
import pandas as pd # note that importing Pandas uses more RAM than importing numpy
import string
Explanation: Importing packages uses some RAM
End of explanation
# if we make a big array - 100M items * 8 byte floats, this cell
# uses circa 800MB (often reported as 760 MiB - note mebibytes, as used by the underlying memory_profiler tool)
# The total RAM usage grows by roughly this amount
arr = np.ones(100_000_000)
# deleting arr reduces RAM usage by roughly the expected amount and
# total RAM usage should drop back down
del arr
# if we make it again, RAM usage goes up again
arr = np.ones(100_000_000)
del arr
Explanation: Making a large array uses a predictable amount of RAM
End of explanation
# creating random items takes some time, after "used ... RAM" note "3s" or so for several seconds
arr = np.random.normal(size=100_000_000)
print(arr[:5], arr.dtype)
Explanation: Making a big random array takes RAM + time
End of explanation
pass
# arr*2 and arr*3 both have to be stored somewhere before the division can occur
# so two more circa 762MiB arrays are made temporarily, this is reported
# as "peaked 762MiB above current"
# before they can be discard. arr_result references the final result
# so overall we add 762MiB to the process
# we only add 762MiB, not 762MiB*2, as on Linux we can intelligently reuse
# one of the temporaries (else we'd peak at 762*2 MiB)
# we report "used 762...MiB" as the final arr_result adds this to the process
# so overall we're now _at_ 1.6GB but we actually peaked at 1.6+0.7 == 2.3GB
# whilst this cell executed
# if your code crashes with an out of memory exception, it could be caused
# by a situation like this
arr_result = (arr * 2) / (arr * 3)
del arr
del arr_result
Explanation: Intermediate calculations can cost additional temporary RAM
NOTE: this section may work differently if you're on Windows (if so, please report back to Ian by raising a bug and noting the difference).
On some platforms, e.g. Linux as used here, temporary intermediates can be reused in-place reducing the overall memory allocation: https://docs.scipy.org/doc/numpy-1.13.0/release.html#highlights
End of explanation
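A possible way to see the effect yourself (a sketch, not in the original demo): an explicit in-place divide needs one fewer large temporary than (arr * 2) / (arr * 3) on platforms without the temporary-elision optimisation mentioned above.
arr = np.ones(100_000_000)
result = arr * 2
result /= (arr * 3)   # divides in place, reusing `result` instead of allocating a third array
del arr, result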
pass
arr_several_cols = np.random.normal(size=(100_000_000, 4))
arr_several_cols.shape
f"Cost per column {int(arr_several_cols.data.nbytes / arr_several_cols.shape[1]):,} bytes"
# The DataFrame in this case is a thin wrapper over the numpy array
# and costs little extra RAM
df = pd.DataFrame(arr_several_cols, columns=list(string.ascii_lowercase)[:arr_several_cols.shape[1]])
df.info()
# use Jupyter's xdel to remove all references of our expensive array, just in case
# (but not in this case) it is also referred to in an Out[] history item
%xdel arr_several_cols
df.info()
# using del is surprisingly expensive
# total RAM usage goes up by circa 1.5GB-2GB (>2x the cost of 1 column)
# DOES ANYONE KNOW WHAT'S HAPPENING BEHIND THE SCENES HERE?
# THE NEXT 2 CELLS SHOW IT ISN'T BEING QUICKLY GARBAGE COLLECTED
# note also that using del seems to take more seconds than using df.drop (a few cells below)
# possibly internally there's now (somehow) a 4-column original array _and_ a
# 3 column resulting array (in the BlockManager?) costing 7-columns (i.e. circa 800MB*7 == circa 5.6GB)
del df['a']
# we get no benefit by forcing a collection
import gc
gc.collect()
df.info()
pass
# using drop with inplace=False (the default) returns a copied DataFrame, if you don't use
# this then maybe you end up with multiple DataFrames consuming RAM in a confusing fashion
# e.g. you might have done `df2 = df.drop...` and then you've got the unmodified original
# plus the modified df2 in the local namespace
# We see total RAM usage drop by circa 800MB, the cost of 1 column, plus a lot more...
# which is a mystery to me!
# maybe the usage of drop forces a flush on any internal caching in pandas?
df = df.drop(columns=['b'])
df.info()
# dropping in-place is probably more sensible, we recover another circa 800MB
df.drop(columns=['c'], inplace=True)
df.info()
pass
# now we get back to where we were before we made the DataFrame and the array
df.drop(columns=['d'], inplace=True)
Explanation: Pandas DataFrames can be costly on RAM
Example with deleting columns
Props to Jamie Brunning for this example
End of explanation
# %whos shows what's in the local namespace
%whos
# we can use %xdel to safely remove all references including those that might be (but not in this case)
# in the Out[] history buffer
%xdel df
%ipython_memory_usage_stop
Explanation: Diagnostics
%xdel my_df will delete all references of my_df from the namespace including those in the Out[] history buffer, this does more cleaning than just using del my_df.
%reset will reset all variables and imported modules, it is like starting a new kernel.
End of explanation |
10,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Cashflows Library
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Click here to access the latest online version
Click here to view the latest online version in nbviewer.
Cashflows
cashflows is a library for interactive time-value-of-money calculations in IPython. The implemented functions are similar to those used in Microsoft Excel, financial calculators and other similar software. cashflows supports the analysis of different types of investments
Step1: Types of interest rates
Nominal interest rate (nrate)
Step2: The iconv function
iconv(nrate=None, erate=None, prate=None, pyr=1)
where
Step3: The calculations can be performed with a single call to the function.
Step4: The pvfv function
pvfv(pval=None, fval=None, nrate=None, nper=None, pyr=1,
    noprint=True)
<img src="images/pvfv.png" width="300">
The pvfv function returns the missing value in the following equation
Step5: Example.-- What will be the future value of $ 100, $ 200, $ 300 and $ 400 in 5 years at an interest rate of 3% per year?
Step6: Example.-- What will be the future value of $ 100 in 1, 2, 3 and 4 years at an interest rate of 3% per year?
Step7: Exercise.-- A property is bought for $ 32000. If it depreciates at 2% per year, what will the property be worth at the end of 6 years?
The pvpmt function
pvpmt(pmt=None, pval=None, nrate=None, nper=None, pyr=1,
      noprint=True)
Computes the missing parameter in the following cash flow.
<img src="images/pvpmt.png" width="300">
Parameter nomenclature
Step8: Example.-- [2, p. 59] Compute the monthly payment of a $ 243400 mortgage paid over 348 months at a nominal rate of 5.25%.
Step9: Example.-- [3, p. 81] The purchase of a new car is being financed with a three-year lease at a nominal rate of 10.5%. The price of the car is $ 7250. An initial payment of $ 1500 must be made. What is the monthly payment if payments are made at the end of the month?
Step10: Example.-- For the previous example, the monthly payment is to be reduced by $ 10; what interest rate would have to be obtained?
Step11: Exercise.-- A $ 35000 loan will be taken out to buy a gas-fired generation turbine. If the nominal rate is 10.5% with monthly payments of $ 550 at the end of each month, how long does it take to pay off the debt?
The pmtfv function
pmtfv(pmt=None, fval=None, nrate=None, nper=None, pyr=1,
   noprint=True)
Computes the missing parameter for the following cash flow.
<img src="images/pmtfv.png" width="300">
Parameter nomenclature
Step12: The tvmm function
tvmm(pval=None, fval=None, pmt=None, nrate=None, nper=None,
     due=0, pyr=1, noprint=True)
This function computes the missing parameter in the cash flow specified by the due parameter.
<img src="images/tvmm.png" width="600">
Parameter nomenclature
Step13: Example.-- [2, p. 58] How much can be paid for a property that will generate a net annual cash flow of $ 17500 for 5 years, if at the end the property can be sold for $ 540,000? (the nominal interest rate is 12%)
Step14: Example.-- What is the periodic payment (amortization) for the following loans (fval is the residual final payment)?
term    5,     5,     6,     7
pval  100,   110,   110,   105
fval  -20,   -10,   -20,     0
rate  0.020, 0.017, 0.016, 0.017
Step15: Exercise.-- An account is opened today with a deposit of $ 775. The nominal rate is 6.25% with monthly compounding. If the goal is to have $ 4000 in 60 months, how much must be deposited monthly (at the end of each month)?
The amortize function
amortize(pval=None, fval=None, pmt=None, nrate=None,
         nper=None, due=0, pyr=1)
Prints the amortization table. The function call returns a pandas.DataFrame. This function uses the same parameters as the tvmm function.
Example.-- Build the amortization table (balance) for a $ 1000 loan over 6 months with equal monthly payments at an interest rate of 1% per month.
import cashflows as cf
Explanation: The Cashflows Library
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Click here to access the latest online version
Click here to view the latest online version in nbviewer.
Cashflows
cashflows is a library for interactive time-value-of-money calculations in IPython. The implemented functions are similar to those used in Microsoft Excel, financial calculators and other similar software. cashflows supports the analysis of different types of investments, including:
Loans
Savings
Depreciation
Bonds
Generic cash flows
This kind of analysis helps answer questions such as:
What are the financial indicators of a generation project?
What is the best credit alternative for financing the purchase of a piece of equipment?
What is the minimum incentive that must be given to a new technology to encourage its adoption?
What is the risk incurred in an investment (transmission, generation, distribution, etc.)?
Installing the library
The library can be installed using pip:
pip install cashflows
Help
The library's functions are documented and their help can be obtained through the help function. For example, to get help on the cashflow function, type the following command at the interactive prompt:
>>> help(cashflow)
which will open the help for that function.
Documentation
The library's documentation is available at:
http://cashflows.readthedocs.io/en/latest/
Loading the library
To load the library, use:
import cashflows as cf
Development language
The library is developed in Python 3.6. There is no compatibility with Python 2.x.
End of explanation
pow(1+0.0672/2, 2) - 1   ## effective annual rate for Bank #1
pow(1+0.0670/4, 4) - 1   ## effective annual rate for Bank #2
## the most favorable rate (Bank #2 has the highest effective rate)
pow(1+0.0665/12, 12) - 1 ## effective annual rate for Bank #3
Explanation: Types of interest rates
Nominal interest rate (nrate): expressed on an annual basis for a number pyr of payment periods in the year.
Effective rate per payment period (periodic rate) (prate): the real interest for each payment period in the year.
Effective annual rate (erate): the real interest for a single one-year payment period.
$$ prate= \frac{nrate}{pyr}, \qquad erate = \left( \displaystyle 1 + prate\right)^{pyr} - 1 = \left( \displaystyle 1 + \frac{nrate}{pyr}\right)^{pyr} - 1 $$
Example.-- You are considering opening a savings account in one of three banks. Which bank has the most favorable interest rate?
Bank #1: 6.72% per year, compounded semiannually.
Bank #2: 6.70% per year, compounded quarterly.
Bank #3: 6.65% per year, compounded monthly.
Manual solution
End of explanation
cf.iconv(nrate = 6.72, pyr = 2)  ## Bank #1: 6.72% compounded semiannually
cf.iconv(nrate = 6.70, pyr = 4)  ## Bank #2: 6.70% compounded quarterly
cf.iconv(nrate = 6.65, pyr = 12) ## Bank #3: 6.65% compounded monthly
Explanation: The iconv function
iconv(nrate=None, erate=None, prate=None, pyr=1)
where:
nrate -- nominal rate.
prate -- periodic rate, i.e. the effective rate per compounding period.
erate -- effective annual rate.
pyr -- number of compounding periods per year.
The function receives one of the interest rates and returns the other two, as follows:
Specifying nrate returns (erate, prate).
Specifying erate returns (nrate, prate).
Specifying prate returns (nrate, erate).
The calculations are performed using the following equations:
$$ prate= \frac{nrate}{pyr}, \qquad erate = \left( \displaystyle 1 + prate\right)^{pyr} - 1 = \left( \displaystyle 1 + \frac{nrate}{pyr}\right)^{pyr} - 1 $$
The calculations using iconv are performed as follows:
End of explanation
## Alternatively, the three banks can be handled in a single call
cf.iconv(nrate = [6.72, 6.70, 6.65], pyr = [2, 4, 12])
Explanation: The calculations can be performed with a single call to the function.
End of explanation
cf.pvfv(nrate = 7.2,    # interest rate
        pval = -2000,   # present value
        fval = +3000)   # future value
# Since nper falls between 5 and 6, six years are required
# to reach a balance of at least $ 3000.
# The balance at the end of the six years is (Answer: 3035.28):
cf.pvfv(nrate = 7.2,    # interest rate
        pval = -2000,   # present value
        nper = 6)       # number of periods
Explanation: The pvfv function
pvfv(pval=None, fval=None, nrate=None, nper=None, pyr=1,
    noprint=True)
<img src="images/pvfv.png" width="300">
The pvfv function returns the missing value in the following equation:
$$fval = - pval * \left(1 + \frac{nrate}{pyr}\right) ^ {nper}$$
where:
* pval -- present value.
* fval -- future value.
* nper -- number of periods.
* nrate -- nominal interest rate.
* pyr -- number of compounding periods per year.
Example.-- [3, p. 88] $ 2000 is deposited in a savings account that pays 7.2% annual interest (compounded annually). If no other deposits are made into the account, how long does it take for the account to reach $ 3000? Answer: 5.83
<img src="images/sesion-2-ejemplo-1.png" width="350">
End of explanation
# one of the parameters can be a vector
cf.pvfv(pval = [100, 200, 300, 400],
        nper = 5,
        nrate = 3.0)
Explanation: Example.-- What will be the future value of $ 100, $ 200, $ 300 and $ 400 in 5 years at an interest rate of 3% per year?
End of explanation
cf.pvfv(pval = 100,
nper = [1, 2, 3, 4],
nrate = 3.0)
Explanation: Example.-- What will be the future value of $ 100 in 1, 2, 3 and 4 years at an interest rate of 3% per year?
End of explanation
cf.pvpmt(pmt = -450, # monthly payment
         nrate = 5.9, # interest rate
         nper = 48,  # number of periods
         pyr = 12)   # compounding periods per year
_ + 1500
Explanation: Exercise.-- A property is bought for $ 32000. If it depreciates at 2% per year, what will the property be worth at the end of 6 years?
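One quick way to sanity-check this exercise with plain Python (a sketch, not part of the original notebook):
32000 * (1 - 0.02)**6   # straight compounding of the -2% rate; roughly 28,347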
The pvpmt function
pvpmt(pmt=None, pval=None, nrate=None, nper=None, pyr=1,
      noprint=True)
Computes the missing parameter in the following cash flow.
<img src="images/pvpmt.png" width="300">
Parameter nomenclature:
* pval -- present value.
* pmt -- periodic payment or annuity.
* nper -- number of periods.
* nrate -- nominal interest rate per year.
* pyr -- number of periods per year.
Example.-- [2, p. 57] A lease is to be taken at a nominal rate of 5.9%, with 48 monthly payments of $ 450 and an initial payment of $ 1500 when the loan is set up. What is the amount of the loan?
End of explanation
cf.pvpmt(pval = 243400,  # loan amount
         nrate = 5.25,   # interest rate
         nper = 348,     # number of periods
         pyr = 12)       # compounding periods per year
Explanation: Example.-- [2, p. 59] Compute the monthly payment of a $ 243400 mortgage paid over 348 months at a nominal rate of 5.25%.
End of explanation
cf.pvpmt(pval = 5750, # = 7250 - 1500
nrate = 10.5,
nper = 36,
pyr = 12)
Explanation: Example.-- [3, p. 81] The purchase of a new car is being financed with a three-year lease at a nominal rate of 10.5%. The price of the car is $ 7250. An initial payment of $ 1500 must be made. What is the monthly payment if payments are made at the end of the month?
End of explanation
cf.pvpmt(pval = 5750,
pmt = -176.89,
nper = 36,
pyr = 12)
Explanation: Example.-- For the previous example, the monthly payment is to be reduced by $ 10; what interest rate would have to be obtained?
End of explanation
cf.pmtfv(pmt=-1000, nrate=12, nper=12, pyr=12)
Explanation: Exercise.-- A $ 35000 loan will be taken out to buy a gas-fired generation turbine. If the nominal rate is 10.5% with monthly payments of $ 550 at the end of each month, how long does it take to pay off the debt?
The pmtfv function
pmtfv(pmt=None, fval=None, nrate=None, nper=None, pyr=1,
   noprint=True)
Computes the missing parameter for the following cash flow.
<img src="images/pmtfv.png" width="300">
Parameter nomenclature:
pmt -- periodic payment.
fval -- future value.
nper -- number of periods.
nrate -- nominal rate.
pyr -- number of compounding periods per year.
Example.-- If $ 1000 is saved at the beginning of each month, at a nominal rate of 12% with monthly compounding, how much will have been saved by the end of month 12?
End of explanation
cf.tvmm(pval = -6000,   # initial deposit
        nper = 32,      # number of periods
        pmt = 0,        # periodic payment
        fval = 10000,   # final balance
        pyr = 12)       # monthly compounding
Explanation: The tvmm function
tvmm(pval=None, fval=None, pmt=None, nrate=None, nper=None,
     due=0, pyr=1, noprint=True)
This function computes the missing parameter in the cash flow specified by the due parameter.
<img src="images/tvmm.png" width="600">
Parameter nomenclature:
pval -- present value.
fval -- future value.
pmt -- periodic payment.
nper -- number of periods.
nrate -- interest rate per period.
pyr -- number of periods per year.
due -- point in the period at which the annuity is paid: 'end' (or 0) indicates payment at the end of the period; 'begin' (or 1) indicates payment at the beginning of the period.
Example.-- [2, p. 55] What interest rate must be obtained to accumulate $ 10000 in 32 months with an investment of $ 6000? Answer: 1.61%
End of explanation
cf.tvmm(pmt = 17500,   # annual periodic payment
        fval = 540000,  # sale value
        nrate = 12.0,   # interest rate
        nper = 5)       # number of periods
Explanation: Example.-- [2, p. 58] How much can be paid for a property that will generate a net annual cash flow of $ 17500 for 5 years, if at the end the property can be sold for $ 540,000? (the nominal interest rate is 12%)
End of explanation
cf.tvmm(pval = [ 100, 110, 110, 105 ],
fval = [ -20, -10, -20, 0 ],
nper = [ 5, 5, 6, 7 ],
nrate = [ 2.0, 1.7, 1.6, 1.7 ])
Explanation: Example.-- What is the periodic payment (amortization) for the following loans (fval is the residual final payment)?
term    5,     5,     6,     7
pval  100,   110,   110,   105
fval  -20,   -10,   -20,     0
rate  0.020, 0.017, 0.016, 0.017
End of explanation
cf.amortize(pval=1000, fval=0, pmt=None, nrate=1.0,
nper=6, due=0)
table = cf.amortize(pval=1000, fval=0, pmt=None, nrate=1.0,
nper=6, due=0)
table['Principal']
sum(table['Principal'])
table['Interest']
table['Interest'].tolist()
sum(table['Interest'])
table['Payment']
sum(table['Payment'])
Explanation: Exercise.-- An account is opened today with a deposit of $ 775. The nominal rate is 6.25% with monthly compounding. If the goal is to have $ 4000 in 60 months, how much must be deposited monthly (at the end of each month)?
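One way to set this exercise up with tvmm (a sketch; the sign convention follows the deposit example above, so verify it against your own run):
cf.tvmm(pval = -775,   # opening deposit
        fval = 4000,   # target balance
        nrate = 6.25,  # nominal annual rate
        nper = 60,     # months
        pyr = 12)      # monthly compounding; solves for the monthly pmt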
The amortize function
amortize(pval=None, fval=None, pmt=None, nrate=None,
         nper=None, due=0, pyr=1)
Prints the amortization table. The function call returns a pandas.DataFrame. This function uses the same parameters as the tvmm function.
Example.-- Build the amortization table (balance) for a $ 1000 loan over 6 months with equal monthly payments at an interest rate of 1% per month.
End of explanation |
10,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of Convolution Theorem
Illustrate the discrete convolution theorem.
F indicates Fourier transform operator and F{f} and F{g} are the fourier transform of "f" and "g" so we have
Step1: Numeric sample
Step2: See that f and h are periodic images and the period is (H,W) that is the shape of f.
At the following code, the F and H need to be the same shape
Step3: gg and g need to be equal
Step4: Using an image to illustrate the discrete convolution theorem
See the original image (keyb,tif) and h
Step5: Convolution in frequency domain
Step6: Convolution in space domain
Step7: The convolution in frequency domain and space domain need to be equals | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
from numpy.fft import fft2
from numpy.fft import ifft2
Explanation: Demonstration of Convolution Theorem
Illustrate the discrete convolution theorem.
F indicates Fourier transform operator and F{f} and F{g} are the fourier transform of "f" and "g" so we have:
$$ F\left { f * g \right } = F\left { f \right } \cdot F\left { g \right } $$
$$ F(f\cdot g) = F\left { f \right } * F\left { g \right } $$
Importing
End of explanation
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
Explanation: Numeric sample
End of explanation
#Deixar o h (3,3) com o mesmo shape de f (4,6)
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
G = F * H
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
Explanation: See that f and h are periodic images and the period is (H,W) that is the shape of f.
At the following code, the F and H need to be the same shape
End of explanation
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
Explanation: gg and g need to be equal:
End of explanation
f = mpimg.imread('/home/lotufo/ia898/data/keyb.tif')
plt.imshow(f,cmap='gray');
plt.title('Original')
plt.colorbar()
plt.show()
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
Explanation: Using an image to illustrate the discrete convolution theorem
See the original image (keyb,tif) and h
End of explanation
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
x,y = f.shape
plt.figure(1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of f')
plt.colorbar()
plt.figure(2)
plt.imshow(np.log(np.abs(ia.ptrans(H,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of h')
plt.colorbar()
G = F * H
plt.figure(3)
plt.imshow(np.log(np.abs(ia.ptrans(G,(x//2,y//2))+1)),cmap='gray')
plt.title('F * H')
plt.colorbar()
gg = ifft2(G)
plt.figure(4)
plt.imshow(gg.real.astype(np.float),cmap='gray');
plt.title('Convolution in frequency domain')
plt.colorbar()
plt.show()
Explanation: Convolution in frequency domain:
End of explanation
g = ia.pconv(f,h)
plt.imshow(g.real.astype(np.float),cmap='gray');
plt.title('Convolution in space domain')
plt.colorbar()
plt.show()
Explanation: Convolution in space domain
End of explanation
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
Explanation: The convolution in frequency domain and space domain need to be equals
End of explanation |
10,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 5
Step1: This is a list containing the ages of some group of students, and we want to compute the average. How do we compute averages?
We know an average is some total quantity divided by number of elements. Well, the latter is easy enough to compute
Step2: The total quantity is a bit trickier. You could certainly sum them all manually--
Step3: ...but that seems really, really tedious. Plus, how do you even know how many elements your list has?
Loop structure
The structure itself is pretty simple
Step4: There are two main parts to the loop
Step5: You can loop through sets and tuples the same way.
Step6: Iterators
The unifying theme with all these collections you can loop through is that they're all examples of iterators.
Easily the most common iterator you'll use (aside from lists, sets, and tuples) is the range function
Step7: Note, again, that the range of numbers goes from 0 (inclusive) to the specified end (exclusive)! The critical point is that the argument to range specifies the length of the returned iterator.
A few more examples of range before we get back to loops
Step8: IMPORTANT
Step9: With loops, whitespace in Python really starts to matter. If you want many things to happen inside of a loop, you'll need to indent every line!
Let's say in some future homework assignment, I ask you to write a loop computing the squares of the numbers 1-10. How would you do it?
Well, you could manually write it out, I suppose...
Step10: ...but that's awfully boring.
Instead, let's use the range function we were just discussing
Step11: Looping through dictionaries
This gets its own subsection because it pulls together pretty much all the concepts we've discussed so far
Step12: Remember the super-useful methods for iterating through dictionaries? keys gives you a list of all the keys, values a list of all the values, and items a list of tuples of the key-value pairs. Here's the loop
Step13: 1
Step14: instead of this
Step15: In the same vein, I could have just as easily written the loop like this
Step16: and indeed, if that is easier for you to understand, by all means do it! This is to illustrate all the concepts at play at once
Step17: x < 15 is a boolean statement | Python Code:
ages = [21, 22, 19, 19, 22, 21, 22, 31]
Explanation: Lecture 5: Loops
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
In this lecture, we'll go over the basics of looping in Python. By the end of this lecture, you should be able to
Perform basic arithmetic operations using arbitrary-length collections
Use "unpacking" as a shortcut for iterating through dictionaries
Describe the differences between the separate kinds of loops
Part 1: for Loops
Looping, like lists, is a critical component in programming and data science. When we're training models on data, we'll need to loop over each data point, examining it in turn and adjusting our model accordingly regardless of how many data points there are. This kind of repetitive task is ideal for looping.
Let's define for ourselves the following list:
End of explanation
number_of_elements = len(ages)
print(number_of_elements)
Explanation: This is a list containing the ages of some group of students, and we want to compute the average. How do we compute averages?
We know an average is some total quantity divided by number of elements. Well, the latter is easy enough to compute:
End of explanation
age_sum = ages[0] + ages[1] + ages[2] # + ... and so on
Explanation: The total quantity is a bit trickier. You could certainly sum them all manually--
End of explanation
for N in [2, 5, 7, 9]: # Header
print(N) # Body
Explanation: ...but that seems really, really tedious. Plus, how do you even know how many elements your list has?
Loop structure
The structure itself is pretty simple:
some collection of "things" to iterate over
a placeholder for the current "thing"
a chunk of code describing what to do with the current "thing"
Let's start simple: looping through a list, printing out each item one at a time.
End of explanation
age_sum = 0
ages = [21, 22, 19, 19, 22, 21, 22, 31]
for age in ages:
age_sum += age
avg = age_sum / number_of_elements # Compute the average using the formula we know and love!
print("Average age: {:.2f}".format(avg))
Explanation: There are two main parts to the loop: the header and the body.
The header contains 1) the collection we're iterating over (in this example, the list), and 2) the "placeholder" we're using to hold the current value (in this example, N).
The body is the chunk of code under the header (indented!) that executes on each iteration.
Back, then, to computing an average:
End of explanation
s = set([1, 1, 2, 3, 5])
for item in s:
print(item)
t = tuple([1, 1, 2, 3, 5])
for item in t:
print(item)
Explanation: You can loop through sets and tuples the same way.
End of explanation
for i in range(10):
print(i, end = " ") # Prints everything on 1 line.
Explanation: Iterators
The unifying theme with all these collections you can loop through is that they're all examples of iterators.
Easily the most common iterator you'll use (aside from lists, sets, and tuples) is the range function:
End of explanation
for i in range(5): # One argument: specifies the "end"
print(i, end = " ")
for i in range(5, 10): # Two arguments: first is "start" (inclusive), second is "end" (exclusive)
print(i, end = " ")
for i in range(0, 10, 2): # Three arguments: start, end, and increment
print(i, end = " ")
Explanation: Note, again, that the range of numbers goes from 0 (inclusive) to the specified end (exclusive)! The critical point is that the argument to range specifies the length of the returned iterator.
A few more examples of range before we get back to loops:
End of explanation
some_list = [3.14159, "random stuff", 4200]
for item in some_list:
print(item)
Explanation: IMPORTANT: INDENTATION MATTERS
You'll notice in these loops that the loop body is distinctly indented relative to the loop header. This is intentional and is indeed how it works! If you fail to indent the body of the loop, Python will complain:
End of explanation
squares = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
Explanation: With loops, whitespace in Python really starts to matter. If you want many things to happen inside of a loop, you'll need to indent every line!
Let's say in some future homework assignment, I ask you to write a loop computing the squares of the numbers 1-10. How would you do it?
Well, you could manually write it out, I suppose...
End of explanation
squares = [] # Empty list for all our squares
for num in range(10):
squared_number = num ** 2 # Exponent operation!
squares.append(squared_number) # Add to our list.
print(squares)
Explanation: ...but that's awfully boring.
Instead, let's use the range function we were just discussing:
End of explanation
favorite_languages = {
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'shannon': 'python'
}
# Notice the indentation, if you decide to define a dictionary this way!
Explanation: Looping through dictionaries
This gets its own subsection because it pulls together pretty much all the concepts we've discussed so far: lists, tuples, dictionaries, and looping.
Let's start by defining a dictionary. In this case, we'll set up a dictionary that maps people to their favorite programming language.
End of explanation
for key, value in favorite_languages.items(): # 1
print("{} prefers {}.".format(key, value))
Explanation: Remember the super-useful methods for iterating through dictionaries? keys gives you a list of all the keys, values a list of all the values, and items a list of tuples of the key-value pairs. Here's the loop:
End of explanation
some_list = ['a', 'b']
a, b = some_list
Explanation: 1: Notice how key, value are just out there floating! This is called unpacking and is a very useful technique in Python. If I have a list of a few items, and (critically) I know how many items there are, I can do this
End of explanation
some_list = ['a', 'b']
a = some_list[0]
b = some_list[1]
Explanation: instead of this
End of explanation
for keyvalue in favorite_languages.items(): # 1
key = keyvalue[0]
value = keyvalue[1]
print("{} prefers {}.".format(key, value)) # 2
Explanation: In the same vein, I could have just as easily written the loop like this:
End of explanation
x = 10
while x < 15:
print(x, end = " ")
x += 1
Explanation: and indeed, if that is easier for you to understand, by all means do it! This is to illustrate all the concepts at play at once:
the loop header iterates through a list provided by favorite_languages.items()
each iteration, items() provides a tuple: a key-value pair from the dictionary
we can "unpack" these variables using shorthand, but it's also perfectly valid to do it the "regular" way
That's pretty much for loops!
What about the case where you don't know ahead of time how many iterations your loop will take?
Part 2: while Loops
"While" loops go back yet again to the concept of boolean logic we introduced in an earlier lecture: loop until some condition is reached.
The structure here is a little different than for loops. Instead of explicitly looping over an iterator, you'll set some condition that evaluates to either True or False; as long as the condition is True, Python executes another loop.
End of explanation
for i in range(10, 15):
print(i, end = " ")
# No update needed!
Explanation: x < 15 is a boolean statement: it is either True or False, depending on the value of x. Initially, this number is 10, which is certainly < 15, so the loop executes. 10 is printed, x is incremented, and the condition is checked again.
A potential downside of while loops: forgetting to update the condition inside the loop.
It's easy to take for granted; for loops implicitly handle this for us!
End of explanation |
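To make the pitfall concrete, here is a small illustrative sketch (not part of the original lecture code): the first loop updates its condition variable and stops, while forgetting the x += 1 line would make it run forever. The iteration cap in the second loop is just a defensive pattern for debugging, not something the lecture requires.

x = 10
while x < 15:
    print(x, end = " ")
    x += 1                      # forget this update and the loop never terminates

# defensive variant: cap the number of iterations while you are still debugging
x = 10
iterations = 0
while x < 15 and iterations < 1000:
    x += 1
    iterations += 1
print("\nloop finished after", iterations, "iterations")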
10,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 2. Coordinate transformation methods
Step2: 3. COMPUTE AKI & RICHARDS SOLUTION
Step3: 4. Plot displacement components | Python Code:
# Please run it before you start the simulation!
import matplotlib.pyplot as plt
from scipy.special import erf
from scipy.integrate import quad
from numpy import sin, cos, arccos, arctan, pi, sign, sqrt
from numpy import vectorize, linspace, asarray, outer, diff, savetxt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Seismic Wavefield of a Double-Couple Point Source</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
The fundamental analytical solution to the problem of a double-couple point source in infinite homogeneous media (Aki and Richards, 2002) is implemented in this IPython notebook. This solution for seismic waves in an infinite homogeneous medium provides fundamental information that is used as a benchmark to understand kinematic properties of seismic sources, quasi-analytical solutions to wave propagation problems, and the influence of earthquakes on crustal deformation.
Simulations of 3D elastic wave propagation need to be validated by the use of analytical solutions. In order to evaluate how healthy a numerical solution is, one may recreate conditions for which analytical solutions exist, with the aim of reproducing and comparing the different results. In this sense, the fundamental solution for the double-couple point source offers a way to achieve this quality control.
We wish to find the displacement wavefield $\mathbf{u}(\mathbf{x},t)$ at some distance $\mathbf{x}$ from a seismic moment tensor source with $M_{xz} = M_{zx} = M_0$. According to Aki and Richards (2002), the displacement $\mathbf{u}(\mathbf{x},t)$ due to a double-couple point source in an infinite, homogeneous, isotropic medium is
\begin{align}
\mathbf{u}(\mathbf{x},t) &= \dfrac{1}{4\pi\rho} \mathbf{A}^N \dfrac{1}{r^4} \int_{{r}/{\alpha}}^{{r}/{\beta}} \tau M_o(t-\tau)d\tau +\
&+\dfrac{1}{4\pi\rho\alpha^2}\mathbf{A}^{IP}\dfrac{1}{r^2} M_o(t-{r}/{\alpha}) +\dfrac{1}{4\pi\rho\beta^2}\mathbf{A}^{IS}\dfrac{1}{r^2} M_o(t-{r}/{\beta})+\
&+\dfrac{1}{4\pi\rho\alpha^3}\mathbf{A}^{FP}\dfrac{1}{r} \dot M_o(t-{r}/{\alpha}) +\dfrac{1}{4\pi\rho\beta^3}\mathbf{A}^{FS}\dfrac{1}{r} \dot M_o(t-{r}/{\beta})
\end{align}
where the radiation patterns $\mathbf{A}^N$ (near-field), $\mathbf{A}^{IP}$ (intermediate-field P wave), $\mathbf{A}^{IS}$ (intermediate-field S wave), $\mathbf{A}^{FP}$ (far-field P wave) and $\mathbf{A}^{FS}$ (far-field S wave) are:
\begin{align}
\mathbf{A}^N &= 9\sin(2\theta)\cos(\phi)\hat{\mathbf{r}} - 6\left(\cos(2\theta)\cos(\phi)\hat{\mathbf{\theta}} - \cos(\theta)\sin(\phi)\hat{\mathbf{\phi}}\right)\
\mathbf{A}^{IP} &= 4\sin(2\theta)\cos(\phi)\hat{\mathbf{r}} - 2\left(\cos(2\theta)\cos(\phi)\hat{\mathbf{\theta}} - \cos(\theta)\sin(\phi)\hat{\mathbf{\phi}}\right)\
\mathbf{A}^{IS} &= -3\sin(2\theta)\cos(\phi)\hat{\mathbf{r}} + 3\left(\cos(2\theta)\cos(\phi)\hat{\mathbf{\theta}} - \cos(\theta)\sin(\phi)\hat{\mathbf{\phi}}\right)\
\mathbf{A}^{FP} &= \sin(2\theta)\cos(\phi)\hat{\mathbf{r}}\
\mathbf{A}^{FS} &= \cos(2\theta)\cos(\phi)\hat{\mathbf{\theta}} - \cos(\theta)\sin(\phi)\hat{\mathbf{\phi}}
\end{align}
The parameters one has to consider include: the receiver coordinates $\mathbf{x}$, the density of the medium $\rho$, the S-wave velocity $\beta$, the P-wave velocity $\alpha$, and the desired time-dependent seismic moment function $M_o(t)$. The integration limits, in turn, are determined by the propagation time from source to receiver for P-waves and S-waves, i.e. ${r}/{\alpha}$ and ${r}/{\beta}$ respectively.
This is a solution in spherical coordinates. Since we normally measure displacements in Cartesian coordinates, it is necessary to implement a change of coordinates if we want to visualize the solution in Cartesian coordinates.
End of explanation
def sph2cart(r, th, phi):
'''
Transform spherical coordinates to cartesian
'''
x = r * sin(th) * cos(phi)
y = r * sin(th) * sin(phi)
z = r * cos(th)
return x, y, z
def cart2sph(x, y, z):
'''
Transform cartesian coordinates to spherical
'''
r = sqrt(x**2 + y**2 + z**2)
th = arccos(z/r)
    phi = arctan(y/x)  # assumes x > 0; numpy.arctan2(y, x) would handle all quadrants
return r, th, phi
Explanation: 2. Coordinate transformation methods
End of explanation
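A quick round-trip check of the two helpers above can catch sign or quadrant mistakes early. Note that cart2sph uses arctan(y/x), so this sketch assumes a receiver with positive x, as used later in the notebook; the test coordinates below are arbitrary.

import numpy as np

x0, y0, z0 = 4000.0, 4000.0, 4000.0
r0, th0, phi0 = cart2sph(x0, y0, z0)
print(np.allclose(sph2cart(r0, th0, phi0), (x0, y0, z0)))   # expected: True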
#%% Initialization of setup
# -----------------------------------------------------------------------------
x = 4000 # x receiver coordinate
y = 4000 # y receiver coodinate
z = 4000 # z receiver coodinate
rho = 2500 # Density kg/m^3
beta = 3000 # S-wave velocity
alpha = sqrt(3)*beta # p-wave velocity
stf = 'gauss' # Set the desired source time function 'heaviside' , 'gauss'
Trise = 0.25 # Rise time used in the source time function
Mo = 4*10E16 # Scalar Moment
r, th, phi = cart2sph(x, y, z) # spherical receiver coordinates
tmin = r/alpha - 2*Trise # Minimum observation time
tmax = r/beta + Trise + 2*Trise # Maximum observation time
# SOURCE TIME FUNCTION
# -----------------------------------------------------------------------------
if stf == 'heaviside':
M0 = lambda t: 0.5*Mo*0.5*(sign(t) + 1)
if stf == 'gauss':
M0 = lambda t: Mo*(1 + erf(t/Trise))
#******************************************************************************
# COMPUTE AKI & RICHARDS SOLUTION
#******************************************************************************
# Scalar factors in the AKI & RICHARDS solution
# -----------------------------------------------------------------------------
CN = (1/(4 * pi * rho))
CIP = (1/(4 * pi * rho * alpha**2))
CIS = (1/(4 * pi * rho * beta**2))
CFP = (1/(4 * pi * rho * alpha**3))
CFS = (1/(4 * pi * rho * beta**3))
# Radiation patterns: near (AN), intermediate (AIP, AIS), and far (AFP, AFS) fields
# -----------------------------------------------------------------------------
def AN(th, phi):
AN = [[9*sin(2*th)*cos(phi), -6*cos(2*th)*cos(phi), 6*cos(th)*sin(phi)]]
return asarray(AN)
def AIP(th, phi):
AIP = [[4*sin(2*th)*cos(phi), -2*cos(2*th)*cos(phi), 2*cos(th)*sin(phi)]]
return asarray(AIP)
def AIS(th, phi):
AIS = [-3*sin(2*th)*cos(phi), 3*cos(2*th)*cos(phi), -3*cos(th)*sin(phi)]
return asarray(AIS)
def AFP(th, phi):
AFP = [sin(2*th)*cos(phi), 0, 0 ]
return asarray(AFP)
def AFS(th, phi):
AFS = [0, cos(2*th)*cos(phi), -cos(th)*sin(phi)]
return asarray(AFS)
# Calculate integral in the right hand side of AKI & RICHARDS solution
# -----------------------------------------------------------------------------
integrand = lambda tau, t: tau*M0(t - tau)
def integral(t):
return quad(integrand, r/alpha, r/beta, args=(t))[0]
vec_integral = vectorize(integral)
# Assemble the total AKI & RICHARDS solution
# -----------------------------------------------------------------------------
t = linspace(tmin, tmax, 1000)
UN = CN * (1/r**4) * outer(AN(th, phi), vec_integral(t))
UIP = CIP * (1/r**2) * outer(AIP(th, phi), M0(t - r/alpha))
UIS = CIS * (1/r**2) * outer(AIS(th, phi), M0(t - r/beta))
t, dt = linspace(tmin, tmax, 1001, retstep=True) # diff() returns an (N-1)-sized vector
UFP = CFP * (1/r) * outer(AFP(th, phi), diff(M0(t - r/alpha))/dt)
UFS = CFS * (1/r) * outer(AFS(th, phi), diff(M0(t - r/beta))/dt)
t = linspace(tmin, tmax, 1000)
U = UN + UIP + UIS + UFP + UFS
Ur, Uth, Uphi = U[0,:], U[1,:], U[2,:] # spherical componets of the field u
Ux, Uy, Uz = sph2cart(Ur, Uth, Uphi) # spherical to cartesian coordinates
Explanation: 3. COMPUTE AKI & RICHARDS SOLUTION
End of explanation
# Plotting
# -----------------------------------------------------------------------------
seis = [Ux, Uy, Uz, Ur, Uth, Uphi] # Collection of seismograms
labels = ['$U_x(t)$','$U_y(t)$','$U_z(t)$','$U_r(t)$','$U_\theta(t)$','$U_\phi(t)$']
cols = ['b','r','k','g','c','m']
# Initialize animated plot
fig = plt.figure(figsize=(12,8), dpi=80)
fig.suptitle("Seismic Wavefield of a Double-Couple Point Source", fontsize=16)
plt.ion() # set interactive mode
plt.show()
for i in range(6):
st = seis[i]
ax = fig.add_subplot(2, 3, i+1)
ax.plot(t, st, lw = 1.5, color=cols[i])
ax.set_xlabel('Time(s)')
ax.text(tmin+0.8*(tmax-tmin), 0.7*max(st), labels[i])
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.spines['left'].set_smart_bounds(True)
ax.spines['bottom'].set_smart_bounds(True)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
#savetxt('seis.csv', (t, Ux, Uy, Uz, Ur, Uth, Uphi)) # Export the data as seis.csv in the given order
Explanation: 4. Plot displacement components
End of explanation |
10,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiscale example in one dimension
This script applies the FEM to a one dimensional example of a multiscale problem. This problem was introduced by
Peterseim in "Variational Multiscale Stabilization and the Exponential Decay of correctors, p.2".
$$
\begin{cases}
- (A_{\varepsilon}(x)u_{\varepsilon}'(x))' &= 1, \qquad \text{ for }x \in (0,1)\
u_{\varepsilon}(0)= u_{\varepsilon}(1) &= 0,
\end{cases}
$$
where, for $\varepsilon >0$,
$$
A_{\varepsilon}(x)
Step1: First, we define the given functions and furthermore, we visualize the coefficient for two choices of epsilon.
Step2: We compute the FEM approximation for each mesh size, plot the result and store the energy error. In order to apply the FEM, our computations are based on the 'gridlod' framework but with a coarse right hand side.
Step3: Lastly, we plot the energy error. | Python Code:
import os
import sys
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from gridlod import util, world, fem
from gridlod.world import World
import femsolverCoarse
Explanation: Multiscale example in one dimension
This script applies the FEM to a one dimensional example of a multiscale problem. This problem was introduced by
Peterseim in "Variational Multiscale Stabilization and the Exponential Decay of correctors, p.2".
$$
\begin{cases}
- (A_{\varepsilon}(x)u_{\varepsilon}'(x))' &= 1, \qquad \text{ for }x \in (0,1)\
u_{\varepsilon}(0)= u_{\varepsilon}(1) &= 0,
\end{cases}
$$
where, for $\varepsilon >0$,
$$
A_{\varepsilon}(x) := \frac{1}{4}\left( 2 - \cos \left(\frac{2 \pi x}{\varepsilon}\right) \right)^{-1}.
$$
The exact solution is given by
$$
u_{\varepsilon}(x) = 4 (x-x^2)- 4 \varepsilon \left( \frac{1}{4 \pi} \sin(2 \pi \frac{x}{\varepsilon}) - \frac{1}{2 \pi}x \sin(2 \pi \frac{x}{\varepsilon}) -
\frac{\varepsilon}{4 \pi^2} \cos(2 \pi \frac{x}{\varepsilon}) + \frac{\varepsilon}{4 \pi^2} \right).
$$
In order to demonstrate the issue that comes with multiscale problems in terms of the FEM, we use several choices of the mesh size $h$ and compare the energy error.
End of explanation
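Before discretising, the closed-form solution above can be sanity-checked directly. A minimal sketch (epsilon chosen here as one of the values used below) verifies that the stated formula satisfies the homogeneous boundary conditions.

import numpy as np

eps = 2**(-5)
xx = np.array([0.0, 1.0])
u_exact = (4*(xx - xx**2)
           - 4*eps*(1/(4*np.pi)*np.sin(2*np.pi*xx/eps)
                    - 1/(2*np.pi)*xx*np.sin(2*np.pi*xx/eps)
                    - eps/(4*np.pi**2)*np.cos(2*np.pi*xx/eps)
                    + eps/(4*np.pi**2)))
print(u_exact)   # both entries should be numerically zero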
fine = 4096
NFine = np.array([fine])
NpFine = np.prod(NFine+1)
NList = [2,4,8,16, 32, 64, 128, 256]
epsilon = 2**(-5)
epsilon1 = 2**(-6)
pi = np.pi
xt = util.tCoordinates(NFine).flatten()
xp = util.pCoordinates(NFine).flatten()
aFine = (2 - np.cos(2*pi*xt/epsilon))**(-1)
aFine1 = (2 - np.cos(2*pi*xt/epsilon1))**(-1)
uSol = 4*(xp - xp**2) - 4*epsilon*(1/(4*pi)*np.sin(2*pi*xp/epsilon) -
1/(2*pi)*xp*np.sin(2*pi*xp/epsilon) -
epsilon/(4*pi**2)*np.cos(2*pi*xp/epsilon) +
epsilon/(4*pi**2))
uSol = uSol/4
# plot the coefficient
plt.figure('Coefficient')
plt.plot(xt,aFine, label='$A_{\epsilon}(x)$')
plt.yticks((0,np.max(aFine)+np.min(aFine)),fontsize="small")
plt.ylabel('$y$', fontsize="small")
plt.xlabel('$x$', fontsize="small")
plt.legend(frameon=False,fontsize="large")
plt.title('$A_{\epsilon}(x)$ for $\epsilon=2^{-5}$.')
plt.show()
# plot the coefficient for a smaller epsilon
plt.figure('Coefficient1')
plt.plot(xt,aFine1, label='$A_{\epsilon}(x)$')
plt.yticks((0,np.max(aFine)+np.min(aFine1)),fontsize="small")
plt.ylabel('$y$', fontsize="small")
plt.xlabel('$x$', fontsize="small")
plt.legend(frameon=False,fontsize="large")
plt.title('$A_{\epsilon}(x)$ for $\epsilon=2^{-6}$.')
plt.show()
Explanation: First, we define the given functions and visualize the coefficient for two choices of epsilon.
End of explanation
newErrorFine = []
x = []
y = []
for N in NList:
NWorldCoarse = np.array([N])
boundaryConditions = np.array([[0, 0]])
NCoarseElement = NFine/NWorldCoarse
world = World(NWorldCoarse, NCoarseElement, boundaryConditions)
AFine = fem.assemblePatchMatrix(NFine, world.ALocFine, aFine)
#grid nodes
xpCoarse = util.pCoordinates(NWorldCoarse).flatten()
NpCoarse = np.prod(NWorldCoarse+1)
f = np.ones(NpCoarse)
uCoarseFull = femsolverCoarse.solveCoarse_fem(world, aFine, f, boundaryConditions)
basis = fem.assembleProlongationMatrix(NWorldCoarse, NCoarseElement)
uLodCoarse = basis*uCoarseFull
newErrorFine.append(np.sqrt(np.dot(uSol - uLodCoarse, AFine*(uSol - uLodCoarse))))
x.append(N)
y.append(1./N)
if np.size(x)==1:
plt.figure('FEM-Solutions')
plt.subplots_adjust(left=0.01,bottom=0.04,right=0.99,top=0.95,wspace=0,hspace=0.2)
plt.subplot(241)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'o--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==2:
plt.subplot(242)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'o--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==3:
plt.subplot(243)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'o--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==4:
plt.subplot(244)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'o--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==5:
plt.subplot(245)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==6:
plt.subplot(246)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==7:
plt.subplot(247)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
elif np.size(x)==8:
plt.subplot(248)
plt.plot(xp,uSol,'k', label='$u_{\epsilon}(x)$')
plt.plot(xpCoarse,uCoarseFull,'--', label= 'u_h(x)')
plt.title('1/h= ' + str(N),fontsize="small")
plt.tick_params(axis='both', which='both', bottom='off', top='off', labelbottom='off', right='off', left='off', labelleft='off')
plt.legend(frameon=False,fontsize="small")
plt.show()
Explanation: We compute the FEM approximation for each mesh size, plot the result and store the energy error. In order to apply the FEM, our computations are based on the 'gridlod' framework but with a coarse right hand side.
End of explanation
plt.figure("Error")
plt.loglog(x,newErrorFine,'o-', basex=2, basey=2)
plt.loglog(x,y,'--k',basex=2, basey=2, linewidth=1, alpha=0.3)
plt.ylabel('Energy error')
plt.xlabel('$1/h$')
plt.subplots_adjust(left=0.1,bottom=0.1,right=0.98,top=0.95,wspace=0.2,hspace=0.2)
plt.title('Energy error for the standard FEM')
plt.grid(True,which="both",ls="--")
plt.show()
Explanation: Lastly, we plot the energy error.
End of explanation |
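If a number is preferred over the picture alone, the slope of the log-log data gives a rough overall convergence rate. This sketch assumes the lists x and newErrorFine from the cells above are still in scope.

# least-squares slope of log(error) versus log(1/h); the minus sign turns decay into a positive rate
rate = -np.polyfit(np.log(np.asarray(x, dtype=float)), np.log(np.asarray(newErrorFine)), 1)[0]
print("estimated rate of the energy error with respect to 1/h: {:.2f}".format(rate))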
10,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AveragePooling2D
[pooling.AveragePooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
Step1: [pooling.AveragePooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.AveragePooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
Step3: [pooling.AveragePooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.AveragePooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.AveragePooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.AveragePooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
Step7: [pooling.AveragePooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
Step8: [pooling.AveragePooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.AveragePooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
Step10: [pooling.AveragePooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.AveragePooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
Step12: [pooling.AveragePooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests | Python Code:
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: AveragePooling2D
[pooling.AveragePooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
End of explanation
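For reference, the spatial output sizes reported below follow the standard pooling formulas. The helper pooled_size is a hypothetical convenience defined only for this notebook, not part of Keras.

import math

def pooled_size(n, pool, stride, padding):
    # hypothetical helper: 'valid' gives floor((n - pool) / stride) + 1, 'same' gives ceil(n / stride)
    if padding == 'valid':
        return (n - pool) // stride + 1
    return math.ceil(n / stride)

# first test case: 6x6 spatial input, pool 2x2, strides default to the pool size, 'valid'
print(pooled_size(6, 2, 2, 'valid'), pooled_size(6, 2, 2, 'valid'))   # expected: 3 3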
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(273)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(274)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(275)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(276)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(277)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(278)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(279)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(280)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
End of explanation
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(281)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
End of explanation
data_in_shape = (4, 6, 4)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
import os
filename = '../../../test/data/layers/pooling/AveragePooling2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
10,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step36: Neural Network Training
Hyperparameters
Tune the following parameters
Step38: Build the Graph
Build the graph using the neural network you implemented.
Step40: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step42: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step44: Checkpoint
Step47: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step50: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step52: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (8, 100)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
words = set()
index_to_word = {}
word_to_index = {}
for word in text:
words.add(word)
for index, word in enumerate(words):
#print (word,index)
index_to_word[index] = word
word_to_index[word] = index
return word_to_index, index_to_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
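An equivalent and slightly more compact construction enumerates the unique words directly; sorting by frequency with collections.Counter is a common convention, but any consistent ordering satisfies the description above. This is an alternative sketch, not the graded implementation.

from collections import Counter

def create_lookup_tables_alt(text):
    # alternative sketch: most frequent words get the smallest ids
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab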
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
ret = {}
ret['.'] = "||Period||" #( . )
ret[','] = "||Comma||" #( , )
ret['"'] = "||Quotation_Mark||" # ( " )
ret[';'] = "||Semicolon||" #( ; )
ret['!'] = "||Exclamation_mark||" #( ! )
ret['?'] = "||Question_mark||" #( ? )
ret['('] = "||Left_Parentheses||" #( ( )
ret[')'] = "||Right_Parentheses||" #( ) )
ret['--'] = "||Dash||" # ( -- )
ret['\n'] = "||Return||" # ( \n )
return ret
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
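To see how this dictionary is meant to be used, here is a rough sketch of the substitution the preprocessing helper performs. The sample string is only an illustrative line written in the style of the scripts, and the real replacement happens inside helper.preprocess_and_save_data.

# illustrative only -- not how the project's helper is actually invoked
token_dict = token_lookup()
sample = "Moe_Szyslak: (sighs) Another round, Homer?\n"
for punctuation, token in token_dict.items():
    sample = sample.replace(punctuation, ' {} '.format(token))
print(sample.lower().split())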
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None ], name="input")
targets = tf.placeholder(tf.int32, [None, None ], name="targets")
learning_rate = tf.placeholder(tf.float32, None, name="LearningRate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
layer_count = 2
keep_prob = tf.constant(0.7,tf.float32, name="keep_prob")
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
lstm2 = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm, lstm2], state_is_tuple=True)
initial_state = cell.zero_state( batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state" )
#_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state)
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
import random
import math
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
ret = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
ret = tf.nn.embedding_lookup(ret, input_data)
print("shape {}".format(ret.get_shape().as_list()))
return ret
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity (final_state, "final_state")
return output, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embedded = get_embed(input_data, vocab_size, rnn_size)
out, fin = build_rnn(cell, embedded)
out = tf.contrib.layers.fully_connected(out,vocab_size, activation_fn=None)
out_shape = out.get_shape().as_list()
print("build_nn embedded{}, out:{}, fin:{}".format(embedded.get_shape().as_list(),out_shape, fin.get_shape().as_list()))
print()
return out, fin
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
text = int_text
ret = np.array([])
inputs = []
targets = []
text_len = len(text) - len(text) % (seq_length*batch_size)
print ("get_batches text:{}, batch:{}, seq:{}".format(text_len, batch_size, seq_length))
ret=[]
for i in range(0, text_len-1, seq_length):
seq = list(int_text[i:i+seq_length])
inputs.append(list(int_text[i:i+seq_length]))
targets.append(list(int_text[i+1:i+seq_length+1]))
for i in range(0,len(inputs),batch_size):
ret.append([inputs[i:i+batch_size], targets[i:i+batch_size]])
ret = np.asanyarray(ret)
print("batch test ", ret.shape, ret[3,:,2])
return ret
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 300 # previously 150, but want to get lower loss.
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 1024
# Embedding Dimension Size
embed_dim = None  # not used: build_nn() above embeds with rnn_size dimensions
# Sequence Length
seq_length = 12 # already discouraged from using 6 and 16, avg sentence length being 10-12
# I'm favoring this formula from the curse of learning rate being a function of parameter count.
# This is guesswork (empirical), but gives good results.
learning_rate = 1/np.sqrt(rnn_size*seq_length*6700)
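# Worked value of the heuristic above (added note): 1/sqrt(1024 * 12 * 6700) is roughly 1.1e-4.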
print( "learning rate {}, vocab_size {}".format(learning_rate,6700))
# Training-loss notes from earlier runs (learning rate -- loss, epoch-range: loss):
# 100 inf
# 0.0012 -- 1.666 860-1210: 1.259
# 0.00012 -- 5.878 1920-2190: 1.070
# 0.000012 7.4 3000: 2.107
# 0.00012 -- 6.047 3000: 0.964 -- embedding w truncated normal.
# 1024
# 0.00812 -- 1.182 stuck
# 0.00612 -- 0.961 stuck
# Show stats for every n number of batches
show_every_n_batches = 20
tf.set_random_seed(42)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initials = loaded_graph.get_tensor_by_name("initial_state:0")
finals = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initials, finals, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
# As suggested by the last reviewer - tuning randomness
#print("probabs:{}, - {}".format(probabilities.shape, int_to_vocab[np.argmax(probabilities)]))
mostprobable = np.argsort(probabilities)
ret = np.random.choice(mostprobable[-3:],1, p=[0.1, 0.2, 0.7])
return int_to_vocab[ret[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
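# Alternative sketch (not the graded solution): sample over the full softmax distribution
# instead of only the top three words, e.g.
# return int_to_vocab[np.random.choice(len(probabilities), p=probabilities)]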
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
10,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1
Prepared by David Kirkby dkirkby@uci.edu on 14-Jan-2016.
Step2: 1.4.2 Code Management with Git
See the links from the 2014 Physics 231 website.
1.5.4 Fetching and Displaying SDSS Spectra
Reproduce Figure 1.2 showing a sample SDSS spectrum
Step4: 1.5.6 SDSS DR7 Quasar Catalog
Reproduce Figure 1.4 showing color (r-i) vs. redshift for SDSS DR7 quasars
Step5: Access BOSS spectra and metadata
The AstroML tools can only access pre-BOSS SDSS data, i.e. up to data release DR7. However, all BOSS data (and eventually eBOSS data) can be accessed with the https
Step6: Read the DR12 quasar catalog
Step8: 1.6.1 Plotting Two-Dimensional Representations of Large Data Sets
Reproduce Figure 1.9 showing g-r vs r-i for SDSS stripe-82 standard stars as a scatter plot with contours overlaid
Step9: Use the same technique to plot the r-i vs. redshift quasar plot above
Step11: 1.6.3 Plotting Representations of Data on the Sky
Reproduce Figure 1.15 showing the WMAP7 raw temperature map using healpix with nside=512 (~3.1Mpix)
Step12: You can make nicer sky plots using the Basemap map-projections library. This example is borrowed from the bossdata docs and shows the number density of BOSS DR12 quasars on the sky
Step13: Graphing Extras
Two packages worth exploring for visualization are | Python Code:
%pylab inline
import astroML
print astroML.__version__
Explanation: Chapter 1
Prepared by David Kirkby dkirkby@uci.edu on 14-Jan-2016.
End of explanation
SDSS Spectrum Example
---------------------
Figure 1.2.
An example of an SDSS spectrum (the specific flux plotted as a function of
wavelength) loaded from the SDSS SQL server in real time using Python tools
provided here (this spectrum is uniquely described by SDSS parameters
plate=1615, fiber=513, and mjd=53166).
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from matplotlib import pyplot as plt
from astroML.datasets import fetch_sdss_spectrum
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Fetch single spectrum
plate = 1615
mjd = 53166
fiber = 513
spec = fetch_sdss_spectrum(plate, mjd, fiber)
#------------------------------------------------------------
# Plot the resulting spectrum
fig, ax = plt.subplots(figsize=(5, 3.75))
ax.plot(spec.wavelength(), spec.spectrum, '-k', lw=1)
ax.set_xlim(3000, 10000)
ax.set_ylim(25, 300)
ax.set_xlabel(r'$\lambda {(\rm \AA)}$')
ax.set_ylabel('Flux')
ax.set_title('Plate = %(plate)i, MJD = %(mjd)i, Fiber = %(fiber)i' % locals())
plt.show()
Explanation: 1.4.2 Code Management with Git
See the links from the 2014 Physics 231 website.
1.5.4 Fetching and Displaying SDSS Spectra
Reproduce Figure 1.2 showing a sample SDSS spectrum:
End of explanation
SDSS DR7 Quasars
----------------
Figure 1.4.
The r-i color vs. redshift diagram for the first 10,000 entries from the
SDSS Data Release 7 Quasar Catalog. The color variation is due to emission
lines entering and exiting the r and i band wavelength windows.
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from matplotlib import pyplot as plt
from astroML.datasets import fetch_dr7_quasar
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Fetch the quasar data
data = fetch_dr7_quasar()
# select the first 10000 points
data = data[:10000]
r = data['mag_r']
i = data['mag_i']
z = data['redshift']
#------------------------------------------------------------
# Plot the quasar data
fig, ax = plt.subplots(figsize=(5, 3.75))
ax.plot(z, r - i, marker='.', markersize=2, linestyle='none', color='black')
ax.set_xlim(0, 5)
ax.set_ylim(-0.5, 1.0)
ax.set_xlabel(r'${\rm redshift}$')
ax.set_ylabel(r'${\rm r-i}$')
plt.show()
Explanation: 1.5.6 SDSS DR7 Quasar Catalog
Reproduce Figure 1.4 showing color (r-i) vs. redshift for SDSS DR7 quasars:
End of explanation
import bossdata
print bossdata.__version__
Explanation: Access BOSS spectra and metadata
The AstroML tools can only access pre-BOSS SDSS data, i.e. up to data release DR7. However, all BOSS data (and eventually eBOSS data) can be accessed with the bossdata package (https://bossdata.readthedocs.org/en/latest/), developed here at UCI:
End of explanation
quasar_catalog = bossdata.meta.Database(quasar_catalog=True)
dr12q = quasar_catalog.select_all(what='RA,DEC,Z_VI,PSFMAG_2,PSFMAG_3', max_rows=0)
z = dr12q['Z_VI']
r = dr12q['PSFMAG_2']
i = dr12q['PSFMAG_3']
fig, ax = plt.subplots(figsize=(5, 3.75))
ax.plot(z, r - i, marker='.', markersize=2, linestyle='none', color='black')
ax.set_xlim(0, 5)
ax.set_ylim(-0.5, 1.0)
ax.set_xlabel(r'${\rm redshift}$')
ax.set_ylabel(r'${\rm r-i}$')
plt.show()
Explanation: Read the DR12 quasar catalog:
End of explanation
SDSS Stripe 82 Standard Stars
-----------------------------
Figure 1.9.
Scatter plot with contours over dense regions.This is a color-color diagram
of the entire set of SDSS Stripe 82 standard stars; cf. figure 1.6.
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from matplotlib import pyplot as plt
from astroML.plotting import scatter_contour
from astroML.datasets import fetch_sdss_S82standards
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Fetch the Stripe 82 standard star catalog
data = fetch_sdss_S82standards()
g = data['mmu_g']
r = data['mmu_r']
i = data['mmu_i']
#------------------------------------------------------------
# plot the results
fig, ax = plt.subplots(figsize=(5, 3.75))
scatter_contour(g - r, r - i, threshold=200, log_counts=True, ax=ax,
histogram2d_args=dict(bins=40),
plot_args=dict(marker=',', linestyle='none', color='black'),
contour_args=dict(cmap=plt.cm.bone))
ax.set_xlabel(r'${\rm g - r}$')
ax.set_ylabel(r'${\rm r - i}$')
ax.set_xlim(-0.6, 2.5)
ax.set_ylim(-0.6, 2.5)
plt.show()
Explanation: 1.6.1 Plotting Two-Dimensional Representations of Large Data Sets
Reproduce Figure 1.9 showing g-r vs r-i for SDSS stripe-82 standard stars as a scatter plot with contours overlaid:
End of explanation
z = dr12q['Z_VI']
r = dr12q['PSFMAG_2']
i = dr12q['PSFMAG_3']
fig, ax = plt.subplots(figsize=(5, 3.75))
scatter_contour(z, r - i, threshold=1000, log_counts=True, ax=ax,
histogram2d_args=dict(bins=40),
plot_args=dict(marker=',', linestyle='none', color='black'),
contour_args=dict(cmap=plt.cm.bone))
ax.set_xlim(0, 5)
ax.set_ylim(-0.5, 1.0)
ax.set_xlabel(r'${\rm redshift}$')
ax.set_ylabel(r'${\rm r-i}$')
plt.show()
Explanation: Use the same technique to plot the r-i vs. redshift quasar plot above:
End of explanation
Example of HealPix pixellization
--------------------------------
Figure 1.15.
The top panel shows HEALPix pixels in nested order. The 12 fundamental sky
divisions can be seen, as well as the hierarchical nature of the smaller
pixels. This shows a pixelization with nside = 4, that is, each of the 12
large regions has 4 x 4 pixels, for a total of 192 pixels. The lower panel
shows a seven-year co-add of raw WMAP data, plotted using the HEALPix
projection using the HealPy package. This particular realization has
nside = 512, for a total of 3,145,728 pixels. The pixels are roughly
6.8 arcminutes on a side.
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from __future__ import print_function
import numpy as np
from matplotlib import pyplot as plt
# warning: due to a bug in healpy, importing it before pylab can cause
# a segmentation fault in some circumstances.
import healpy as hp
from astroML.datasets import fetch_wmap_temperatures
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Next plot the wmap pixellization
wmap_unmasked = fetch_wmap_temperatures(masked=False)
# plot the unmasked map
fig = plt.figure(2, figsize=(10, 7.5))
hp.mollview(wmap_unmasked, min=-1, max=1, title='Raw WMAP data',
unit=r'$\Delta$T (mK)', fig=2)
plt.show()
Explanation: 1.6.3 Plotting Representations of Data on the Sky
Reproduce Figure 1.15 showing the WMAP7 raw temperature map using healpix with nside=512 (~3.1Mpix):
End of explanation
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import PolyCollection
def plot_sky(ra, dec, data=None, nside=16, label='', projection='eck4', cmap=plt.get_cmap('jet'), norm=None,
hide_galactic_plane=False):
# get pixel area in degrees
pixel_area = hp.pixelfunc.nside2pixarea(nside, degrees=True)
# find healpixels associated with input vectors
pixels = hp.ang2pix(nside, 0.5*np.pi-np.radians(dec), np.radians(ra))
# find unique pixels
unique_pixels = np.unique(pixels)
# count number of points in each pixel
bincounts = np.bincount(pixels)
# if no data provided, show counts per sq degree
# otherwise, show mean per pixel
if data is None:
values = bincounts[unique_pixels]/pixel_area
else:
weighted_counts = np.bincount(pixels, weights=data)
values = weighted_counts[unique_pixels]/bincounts[unique_pixels]
# find pixel boundaries
corners = hp.boundaries(nside, unique_pixels, step=1)
corner_theta, corner_phi = hp.vec2ang(corners.transpose(0,2,1))
corner_ra, corner_dec = np.degrees(corner_phi), np.degrees(np.pi/2-corner_theta)
# set up basemap
m = Basemap(projection=projection, lon_0=90, resolution='l', celestial=True)
m.drawmeridians(np.arange(0, 360, 30), labels=[0,0,1,0], labelstyle='+/-')
m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
m.drawmapboundary()
# convert sky coords to map coords
x,y = m(corner_ra, corner_dec)
# regroup into pixel corners
verts = np.array([x.reshape(-1,4), y.reshape(-1,4)]).transpose(1,2,0)
# Make the collection and add it to the plot.
coll = PolyCollection(verts, array=values, cmap=cmap, norm=norm, edgecolors='none')
plt.gca().add_collection(coll)
plt.gca().autoscale_view()
if not hide_galactic_plane:
from astropy.coordinates import SkyCoord
import astropy.units as u
# generate vector in galactic coordinates and convert to equatorial coordinates
galactic_l = np.linspace(0, 2*np.pi, 1000)
galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
# project to map coordinates
galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
# Add a colorbar for the PolyCollection
plt.colorbar(coll, orientation='horizontal', pad=0.01, aspect=40, label=label)
return m
plt.figure(figsize=(12,9))
plot_sky(dr12q['RA'].data, dr12q['DEC'].data, label='Number of quasars per square degree')
plt.show()
Explanation: You can make nicer sky plots using the Basemap map-projections library. This example is borrowed from the bossdata docs and shows the number density of BOSS DR12 quasars on the sky:
End of explanation
import seaborn as sns
z = dr12q['Z_VI']
r = dr12q['PSFMAG_2']
i = dr12q['PSFMAG_3']
fig, ax = plt.subplots(figsize=(5, 3.75))
scatter_contour(z, r - i, threshold=1000, log_counts=True, ax=ax,
histogram2d_args=dict(bins=40),
plot_args=dict(marker=',', linestyle='none', color='black'),
contour_args=dict(cmap=plt.cm.bone))
ax.set_xlim(0, 5)
ax.set_ylim(-0.5, 1.0)
ax.set_xlabel(r'${\rm redshift}$')
ax.set_ylabel(r'${\rm r-i}$')
plt.show()
Explanation: Graphing Extras
Two packages worth exploring for visualization are:
* Seaborn: builds on top of matplotlib and provides better defaults and some higher-level graphing functions.
* Bokeh: uses a client-server architecture to allow easy interaction with graphs.
Both of these work in notebooks. The easiest way to get started is to import seaborn, which improves your defaults.
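A minimal Bokeh sketch (an addition for illustration, assuming the bokeh package is installed; not part of the original notebook) of the quasar color-redshift scatter could look like:
```python
from bokeh.plotting import figure, show, output_notebook
output_notebook()
p = figure(x_axis_label='redshift', y_axis_label='r-i')
p.scatter(z[:10000], (r - i)[:10000], size=2, alpha=0.3)
show(p)
```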
End of explanation |
10,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoregressive Distributed Lag (ARDL) models
ARDL Models
Autoregressive Distributed Lag (ARDL) models extend Autoregressive models with lags of explanatory variables. While ARDL models are technically AR-X models, the key difference is that ARDL models focus on the exogenous variables and selecting the correct lag structure from both the endogenous variable and the exogenous variables. ARDL models are also closely related to Vector Autoregressions, and a single ARDL is effectively one row of a VAR. The key distinction is that an ARDL assumes that the exogenous variables are exogenous in the sense that it is not necessary to include the endogenous variable as a predictor of the exogenous variables.
The full specification of ARDL models is
$$
Y_t = \underset{\text{Constant and Trend}}{\underbrace{\delta_0 + \delta_1 t + \ldots + \delta_k t^k}}
+ \underset{\text{Seasonal}}{\underbrace{\sum_{i=0}^{s-1} \gamma_i S_i}}
+ \underset{\text{Autoregressive}}{\underbrace{\sum_{p=1}^P \phi_p Y_{t-p}}}
+ \underset{\text{Distributed Lag}}{\underbrace{\sum_{k=1}^M \sum_{j=0}^{Q_k} \beta_{k,j} X_{k, t-j}}}
+ \underset{\text{Fixed}}{\underbrace{Z_t \Gamma}} + \epsilon_t
$$
The terms in the model are
Step1: Data
This notebook makes use of money demand data from Denmark, as first used in S. Johansen and K. Juselius (1990). The key variables are
Step2: We plot the demeaned data so that all series appear on the same scale. The lrm series appears to be non-stationary, as does lry. The stationarity of the other two is less obvious.
Step3: Model Selection
ardl_select_order can be used to automatically select the order. Here we use the minimum AIC among all models that consider up to 3 lags of the endogenous variable and 3 lags of each exogenous variable. trend="c" indicates that a constant should be included in the model.
Step4: The optimal order is returned as the number of lags of the endogenous variable followed by each of the exogenous regressors. The attribute model on sel_res contains the model ARDL specification which can be used to call fit. Here we look at the summary where the L# indicates that lag length (e.g., L0 is no lag, i.e., $X_{k,t}$, L2 is 2 lags, i.e., $X_{k,t-2}$).
Step5: Global searches
The selection criteria can be switched to the BIC, which chooses a smaller model. Here we also use the glob=True option to perform a global search which considers models with any subset of lags up to the maximum lag allowed (3 here). This option lets the model selection choose non-contiguous lag specifications.
Step6: While the ardl_order shows the largest included lag of each variable, ar_lags and dl_lags show the specific lags included. The AR component is regular in the sense that all 3 lags are included. The DL component is not, since ibo selects only lags 0 and 3 and ide selects only lag 2.
Step7: We can take a look at the best performing models according to the BIC which are stored in the bic property. ibo at lags 0 and 3 is consistently selected, as is ide at either lag 2 or 3, and lry at lag 0. The selected AR lags vary more, although all of the best specifications select some.
Step8: Direct Parameterization
ARDL models can be directly specified using the ARDL class. The first argument is the endogenous variable ($Y_t$). The second is the AR lags. It can be a constant, in which case lags 1, 2, ..., $P$ are included, or a list of specific lags indices to include (e.g., [1, 4]). The third are the exogenous variables, and the fourth is the list of lags to include. This can be one of
An int
Step9: NumPy Data
Below we see how the specification of ARDL models differs when using NumPy arrays. The key difference is that the keys in the dictionary are now integers which indicate the column of x to use. This model is identical to the previously fit model and all key value match exactly (e.g., Log Likelihood).
Step10: Causal models
Using the causal=True flag eliminates lag 0 from the DL components, so that all variables included in the model are known at time $t-1$ when modeling $Y_t$.
Step11: Unconstrained Error Correction Models (UECM)
Unconstrained Error Correction Models reparameterize ARDL model to focus on the long-run component of a time series. The reparameterized model is
$$
\Delta Y_t = \underset{\text{Constant and Trend}}{\underbrace{\delta_0 + \delta_1 t + \ldots + \delta_k t^k}}
+ \underset{\text{Seasonal}}{\underbrace{\sum_{i=0}^{s-1} \gamma_i S_i}}
+ \underset{\text{Long-Run}}{\underbrace{\lambda_0 Y_{t-1} + \sum_{b=1}^M \lambda_i X_{b,t-1}}}
+ \underset{\text{Autoregressive}}{\underbrace{\sum_{p=1}^P \phi_p \Delta Y_{t-p}}}
+ \underset{\text{Distributed Lag}}{\underbrace{\sum_{k=1}^M \sum_{j=0}^{Q_k} \beta_{k,j} \Delta X_{k, t-j}}}
+ \underset{\text{Fixed}}{\underbrace{Z_t \Gamma}} + \epsilon_t
$$
Most of the components are the same. The key differences are
Step12: Cointegrating Relationships
Because the focus is on the long-run relationship, the results of UECM model fits contains a number of properties that focus on the long-run relationship. These are all prefixed ci_, for cointegrating. ci_summary contains the normalized estimates of the cointegrating relationship and associated estimated values.
Step13: ci_resids contains the long-run residual, which is the error that drives future changes in $\Delta Y_t$.
Step14: Seasonal Dummies
Here we add seasonal terms, which appear to be statistically significant.
Step15: All deterministic terms are included in the ci_ prefixed terms. Here we see the normalized seasonal effects in the summary.
Step16: The residuals are somewhat more random in appearance.
Step17: The relationship between Consumption and Growth
Here we look at an example from Greene's Econometric analysis which focuses on the long-run relationship between consumption and growth. We start by downloading the raw data.
Greene, W. H. (2000). Econometric analysis 4th edition. International edition, New Jersey
Step18: We then transform the index to be a pandas DatetimeIndex so that we can easily use seasonal terms.
Step19: We defined g as the log of real gdp and c as the log of real consumption.
Step20: Lag Length Selection
The selected model contains 5 lags of consumption and 2 of growth (0 and 1). Here we include seasonal terms although these are not significant.
Step21: from_ardl is a simple way to get the equivalent UECM specification. Here we rerun the selection without the seasonal terms.
Step22: We see that for every % increase in consumption, we need a 1.05% increase in gdp. In other words, the saving rate is estimated to be around 5%.
Step23: Direct Specification of UECM models
UECM can be used to directly specify model lag lengths.
Step24: The changes in the lag structure make little difference in the estimated long-run relationship.
Step25: Bounds Testing
UECMResults exposes the bounds test of Pesaran, Shin, and Smith (2001). This test facilitates testing whether there is a level relationship between a set of variables without identifying which variables are I(1). This test provides two sets of critical and p-values. If the test statistic is below the critical value for the lower bound, then there appears to be no levels relationship irrespective of the order of integration of the $X$ variables. If it is above the upper bound, then there appears to be a levels relationship again, irrespective of the order of integration of the $X$ variables. There are 5 cases covered in the paper that include different combinations of deterministic regressors in the model or the test.
$$\Delta Y_{t}=\delta_{0} + \delta_{1}t + Z_{t-1}\beta + \sum_{j=0}^{P}\Delta X_{t-j}\Gamma + \epsilon_{t}$$
where $Z_{t-1}$ includes both $Y_{t-1}$ and $X_{t-1}$.
The cases determine which deterministic terms are included in the model and which are tested as part of the test.
No deterministic terms
Constant included in both the model and the test
Constant included in the model but not in the test
Constant and trend included in the model, only trend included in the test
Constant and trend included in the model, neither included in the test
Here we run the test on the Danish money demand data set. Here we see the test statistic is above the 95% critical value for both the lower and upper.
Pesaran, M. H., Shin, Y., & Smith, R. J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of applied econometrics, 16(3), 289-326.
Step26: Case 3 also rejects the null of no levels relationship. | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style("darkgrid")
sns.mpl.rc("figure", figsize=(16, 6))
sns.mpl.rc("font", size=14)
Explanation: Autoregressive Distributed Lag (ARDL) models
ARDL Models
Autoregressive Distributed Lag (ARDL) models extend Autoregressive models with lags of explanatory variables. While ARDL models are technically AR-X models, the key difference is that ARDL models focus on the exogenous variables and selecting the correct lag structure from both the endogenous variable and the exogenous variables. ARDL models are also closely related to Vector Autoregressions, and a single ARDL is effectively one row of a VAR. The key distinction is that an ARDL assumes that the exogenous variables are exogenous in the sense that it is not necessary to include the endogenous variable as a predictor of the exogenous variables.
The full specification of ARDL models is
$$
Y_t = \underset{\text{Constant and Trend}}{\underbrace{\delta_0 + \delta_1 t + \ldots + \delta_k t^k}}
+ \underset{\text{Seasonal}}{\underbrace{\sum_{i=0}^{s-1} \gamma_i S_i}}
+ \underset{\text{Autoregressive}}{\underbrace{\sum_{p=1}^P \phi_p Y_{t-p}}}
+ \underset{\text{Distributed Lag}}{\underbrace{\sum_{k=1}^M \sum_{j=0}^{Q_k} \beta_{k,j} X_{k, t-j}}}
+ \underset{\text{Fixed}}{\underbrace{Z_t \Gamma}} + \epsilon_t
$$
The terms in the model are:
$\delta_i$: constant and deterministic time regressors. Set using trend.
$S_i$ are seasonal dummies which are included if seasonal=True.
$X_{k,t-j}$ are the exogenous regressors. There are a number of formats that can be used to specify which lags are included. Note that the included lag lengths do not need to be the same. If causal=True, then the lags start with lag 1. Otherwise lags begin with 0 so that the model includes the contemporaneous relationship between $Y_t$ and $X_t$.
$Z_t$ are any other fixed regressors that are not part of the distributed lag specification. In practice these regressors may be included when they do not contribute to the long-run relationship between $Y_t$ and the vector of exogenous variables $X_t$.
${\epsilon_t}$ is assumed to be a White Noise process
End of explanation
from statsmodels.datasets.danish_data import load
from statsmodels.tsa.api import ARDL
from statsmodels.tsa.ardl import ardl_select_order
data = load().data
data = data[["lrm", "lry", "ibo", "ide"]]
data.tail()
Explanation: Data
This notebook makes use of money demand data from Denmark, as first used in S. Johansen and K. Juselius (1990). The key variables are:
lrm: Log of real money measured using M2
lry: Log of real income
ibo: Interest rate on bonds
ide: Interest rate of bank deposits
The standard model uses lrm as the dependent variable and the other three as exogenous drivers.
Johansen, S. and Juselius, K. (1990), Maximum Likelihood Estimation and Inference on Cointegration – with Applications to the Demand for Money, Oxford Bulletin of Economics and Statistics, 52, 2, 169–210.
We start by loading the data and examining it.
End of explanation
_ = (data - data.mean()).plot()
Explanation: We plot the demeaned data so that all series appear on the same scale. The lrm series appears to be non-stationary, as does lry. The stationarity of the other two is less obvious.
End of explanation
sel_res = ardl_select_order(
data.lrm, 3, data[["lry", "ibo", "ide"]], 3, ic="aic", trend="c"
)
print(f"The optimal order is: {sel_res.model.ardl_order}")
Explanation: Model Selection
ardl_select_order can be used to automatically select the order. Here we use the minimum AIC among all models that consider up to 3 lags of the endogenous variable and 3 lags of each exogenous variable. trend="c" indicates that a constant should be included in the model.
End of explanation
res = sel_res.model.fit()
res.summary()
Explanation: The optimal order is returned as the number of lags of the endogenous variable followed by each of the exogenous regressors. The attribute model on sel_res contains the model ARDL specification which can be used to call fit. Here we look at the summary where the L# indicates that lag length (e.g., L0 is no lag, i.e., $X_{k,t}$, L2 is 2 lags, i.e., $X_{k,t-2}$).
End of explanation
sel_res = ardl_select_order(
data.lrm, 3, data[["lry", "ibo", "ide"]], 3, ic="bic", trend="c", glob=True
)
sel_res.model.ardl_order
Explanation: Global searches
The selection criteria can be switched to the BIC, which chooses a smaller model. Here we also use the glob=True option to perform a global search which considers models with any subset of lags up to the maximum lag allowed (3 here). This option lets the model selection choose non-contiguous lag specifications.
End of explanation
sel_res.model.ar_lags
sel_res.model.dl_lags
Explanation: While the ardl_order shows the largest included lag of each variable, ar_lags and dl_lags show the specific lags included. The AR component is regular in the sense that all 3 lags are included. The DL component is not, since ibo selects only lags 0 and 3 and ide selects only lag 2.
End of explanation
for i, val in enumerate(sel_res.bic.head(10)):
print(f"{i+1}: {val}")
Explanation: We can take a look at the best performing models according to the BIC which are stored in the bic property. ibo at lags 0 and 3 is consistently selected, as is ide at either lag 2 or 3, and lry at lag 0. The selected AR lags vary more, although all of the best specifications select some.
End of explanation
res = ARDL(
data.lrm, 2, data[["lry", "ibo", "ide"]], {"lry": 1, "ibo": 2, "ide": 3}, trend="c"
).fit()
res.summary()
Explanation: Direct Parameterization
ARDL models can be directly specified using the ARDL class. The first argument is the endogenous variable ($Y_t$). The second is the AR lags. It can be a constant, in which case lags 1, 2, ..., $P$ are included, or a list of specific lags indices to include (e.g., [1, 4]). The third are the exogenous variables, and the fourth is the list of lags to include. This can be one of
An int: Include lags 0, 1, ..., Q
A dict with column names when exog is a DataFrame or numeric column locations when exog is a NumPy array (e.g., {0:1, 1: 2, 2:3}, would match the specification below if a NumPy array was used.
A dict with column names (DataFrames) or integers (NumPy arrays) that contains a list of specific lags to include (e.g., {"lry":[0,2], "ibo":[1,2]}).
The specification below matches that model selected by ardl_select_order.
End of explanation
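# Added sketch (not in the original notebook): the dict-of-lists format described above
# allows non-contiguous lag choices, e.g. the lags picked by the global BIC search
# (ibo at lags 0 and 3, ide at lag 2, lry at lag 0). `irregular_res` is a hypothetical name.
irregular_res = ARDL(
data.lrm, 3, data[["lry", "ibo", "ide"]],
{"lry": [0], "ibo": [0, 3], "ide": [2]}, trend="c"
).fit()
irregular_res.summary()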
y = np.asarray(data.lrm)
x = np.asarray(data[["lry", "ibo", "ide"]])
res = ARDL(y, 2, x, {0: 1, 1: 2, 2: 3}, trend="c").fit()
res.summary()
Explanation: NumPy Data
Below we see how the specification of ARDL models differs when using NumPy arrays. The key difference is that the keys in the dictionary are now integers which indicate the column of x to use. This model is identical to the previously fit model and all key value match exactly (e.g., Log Likelihood).
End of explanation
res = ARDL(
data.lrm,
2,
data[["lry", "ibo", "ide"]],
{"lry": 1, "ibo": 2, "ide": 3},
trend="c",
causal=True,
).fit()
res.summary()
Explanation: Causal models
Using the causal=True flag eliminates lag 0 from the DL components, so that all variables included in the model are known at time $t-1$ when modeling $Y_t$.
End of explanation
from statsmodels.tsa.api import UECM
sel_res = ardl_select_order(
data.lrm, 3, data[["lry", "ibo", "ide"]], 3, ic="aic", trend="c"
)
ecm = UECM.from_ardl(sel_res.model)
ecm_res = ecm.fit()
ecm_res.summary()
Explanation: Unconstrained Error Correction Models (UECM)
Unconstrained Error Correction Models reparameterize ARDL model to focus on the long-run component of a time series. The reparameterized model is
$$
\Delta Y_t = \underset{\text{Constant and Trend}}{\underbrace{\delta_0 + \delta_1 t + \ldots + \delta_k t^k}}
+ \underset{\text{Seasonal}}{\underbrace{\sum_{i=0}^{s-1} \gamma_i S_i}}
+ \underset{\text{Long-Run}}{\underbrace{\lambda_0 Y_{t-1} + \sum_{b=1}^M \lambda_i X_{b,t-1}}}
+ \underset{\text{Autoregressive}}{\underbrace{\sum_{p=1}^P \phi_p \Delta Y_{t-p}}}
+ \underset{\text{Distributed Lag}}{\underbrace{\sum_{k=1}^M \sum_{j=0}^{Q_k} \beta_{k,j} \Delta X_{k, t-j}}}
+ \underset{\text{Fixed}}{\underbrace{Z_t \Gamma}} + \epsilon_t
$$
Most of the components are the same. The key differences are:
The levels only enter at lag 1
All other lags of $Y_t$ or $X_{k,t}$ are differenced
Due to their structure, UECM models do not support irregular lag specifications, and so lags specifications must be integers. The AR lag length must be an integer or None, while the DL lag specification can be an integer or a dictionary of integers. Other options such as trend, seasonal, and causal are identical.
Below we select a model and then using the class method from_ardl to construct the UECM. The parameter estimates prefixed with D. are differences.
End of explanation
ecm_res.ci_summary()
Explanation: Cointegrating Relationships
Because the focus is on the long-run relationship, the results of UECM model fits contains a number of properties that focus on the long-run relationship. These are all prefixed ci_, for cointegrating. ci_summary contains the normalized estimates of the cointegrating relationship and associated estimated values.
End of explanation
_ = ecm_res.ci_resids.plot(title="Cointegrating Error")
Explanation: ci_resids contains the long-run residual, which is the error that drives future changes in $\Delta Y_t$.
End of explanation
ecm = UECM(data.lrm, 2, data[["lry", "ibo", "ide"]], 2, seasonal=True)
seasonal_ecm_res = ecm.fit()
seasonal_ecm_res.summary()
Explanation: Seasonal Dummies
Here we add seasonal terms, which appear to be statistically significant.
End of explanation
seasonal_ecm_res.ci_summary()
Explanation: All deterministic terms are included in the ci_ prefixed terms. Here we see the normalized seasonal effects in the summary.
End of explanation
_ = seasonal_ecm_res.ci_resids.plot(title="Cointegrating Error with Seasonality")
Explanation: The residuals are somewhat more random in appearance.
End of explanation
greene = pd.read_fwf("http://www.stern.nyu.edu/~wgreene/Text/Edition7/TableF5-2.txt")
greene.head()
Explanation: The relationship between Consumption and Growth
Here we look at an example from Greene's Econometric analysis which focuses on the long-run relationship between consumption and growth. We start by downloading the raw data.
Greene, W. H. (2000). Econometric analysis 4th edition. International edition, New Jersey: Prentice Hall, 201-215.
End of explanation
index = pd.to_datetime(
greene.Year.astype("int").astype("str")
+ "Q"
+ greene.qtr.astype("int").astype("str")
)
greene.index = index
greene.index.freq = greene.index.inferred_freq
greene.head()
Explanation: We then transform the index to be a pandas DatetimeIndex so that we can easily use seasonal terms.
End of explanation
greene["c"] = np.log(greene.realcons)
greene["g"] = np.log(greene.realgdp)
Explanation: We defined g as the log of real gdp and c as the log of real consumption.
End of explanation
sel_res = ardl_select_order(
greene.c, 8, greene[["g"]], 8, trend="c", seasonal=True, ic="aic"
)
ardl = sel_res.model
ardl.ardl_order
res = ardl.fit(use_t=True)
res.summary()
Explanation: Lag Length Selection
The selected model contains 5 lags of consumption and 2 of growth (0 and 1). Here we include seasonal terms although these are not significant.
End of explanation
sel_res = ardl_select_order(greene.c, 8, greene[["g"]], 8, trend="c", ic="aic")
uecm = UECM.from_ardl(sel_res.model)
uecm_res = uecm.fit()
uecm_res.summary()
Explanation: from_ardl is a simple way to get the equivalent UECM specification. Here we rerun the selection without the seasonal terms.
End of explanation
uecm_res.ci_summary()
_ = uecm_res.ci_resids.plot(title="Cointegrating Error")
Explanation: We see that for every % increase in consumption, we need a 1.05% increase in gdp. In other words, the saving rate is estimated to be around 5%.
End of explanation
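# Worked check of the statement above (added note): a long-run coefficient of roughly 1.05
# on g means c is about g/1.05, i.e. consumption is ~95% of income in the long run,
# which is the ~5% saving rate quoted.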
uecm = UECM(greene.c, 2, greene[["g"]], 1, trend="c")
uecm_res = uecm.fit()
uecm_res.summary()
Explanation: Direct Specification of UECM models
UECM can be used to directly specify model lag lengths.
End of explanation
uecm_res.ci_summary()
Explanation: The changes in the lag structure make little difference in the estimated long-run relationship.
End of explanation
ecm = UECM(data.lrm, 3, data[["lry", "ibo", "ide"]], 3, trend="c")
ecm_fit = ecm.fit()
bounds_test = ecm_fit.bounds_test(case=4)
bounds_test
bounds_test.crit_vals
Explanation: Bounds Testing
UECMResults exposes the bounds test of Pesaran, Shin, and Smith (2001). This test facilitates testing whether there is a level relationship between a set of variables without identifying which variables are I(1). This test provides two sets of critical and p-values. If the test statistic is below the critical value for the lower bound, then there appears to be no levels relationship irrespective of the order of integration of the $X$ variables. If it is above the upper bound, then there appears to be a levels relationship again, irrespective of the order of integration of the $X$ variables. There are 5 cases covered in the paper that include different combinations of deterministic regressors in the model or the test.
$$\Delta Y_{t}=\delta_{0} + \delta_{1}t + Z_{t-1}\beta + \sum_{j=0}^{P}\Delta X_{t-j}\Gamma + \epsilon_{t}$$
where $Z_{t-1}$ includes both $Y_{t-1}$ and $X_{t-1}$.
The cases determine which deterministic terms are included in the model and which are tested as part of the test.
No deterministic terms
Constant included in both the model and the test
Constant included in the model but not in the test
Constant and trend included in the model, only trend included in the test
Constant and trend included in the model, neither included in the test
Here we run the test on the Danish money demand data set. Here we see the test statistic is above the 95% critical value for both the lower and upper.
Pesaran, M. H., Shin, Y., & Smith, R. J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of applied econometrics, 16(3), 289-326.
End of explanation
ecm = UECM(data.lrm, 3, data[["lry", "ibo", "ide"]], 3, trend="c")
ecm_fit = ecm.fit()
bounds_test = ecm_fit.bounds_test(case=3)
bounds_test
Explanation: Case 3 also rejects the null of no levels relationship.
End of explanation |
10,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4. Model Training
This notebook demonstrates how to train a Propensity Model using BigQuery ML.
Requirements
Input features used for training need to be stored as a BigQuery table. This can be done using 2. ML Data Preparation Notebook.
Install and import required modules
Step1: Set parameters
Step2: Next, let's configure modeling options.
Model and features configuration
Model options can be configured in detail based on BigQuery ML specifications
listed in The CREATE MODEL statement.
NOTE
Step3: Train the model
First, we initialize PropensityModel with config parameters.
Step4: Next cell triggers model training job in BigQuery which takes some time to finish depending on dataset size and model complexity. Set verbose=True, if you want to verify training query details.
Step5: Following cell allows you to see detailed information about the input features used to train a model. It provides following columns
Step6: Evaluate the model
This section helps to do quick model evaluation to get following model metrics | Python Code:
# Uncomment to install required python modules
# !sh ../utils/setup.sh
# Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
from gps_building_blocks.cloud.utils import bigquery as bigquery_utils
from utils import model
from utils import helpers
Explanation: 4. Model Training
This notebook demonstrates how to train a Propensity Model using BigQuery ML.
Requirements
Input features used for training need to be stored as a BigQuery table. This can be done using 2. ML Data Preparation Notebook.
Install and import required modules
End of explanation
configs = helpers.get_configs('config.yaml')
dest_configs, run_id_configs = configs.destination, configs.run_id
# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of the BigQuery dataset
DATASET_NAME = dest_configs.dataset_name
# To distinguish the separate runs of the training pipeline
RUN_ID = run_id_configs.train
# BigQuery table name containing model development dataset
FEATURES_DEV_TABLE = f'features_dev_table_{RUN_ID}'
# BigQuery table name containing model testing dataset
FEATURES_TEST_TABLE = f'features_test_table_{RUN_ID}'
# Output model name to save in BigQuery
MODEL_NAME = f'propensity_model_{RUN_ID}'
bq_utils = bigquery_utils.BigQueryUtils(project_id=PROJECT_ID)
Explanation: Set parameters
End of explanation
# Read in Features table schema to select feature names for model training
sql = ("SELECT column_name "
f"FROM `{PROJECT_ID}.{DATASET_NAME}`.INFORMATION_SCHEMA.COLUMNS "
f"WHERE table_name='{FEATURES_DEV_TABLE}';")
print(sql)
features_schema = bq_utils.run_query(sql).to_dataframe()
# Columns to remove from the feature list
to_remove = ['window_start_ts', 'window_end_ts', 'snapshot_ts', 'user_id',
'label', 'key', 'data_split']
# Selected features for model training
training_features = [v for v in features_schema['column_name']
if v not in to_remove]
print('Number of training features:', len(training_features))
print(training_features)
# Set parameters for AUTOML_CLASSIFIER model
FEATURE_COLUMNS = training_features
TARGET_COLUMN = 'label'
params = {
'model_path': f'{PROJECT_ID}.{DATASET_NAME}.{MODEL_NAME}',
'features_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{FEATURES_DEV_TABLE}',
'feature_columns': FEATURE_COLUMNS,
'target_column': TARGET_COLUMN,
'MODEL_TYPE': 'AUTOML_CLASSIFIER',
'BUDGET_HOURS': 1.0,
# Enable data_split_col if you want to use custom data split.
# Details on AUTOML data split column:
# https://cloud.google.com/automl-tables/docs/prepare#split
# 'DATA_SPLIT_COL': 'data_split',
'OPTIMIZATION_OBJECTIVE': 'MAXIMIZE_AU_ROC'
}
Explanation: Next, let's configure modeling options.
Model and features configuration
Model options can be configured in detail based on BigQuery ML specifications
listed in The CREATE MODEL statement.
NOTE: Propensity modeling supports only the following four types of models available in BigQuery ML:
- LOGISTIC_REG
- AUTOML_CLASSIFIER
- BOOSTED_TREE_CLASSIFIER
- DNN_CLASSIFIER
In order to use specific model options, you can add options to the following configuration exactly the same as listed in the CREATE MODEL statement. For example, if you want to train AUTOML_CLASSIFIER with BUDGET_HOURS=1, you can specify it as:
python
params = {
'model_type': 'AUTOML_CLASSIFIER',
'budget_hours': 1
}
End of explanation
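# Hypothetical alternative (added sketch, not used below): any option from the BigQuery ML
# CREATE MODEL statement can be passed the same way, e.g. a boosted-tree configuration.
# boosted_tree_params = {
#     'model_path': f'{PROJECT_ID}.{DATASET_NAME}.{MODEL_NAME}_bt',
#     'features_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{FEATURES_DEV_TABLE}',
#     'feature_columns': FEATURE_COLUMNS,
#     'target_column': TARGET_COLUMN,
#     'MODEL_TYPE': 'BOOSTED_TREE_CLASSIFIER',
#     'MAX_ITERATIONS': 50,
# }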
propensity_model = model.PropensityModel(bq_utils=bq_utils,
params=params)
Explanation: Train the model
First, we initialize PropensityModel with config parameters.
End of explanation
propensity_model.train(verbose=False)
Explanation: The next cell triggers a model training job in BigQuery, which takes some time to finish depending on dataset size and model complexity. Set verbose=True if you want to verify training query details.
End of explanation
propensity_model.get_feature_info()
Explanation: The following cell allows you to see detailed information about the input features used to train a model. It provides the following columns:
- input — The name of the column in the input training data.
- min — The sample minimum. This column is NULL for non-numeric inputs.
- max — The sample maximum. This column is NULL for non-numeric inputs.
- mean — The average. This column is NULL for non-numeric inputs.
- stddev — The standard deviation. This column is NULL for non-numeric inputs.
- category_count — The number of categories. This column is NULL for non-categorical columns.
- null_count — The number of NULLs.
For more details refer to help page.
End of explanation
# Model performance on the model development dataset on which the final
# model has been trained
EVAL_TABLE_NAME = FEATURES_DEV_TABLE
eval_params = {
'eval_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{EVAL_TABLE_NAME}',
'threshold': 0.5
}
propensity_model.evaluate(eval_params, verbose=False)
# Model performance on the held out test dataset
EVAL_TABLE_NAME = FEATURES_TEST_TABLE
eval_params = {
'eval_table_path': f'{PROJECT_ID}.{DATASET_NAME}.{EVAL_TABLE_NAME}',
'threshold': 0.5
}
propensity_model.evaluate(eval_params, verbose=False)
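# Sketch for the TODO noted below (an addition; assumes `label` is a boolean/0-1 column):
# the share of positive examples in the test table can serve as a probability threshold.
positive_rate_sql = (f"SELECT AVG(CAST(label AS INT64)) AS positive_rate "
f"FROM `{PROJECT_ID}.{DATASET_NAME}.{FEATURES_TEST_TABLE}`;")
positive_rate = bq_utils.run_query(positive_rate_sql).to_dataframe()['positive_rate'][0]
print('Proportion of positive examples:', positive_rate)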
Explanation: Evaluate the model
This section helps to do a quick model evaluation to get the following model metrics:
recall
accuracy
f1_score
log_loss
roc_auc
Two optional parameters can be specified for evaluation:
eval_table: BigQuery table containing evaluation dataset
threshold: Custom probability threshold to be used for evaluation (to binarize the predictions). Default value is 0.5.
If neither of these options are specified, the model is evaluated using evaluation dataset split during training with default threshold of 0.5.
NOTE: This evaluation provides basic model performance metrics. For thorough evaluation refer to the 5. Model evaluation notebook.
TODO(): Add sql code to calculate the proportion of positive examples in the evaluation dataset to be used as the threshold.
End of explanation |
10,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bowl
Initial Gaussian hump sends out ripples in a symmetric bowl-shaped pond. Refinement is focused near one edge of the pond.
Create topography files and data files
Step1: Run code in serial mode (will work, even if code is compiled with MPI)
Step2: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation.)
Step3: Create PNG files for web-browser viewing, or animation.
Step4: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.
Step5: Plot figure 0, showing the entire solution. To see detailed plotting parameters, see file make_plots.py.
Step6: Then plot figures 10 and 11 to compare the solution in the two refined regions. | Python Code:
%run make_topo.py
%run make_data.py
Explanation: Bowl
Initial Gaussian hump sends out ripples in a symmetric bowl-shaped pond. Refinement is focused near one edge of the pond.
Create topography files and data files
End of explanation
!bowl
Explanation: Run code in serial mode (will work, even if code is compiled with MPI)
End of explanation
#!mpirun -n 4 bowl
Explanation: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation.)
End of explanation
%run make_plots.py
Explanation: Create PNG files for web-browser viewing, or animation.
End of explanation
%pylab inline
import glob
from matplotlib import image
from clawpack.visclaw.JSAnimation import IPython_display
from matplotlib import animation
def init():
im.set_data(image.imread(filenames[0]))
return im,
def animate(i):
image_i=image.imread(filenames[i])
im.set_data(image_i)
return im,
Explanation: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.
End of explanation
figno = 0
fname = '_plots/*fig' + str(figno) + '.png'
filenames = sorted(glob.glob(fname))
fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))
animation.FuncAnimation(fig, animate, init_func=init,
frames=len(filenames), interval=500, blit=True)
Explanation: Plot figure 0, showing the entire solution. To see detailed plotting parameters, see file make_plots.py.
End of explanation
figno = 10
fname = '_plots/*fig' + str(figno) + '.png'
filenames = sorted(glob.glob(fname))
fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))
animation.FuncAnimation(fig, animate, init_func=init,
frames=len(filenames), interval=500, blit=True)
Explanation: Then plot figures 10 and 11 to compare the solution in the two refined regions.
End of explanation |
10,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How the FFT (Fast Fourier Transform) works in Python and how to use it. A practical guide.
Motivation
I wrote this in order to have a future reference on how the FFT works in Python. Basically every time that I try to do some frequency-related analysis I have to rethink all the quantities. Therefore I decided to write here a basic implementation with all the details.
Implementation Number 1
Here we just show how to go from a signal composed of a handful of frequencies to the FFT that reveals those frequencies in the proper units.
Step1: Size of the FFT
The FFT is going to be of the size N_to_use. Using a power of 2 here allows us to do the calculation faster. Otherwise zero padding is used.
Step2: Analysis of the sampling rate on the limits of what the FFT can tell us.
The Smallest Possible Frequency
The smallest (slowest) frequency that can be resolved (i.e. the longest period of the signal) is proportional to the sampling rate and inversely proportional to the number of points that we use. This makes sense because the higher the sampling rate, the more points we will require to fill one period and therefore get information about the signal. On the other hand, the bigger the number of points, the easier it is to cover one period of the signal, and therefore we can get information for smaller frequencies.
The Nyquist Frequency or the Biggest Possible Frequency
This is more straightforward. The bigger the sampling frequency, the higher the range of frequencies that we can get information from.
Step3: A word about frequencies units and the pi value
When we multiply the frequency and time by $2\pi$ in the argument of the trigonometric function, we are effectively requiring that the natural period of the sine be equal to one. Otherwise we would need $2\pi$ units to go from one period to the next.
In other words we are doing this so we can talk about the frequency in ordinary terms (1 / s) instead of angular units.
See angular frequency vs cycles per second in order to further understand this point.
Step4: Final Comments
So we see that the FFT gives back the frequencies at the proper values, which we know because we defined them to be that way. Finally, in this kind of analysis one usually ignores the negative frequencies and only keeps the positive values. This can be achieved by slicing the frequencies and the transform appropriately. The frequencies run from 0 up to the middle of the vector and then show the negative frequencies, so we take only the first half of the frequency vector and, consequently, of the transform vector.
Step5: Implementation Number 2
Here I show how we can calculate the Fourier transform, then the inverse that gets us back to the original signal, and how the sampling rate and the number of points that we use to calculate the FFT affect the units of the inverse.
Step6: Here we will give the frequency in terms of the period for ease of interpretation.
Step7: About the Period of the Recovered Signal
The recovered signal is going to repeat itself after sampling_rate * Period which in this case is | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sampling_rate = 20 # This quantity is on Hertz
step = 1.0 / sampling_rate
Tmax = 20.0
time = np.arange(0, Tmax, step)
N_to_use = 1024 # Should be a power of two.
Explanation: How the FFT (Fast Fourier Transform) works in Python and how to use it. A practical guide.
Motivation
I wrote this in order to have a future reference for how the FFT works in Python. Basically every time I try to do some frequency-related analysis I have to rethink all the quantities. Therefore I decided to write here a basic implementation with all the details.
Implementation Number 1
Here we just show how to go from a signal composed of a handful of frequencies to the FFT that reveals those frequencies in the proper units.
End of explanation
print("The smallest frequency that the FFT will discern: ", sampling_rate / N_to_use)
print("Nyquist Frequency: ", sampling_rate / 2)
Explanation: Size of the FFT
The FFT is going to be of size N_to_use. Using a power of 2 here allows us to do the calculation faster. If the signal has fewer points than N_to_use, zero padding is used.
End of explanation
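# Note on the zero padding mentioned above: the signal has fewer samples than N_to_use,
# so np.fft.fft(y, N_to_use) will pad it with zeros up to N_to_use points.
print("Samples in the signal:", len(time), "-> FFT length:", N_to_use)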
f1 = 1.0
f2 = 2.0
f3 = 4.0 # All of this on Hertz
y1 = np.sin(2 * np.pi * f1 * time)
y2 = np.sin(2 * np.pi * f2 * time)
y3 = np.sin(2 * np.pi * f3 * time)
y = y1 + y2 + y3
transform = np.fft.fft(y, N_to_use)
# We get the proper frequencies for the FFT
frequencies = np.fft.fftfreq(N_to_use, d=step)
Explanation: Analysis of the sampling rate on the limits of what the FFT can tell us.
The Smallest Possible Frequency
The smallest (or slowest) frequency that the FFT can resolve is proportional to the sampling rate and inversely proportional to the number of points that we use. This makes sense because the higher the sampling rate, the more points we need to fill one period and therefore get information about the signal. On the other hand, the more points we have, the easier it is to cover one period of the signal, and therefore we can get information about smaller frequencies.
The Nyquist Frequency or the Biggest Possible Frequency
This is more straightforward. The bigger the sampling frequency, the higher the range of frequencies that we can get information from.
End of explanation
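# A small illustration of the Nyquist limit discussed above (a sketch, not part of the
# original notebook): with sampling_rate = 20 Hz the Nyquist frequency is 10 Hz, so a
# 12 Hz sine cannot be represented and shows up aliased near 20 - 12 = 8 Hz instead.
f4 = 12.0
y4 = np.sin(2 * np.pi * f4 * time)
transform4 = np.fft.fft(y4, N_to_use)
print("The 12 Hz component appears at about",
      abs(frequencies[np.argmax(np.abs(transform4))]), "Hz")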
%matplotlib inline
plt.plot(frequencies, np.abs(transform))
plt.title('Fast Fourier Transform')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Power Spectrum')
plt.xlim([-6, 6])
Explanation: A word about frequencies units and the pi value
When we multiply the frequency and time by $2\pi$ in the argument of the trigonometric function, we are effectively requiring that the natural period of the sine be equal to one. Otherwise we would need $2\pi$ units to go from one period to the next.
In other words we are doing this so we can talk about the frequency in ordinary terms (1 / s) instead of angular units.
See angular frequency vs cycles per second in order to further understand this point.
End of explanation
aux = int(N_to_use / 2)
freq_aux = frequencies[0: aux]
plt.plot(freq_aux, np.abs(transform[:aux]))
plt.title('Fast Fourier Transform')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Power Spectrum')
plt.xlim([0, 6])
Explanation: Final Comments
So we see that the FFT gives back the frequencies at the proper values, which we know because we defined them to be that way. Finally, in this kind of analysis one usually ignores the negative frequencies and only keeps the positive values. This can be achieved by slicing the frequencies and the transform appropriately. The frequencies run from 0 up to the middle of the vector and then show the negative frequencies, so we take only the first half of the frequency vector and, consequently, of the transform vector.
End of explanation
sampling_rate = 100 # This quantity is on Hertz
step = 1.0 / sampling_rate
Tmax = 20.0
time = np.arange(0, Tmax, step)
N_to_use = 1024 * 2 # Should be a power of two.
Explanation: Implementation Number 2
Here I show how we can calculate the Fourier transform, then the inverse that gets us back to the original signal, and how the sampling rate and the number of points that we use to calculate the FFT affect the units of the inverse.
End of explanation
T = 10.0 # Period
f = 1.0 / T # Frequency relationship
y = np.sin(2 * np.pi * f * time)
transform = np.fft.fft(y, N_to_use)
inverse = np.fft.ifft(transform, N_to_use)
time_inverse = np.arange(0, N_to_use * step, step)
# Now we plot this.
plt.subplot(1, 2, 1)
plt.title('Original Signal')
plt.plot(time, y)
plt.subplot(1, 2, 2)
plt.title('Recovered Signal')
plt.plot(time_inverse, inverse.real)
Explanation: Here we will give the frequency in terms of the period for ease of interpretation.
End of explanation
sampling_rate * T
Explanation: About the Period of the Recovered Signal
The recovered signal is going to repeat itself after sampling_rate * Period which in this case is:
End of explanation |
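# A quick check of the statement above: with sampling_rate = 100 Hz and T = 10 s the
# recovered signal should repeat every 1000 samples.
period_in_samples = int(sampling_rate * T)
print(np.allclose(inverse.real[:period_in_samples],
                  inverse.real[period_in_samples:2 * period_in_samples]))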
10,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives
Step1: Let us make sure that the artifact store exists
Step2: Creating the KFP CLI builder for Vertex AI
Exercise
In the cell below, write a docker file that
* Uses gcr.io/deeplearning-platform-release/base-cpu as base image
* Install the python packages kfp with version 1.6.6 and google-cloud-aiplatform with version 1.3.0
* Starts /bin/bash as entrypoint
Step3: Build the image and push it to your project's Container Registry.
Step4: Exercise
In the cell below, use gcloud builds to build the kfp-cli-vertex Docker image and push it to the project gcr.io registry.
Step5: Understanding the Cloud Build workflow.
Exercise
In the cell below, you'll complete the cloudbuild_vertex.yaml file describing the CI/CD workflow and prescribing how environment specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex
Step6: Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command. | Python Code:
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = 'us-central1'
ARTIFACT_STORE = f'gs://{PROJECT_ID}-vertex'
Explanation: CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives:
1. Learn how to create a custom Cloud Build builder to pilot Vertex AI Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger that starts a new run of the Kubeflow Pipeline
In this lab you will walk through authoring of a Cloud Build CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Configuring environment settings
End of explanation
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
Explanation: Let us make sure that the artifact store exists:
End of explanation
%%writefile kfp-cli/Dockerfile
# TODO
Explanation: Creating the KFP CLI builder for Vertex AI
Exercise
In the cell below, write a Dockerfile that
* Uses gcr.io/deeplearning-platform-release/base-cpu as base image
* Installs the Python packages kfp version 1.6.6 and google-cloud-aiplatform version 1.3.0
* Starts /bin/bash as entrypoint
End of explanation
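%%writefile kfp-cli/Dockerfile
# One possible completion of the exercise above; the base image, package versions, and
# entrypoint come straight from the instructions.
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install kfp==1.6.6 google-cloud-aiplatform==1.3.0
ENTRYPOINT ["/bin/bash"]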
KFP_CLI_IMAGE_NAME = 'kfp-cli-vertex'
KFP_CLI_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest'
KFP_CLI_IMAGE_URI
Explanation: Build the image and push it to your project's Container Registry.
End of explanation
!gcloud builds # COMPLETE THE COMMAND
Explanation: Exercise
In the cell below, use gcloud builds to build the kfp-cli-vertex Docker image and push it to the project gcr.io registry.
End of explanation
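# One possible completion of the exercise above: build the kfp-cli image with Cloud Build
# and push it to the project registry (the timeout value is just a reasonable default).
!gcloud builds submit --timeout 15m --tag {KFP_CLI_IMAGE_URI} kfp-cli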
%%writefile cloudbuild_vertex.yaml
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
# file except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
steps:
# Build the trainer image
- name: # TODO
args: # TODO
dir: # TODO
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
dsl-compile-v2 # TODO
env:
- 'PIPELINE_ROOT=gs://$PROJECT_ID-vertex/pipeline'
- 'PROJECT_ID=$PROJECT_ID'
- 'REGION=$_REGION'
- 'SERVING_CONTAINER_IMAGE_URI=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'
- 'TRAINING_CONTAINER_IMAGE_URI=gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest'
- 'TRAINING_FILE_PATH=gs://$PROJECT_ID-vertex/data/training/dataset.csv'
- 'VALIDATION_FILE_PATH=gs://$PROJECT_ID-vertex/data/validation/dataset.csv'
dir: pipeline_vertex
# Run the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
python kfp-cli_vertex/run_pipeline.py # TODO
# Push the images to Container Registry
# TODO: List the images to be pushed to the project Docker registry
images: # TODO
# This is required since the pipeline run overflows the default timeout
timeout: 10800s
Explanation: Understanding the Cloud Build workflow.
Exercise
In the cell below, you'll complete the cloudbuild_vertex.yaml file describing the CI/CD workflow and prescribing how environment-specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex:
1. Builds the trainer image
1. Compiles the pipeline
1. Uploads and runs the pipeline to the Vertex AI Pipeline environment
1. Pushes the trainer to your project's Container Registry
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI.
End of explanation
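# A sketch of how the TODOs in cloudbuild_vertex.yaml might be filled in. The directory
# name trainer_image_vertex, the compiled pipeline file name, and the run_pipeline.py
# flags are assumptions -- check the lab repository for the exact values.
#
# Build the trainer image:
#   - name: 'gcr.io/cloud-builders/docker'
#     args: ['build', '-t', 'gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest', '.']
#     dir: trainer_image_vertex
#
# Compile the pipeline (inside the kfp-cli-vertex builder):
#   dsl-compile-v2 --py pipeline.py --output covertype_kfp_pipeline.json
#
# Run the pipeline (flags depend on how run_pipeline.py parses its arguments):
#   python kfp-cli_vertex/run_pipeline.py --project_id=$PROJECT_ID --region=$_REGION \
#       --template_path=pipeline_vertex/covertype_kfp_pipeline.json
#
# Push the trainer image to the project registry:
#   images: ['gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest']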
SUBSTITUTIONS= f'_REGION={REGION}'
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
Explanation: Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command.
End of explanation |
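# The learning objectives also mention a GitHub trigger. One way to create it from the
# command line is sketched below; the repo owner/name are placeholders and the exact
# flag names may vary between gcloud versions.
# !gcloud beta builds triggers create github \
#     --repo-owner=<GITHUB_USER> --repo-name=<REPO_NAME> \
#     --tag-pattern=".*" --build-config=cloudbuild_vertex.yaml \
#     --substitutions=_REGION={REGION}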
10,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas 소개 2
GonsSu24 내용에 이어서 Pandas 라이브러리를 소개한다.
먼저 GongSu24를 임포트 한다.
Step1: 색인(Index) 클래스
Pandas에 정의된 색인(Index) 클래스는 Series와 DataFrame 자료형의 행과 열을 구분하는 이름들의 목록을 저장하는 데에 사용된다.
Series 객체에서 사용되는 Index 객체
index 속성
아래와 같이 Series 객체를 생성한 후에 index를 확인해보자.
Step2: index의 자료형이 Index 클래스의 객체임을 확인할 수 있다.
Step3: Index 객체에 대해 인덱싱과 슬라이싱을 리스트의 경우처럼 활용할 수 있다.
Step4: Index 객체는 불변(immutable) 자료형이다.
Step5: 색인 객체는 변경될 수 없기에 자료 구조 사이에서 안전하게 공유될 수 있다.
Step6: 앞서 선언된 an_index를 새로운 Series 나 DataFrame 을 생성하는 데에 사용할 수 있으며, 사용된 index가 무엇인지를 확인할 수도 있다.
Step7: DataFrame 객체에서 사용되는 Index 객체
index 속성
columns 속성
Step8: columns와 index 속성 모두 Index 객체이다.
Step9: in 연산자 활용하기
in 연산자를 활용하여 index 와 columns에 사용된 행과 열의 이름의 존재여부를 확인할 수 있다.
Step10: 각각의 색인은 담고 있는 데이터에 대한 정보를 취급하는 여러 가지 메서드와 속성을 가지고 있다. [표 5-3]을 참고하자.
Series와 DataFrame 관련 연산 및 주요 메소드
Series나 DataFrame 형식으로 저장된 데이터를 다루는 주요 연산 및 기능을 설명한다.
재색인(reindex) 메소드
reindex() 메소드는 지정된 색인을 사용해서 새로운 Series나 DataFrame 객체를 생성한다.
Series의 경우 재색인
Step11: reindex() 메소드를 이용하여 인덱스를 새로 지정할 수 있다.
주의
Step12: 누락된 값을 지정된 값으로 채울 수도 있다.
Step13: method 옵션
시계열(time series) 등과 데이터 처럼 어떠 순서에 따라 정렬된 데이터를 재색인할 때
보간법을 이용하여 누락된 값들을 채워 넣어야 하는 경우가 있다.
이런 경우 method 옵션을 이용하며, ffill, bfill, nearest 등을 옵션값으로 활용한다.
Step14: DataFrame의 경우 재색인
행과 열에 대해 모두 사용이 가능하다.
Step15: index 속성의 재색인은 Series의 경우와 동일하다.
Step16: columns 속성의 재색인은 키워드(예약어)를 사용한다.
Step17: method 옵션을 이용한 보간은 행 대해서만 이루어진다.
Step18: method='nearest'는 인덱스가 모두 숫자인 경우에만 적용할 수 있다.
Step19: 주의
reindex는 기존 자료를 변경하지 않는다.
Step20: loc 메소드를 이용한 재색인
loc 메소드를 이용하여 재색인이 가능하다.
Step21: 5.2.2 하나의 로우 또는 칼럼 제외하기
Step22: DataFrame에서는 로우와 칼럼 모두에서 값을 삭제할 수 있다.
Step23: 행 삭제
Step24: drop() 메소드는 기존의 자료를 건드리지 않는다.
Step25: 5.2.2 하나의 로우 또는 칼럼 삭제하기
Step26: 슬라이싱 문법으로 선택된 영역에 값을 대입하는 것은 예상한 대로 동작한다.
Step27: 앞에서 확인한대로 색인으로 DataFrame에서 칼럼의 값을 하나 이상 가져올 수 있다.
Step28: 슬라이싱으로 로우를 선택하거나 불리언 배열로 칼럼을 선택할 수 있다.
Step29: 이 문법에 모순이 있다고 생각할 수 있지만, 실용성에 기인한 것일 뿐이다.
또 다른 사례는 스칼라 비교를 통해 생성된 불리언 DataFrame을 사용해서 값을 선택하는 것이다.
Step30: 이 예제는 DataFrame을 ndarray와 문법적으로 비슷하게 보이도록 의도한 것이다.
DataFrame의 칼럼에 대해 라벨로 색인하는 방법으로, 특수한 색인 필드인 ix를 소개한다. ix는 NumPy와 비슷한 방식에 추가적으로 축의 라벨을 사용하여 DataFrame의 로우와 칼럼을 선택할 수 있도록 한다. 앞에서 언급했듯이 이 방법은 재색인을 좀 더 간단하게 할 수 있는 방법이다.
Step31: 지금까지 살펴봤듯이 pandas 객체에서 데이터를 선택하고 재배열하는 방법은 여러 가지가 있다. [표 5-6]에 다양한 방법을 정리해두었다. 나중에 살펴볼 계층적 색인을 이용하면 좀 더 다양한 방법을 사용할 수 있다.
5.2.4 산술연산과 데이터 정렬
pandas에서 중요한 기능은 색인이 다른 객체 간의 산술연산이다. 객체를 더할 때 짝이 맞지 않는 색인이 있다면 결과에 두 색인이 통합된다.
Step32: 서로 겹치는 색인이 없다면 데이터는 NA 값이 된다. 산술연산 시 누락된 값은 전파되며, DataFrame에서는 로우와 칼럼 모두에 적용된다.
Step33: 산술연산 메서드에 채워 넣을 값 지정하기
서로 다른 색인을 가지는 객체 간의 산술연산에서 존재하지 않는 축의 값을 특수한 값( 0 같은)으로 지정하고 싶을 때는 다음과 같이 할 수 있다.
Step34: 이 둘을 더했을 때 겹치지 않는 부분의 값이 NA값이 된 것을 알 수 있다.
df1의 add메서드로 df2와 fill_value 값을 인자로 전달한다.
Step35: Series나 DataFrame을 재색인할 때 역시 fill_value를 지정할 수 있다.
DataFrame과 Series 간의 연산
NumPy 배열의 연산처럼 DataFrame과 Series 간의 연산도 잘 정의되어 있다. 먼저 2차원 배열과 그 배열 중 한 칼럼의 차이에 대해서 생각할 수 있는 예제를 살펴보자.
Step36: 이 예제는 브로드캐스팅에 대한 예제로 자세한 내용은 12장에서 살펴볼 것이다. DataFrame과 Series간의 연산은 이와 유사하다.
Step37: 기본적으로 DataFrame과 Series 간의 산술 연산은 Series의 색인을 DataFrame의 칼럼에 맞추고 아래 로우로 전파한다.
Step38: 만약 색인 값을 DataFrame의 칼럼이나 Series의 색인에서 찾을 수 없다면 그 객체는 형식을 맞추기 위해 재색인된다.
Step39: 만약 각 로우에 대해 연산을 수행하고 싶다면 산술연산 메서드를 사용하면 된다.
Step40: 5.2.5 함수 적용과 매핑
pandas 객체에도 NumPy의 유니버설 함수( 배열의 각 원소에 적용되는 메서드)를 적용할 수 있다.
Step41: 자주 사용되는 또 다른 연산은 각 로우나 칼럼의 1차원 배열에 함수를 적용하는 것이다.
DataFrame의 apply 메서드를 통해 수행할 수 있다.
Step42: 배열의 합계나 평균 같은 일반적인 통계는 DataFrame의 메서드로 있으므로 apply 메서드를 사용해야만 하는 것은 아니다.
apply 메서드에 전달된 함수는 스칼라 값을 반환할 필요가 없으며, Series 또는 여러 값을 반환해도 된다.
Step43: 배열의 각 원소에 적용되는 파이썬의 함수를 사용할 수도 있다. frame 객체에서 실수 값을 문자열 포맷으로 변환하고 싶다면 applymap을 이용해서 다음과 같이 해도 된다.
Step44: 이 메서드의 이름이 applymap인 이유는 Series가 각 원소에 적용할 함수를 지정하기 위한 map 메서드를 가지고 있기 때문이다.
Step45: 5.2.6 정렬과 순위
어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반화하는 sort_index 메서드를 사용하면 된다.
Step46: 5.2.6
어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반환하는 sort_index 메서드를 사용하면 된다.
Step47: DataFrame은 로우나 칼럼 중 하나의 축을 기준으로 정렬할 수 있다.
Step48: 데이터는 기본적으로 오름차순으로 정렬되지만 내림차순으로 정렬할 수도 있다.
Step49: Series 객체를 값에 따라 정렬하고 싶다면 sort_values 메서드를 사용한다.
Step50: 정렬할 때 비어있는 값은 기본적으로 Series 객체에서 가장 마지막에 위치한다.
obj = Series([4, np.nan, 7, np.nan, -3, 2])
obj.sort_values()
DataFrame에서는 하나 이상의 칼럼에 있는 값으로 정렬이 필요할 수 있다. 이럴 때는 by 옵션에 필요한 칼럼의 이름을 넘기면 된다.
Step51: 여러 개의 칼럼을 정렬하려면 칼럼의 이름이 담긴 리스트를 전달하면 된다.
Step52: 순위는 정렬과 거의 흡사하며, 1부터 배열의 유효한 데이터 개수까지 순위를 매긴다. 또한 순위는 numpy.argsort에서 반환하는 간접 정렬 색인과 유사한데, 동률인 순위를 처리하는 방식이 다르다. 기본적으로 Series와 DataFrame의 rank 메서드는 동점인 항목에 대해서는 평균 순위를 매긴다.
Step53: 데이터 상에서 나타나는 순서에 따라 순위를 매길 수도 있다.
Step54: 내림차순으로 순위를 매길 수도 있다.
Step55: 5.2.7 중복 색인
지금까지 살펴본 모든 예제는 모두 축의 이름(색인 값)이 유일했다.
pandas의 많은 함수(reindex 같은) 에서 색인 값은 유일해야 하지만 강제 사항은 아니다. 이제 색인 값이 중복된 Series객체를 살펴보자.
Step56: 색인의 is_unique 속성은 해당 값이 유일한지 아닌지 알려준다.
Step57: 중복되는 색인 값이 있으면 색인을 이용한 데이터 선택은 다르게 동작하고 하나의 Series 객체를 반환한다. 하지만 중복되는 색인 값이 없으면 색인을 이용한 데이터 선택은 스칼라 값을 반환한다.
Step58: DataFrame에서 로우를 선택하는 것도 동일하다. | Python Code:
from GongSu24_Pandas_Introduction_1 import *
Explanation: Pandas 소개 2
GonsSu24 내용에 이어서 Pandas 라이브러리를 소개한다.
먼저 GongSu24를 임포트 한다.
End of explanation
s6 = Series(range(3), index=['a', 'b', 'c'])
s6
Explanation: 색인(Index) 클래스
Pandas에 정의된 색인(Index) 클래스는 Series와 DataFrame 자료형의 행과 열을 구분하는 이름들의 목록을 저장하는 데에 사용된다.
Series 객체에서 사용되는 Index 객체
index 속성
아래와 같이 Series 객체를 생성한 후에 index를 확인해보자.
End of explanation
s6_index = s6.index
s6_index
Explanation: index의 자료형이 Index 클래스의 객체임을 확인할 수 있다.
End of explanation
s6_index[2]
s6_index[1:]
Explanation: Index 객체에 대해 인덱싱과 슬라이싱을 리스트의 경우처럼 활용할 수 있다.
End of explanation
s6_index[1] = 'd'
Explanation: Index 객체는 불변(immutable) 자료형이다.
End of explanation
an_index = pd.Index(np.arange(3))
an_index
Explanation: 색인 객체는 변경될 수 없기에 자료 구조 사이에서 안전하게 공유될 수 있다.
End of explanation
s7= Series([1.5, -2.5, 0], index=an_index)
s7.index is an_index
Explanation: 앞서 선언된 an_index를 새로운 Series 나 DataFrame 을 생성하는 데에 사용할 수 있으며, 사용된 index가 무엇인지를 확인할 수도 있다.
End of explanation
df3
Explanation: DataFrame 객체에서 사용되는 Index 객체
index 속성
columns 속성
End of explanation
df3.columns
df3.index
df3.columns[:2]
Explanation: columns와 index 속성 모두 Index 객체이다.
End of explanation
'debt' in df3.columns
'four' in df3.index
Explanation: in 연산자 활용하기
in 연산자를 활용하여 index 와 columns에 사용된 행과 열의 이름의 존재여부를 확인할 수 있다.
End of explanation
s8 = Series([4.3, 9.2, 8.1, 3.9], index= ['b', 'c', 'a', 'd'])
s8
Explanation: 각각의 색인은 담고 있는 데이터에 대한 정보를 취급하는 여러 가지 메서드와 속성을 가지고 있다. [표 5-3]을 참고하자.
Series와 DataFrame 관련 연산 및 주요 메소드
Series나 DataFrame 형식으로 저장된 데이터를 다루는 주요 연산 및 기능을 설명한다.
재색인(reindex) 메소드
reindex() 메소드는 지정된 색인을 사용해서 새로운 Series나 DataFrame 객체를 생성한다.
Series의 경우 재색인
End of explanation
s9 = s8.reindex(['a', 'b', 'c', 'd', 'e', 'f'])
s9
Explanation: reindex() 메소드를 이용하여 인덱스를 새로 지정할 수 있다.
주의: 새로 사용되는 항목이 index에 추가되면 NaN이 값으로 사용된다.
End of explanation
s8.reindex(['a','b','c','d','e', 'f'], fill_value=0.0)
Explanation: 누락된 값을 지정된 값으로 채울 수도 있다.
End of explanation
s9 = Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
s9
s9.reindex(range(6))
s9.reindex(range(6), method='ffill')
s9.reindex(range(6), method='bfill')
s9.reindex(range(6), method='nearest')
Explanation: method 옵션
시계열(time series) 등과 데이터 처럼 어떠 순서에 따라 정렬된 데이터를 재색인할 때
보간법을 이용하여 누락된 값들을 채워 넣어야 하는 경우가 있다.
이런 경우 method 옵션을 이용하며, ffill, bfill, nearest 등을 옵션값으로 활용한다.
End of explanation
data = np.arange(9).reshape(3, 3)
data
df6 = DataFrame(data, index=['a', 'b', 'd'], columns= ['Ohio', 'Texas', 'California'])
df6
Explanation: DataFrame의 경우 재색인
행과 열에 대해 모두 사용이 가능하다.
End of explanation
df7 = df6.reindex(['a', 'b', 'c', 'd'])
df7
Explanation: index 속성의 재색인은 Series의 경우와 동일하다.
End of explanation
states = ['Texas', 'Utah', 'California']
df6.reindex(columns=states)
Explanation: columns 속성의 재색인은 키워드(예약어)를 사용한다.
End of explanation
df6.reindex(index=['a', 'b', 'c', 'd'], method='ffill')
df6.reindex(index=['a', 'b', 'c', 'd'], method='bfill')
df6.reindex(index=['a', 2, 3, 4])
Explanation: method 옵션을 이용한 보간은 행 대해서만 이루어진다.
End of explanation
df6.reindex(index=['a', 'b', 'c', 'd'], method='nearest')
Explanation: method='nearest'는 인덱스가 모두 숫자인 경우에만 적용할 수 있다.
End of explanation
df6
Explanation: 주의
reindex는 기존 자료를 변경하지 않는다.
End of explanation
states
df6.loc[['a', 'b', 'c', 'd'], states]
Explanation: loc 메소드를 이용한 재색인
loc 메소드를 이용하여 재색인이 가능하다.
End of explanation
obj = Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])
obj
new_obj = obj.drop('c')
new_obj
obj.drop(['d', 'c'])
Explanation: 5.2.2 하나의 로우 또는 칼럼 제외하기: drop 메소드
drop 메서드를 사용하여 지정된 행 또는 열을 제외하여 새로운 Series나 DataFrame을 생성할 수 있다.
End of explanation
df7
Explanation: DataFrame에서는 로우와 칼럼 모두에서 값을 삭제할 수 있다.
End of explanation
df7.drop('a', axis=0)
df7.drop('Ohio', axis=1)
Explanation: 행 삭제
End of explanation
df7
df7.drop('Ohio', axis=1)
df7
Explanation: drop() 메소드는 기존의 자료를 건드리지 않는다.
End of explanation
obj = Series(np.arange(4.), index=['a', 'b', 'c', 'd'])
obj['b':'c']
Explanation: 5.2.2 하나의 로우 또는 칼럼 삭제하기: del 메소드
del 메서드를 사용하여 지정된 행 또는 열을 삭제할 수 있다.
5.2.3 색인하기, 선택하기, 거르기
Series의 색인 (obj[...])은 NumPy 배열의 색인과 유사하게 동작하는데, Series의 색인은 정수가 아니어도 된다는 점이 다르다.
라벨 이름으로 슬라이싱하는 것은 시작점과 끝점을 포함한다는 점이 일반 파이선에서 슬라이싱과 다른 점이다.
End of explanation
obj['b':'c'] = 5
obj
Explanation: 슬라이싱 문법으로 선택된 영역에 값을 대입하는 것은 예상한 대로 동작한다.
End of explanation
data = DataFrame(np.arange(16).reshape((4, 4)),
index=['Ohio', 'Colorado', 'Utah', 'New York'],
columns = ['one', 'two', 'three', 'four'])
data
data['two']
data[['three', 'one']]
Explanation: 앞에서 확인한대로 색인으로 DataFrame에서 칼럼의 값을 하나 이상 가져올 수 있다.
End of explanation
data[:2]
data[data['three'] > 5]
Explanation: 슬라이싱으로 로우를 선택하거나 불리언 배열로 칼럼을 선택할 수 있다.
End of explanation
data < 5
data[data < 5] = 0
data
Explanation: 이 문법에 모순이 있다고 생각할 수 있지만, 실용성에 기인한 것일 뿐이다.
또 다른 사례는 스칼라 비교를 통해 생성된 불리언 DataFrame을 사용해서 값을 선택하는 것이다.
End of explanation
data.ix['Colorado', ['two', 'three']]
data.ix[['Colorado', 'Utah'], [3,0,1]]
data.ix[2]
data.ix[:'Utah', 'two']
data.ix[data.three > 5, :3]
Explanation: 이 예제는 DataFrame을 ndarray와 문법적으로 비슷하게 보이도록 의도한 것이다.
DataFrame의 칼럼에 대해 라벨로 색인하는 방법으로, 특수한 색인 필드인 ix를 소개한다. ix는 NumPy와 비슷한 방식에 추가적으로 축의 라벨을 사용하여 DataFrame의 로우와 칼럼을 선택할 수 있도록 한다. 앞에서 언급했듯이 이 방법은 재색인을 좀 더 간단하게 할 수 있는 방법이다.
End of explanation
s1 = Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd','e'])
s2 = Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])
s1 + s2
Explanation: 지금까지 살펴봤듯이 pandas 객체에서 데이터를 선택하고 재배열하는 방법은 여러 가지가 있다. [표 5-6]에 다양한 방법을 정리해두었다. 나중에 살펴볼 계층적 색인을 이용하면 좀 더 다양한 방법을 사용할 수 있다.
5.2.4 산술연산과 데이터 정렬
pandas에서 중요한 기능은 색인이 다른 객체 간의 산술연산이다. 객체를 더할 때 짝이 맞지 않는 색인이 있다면 결과에 두 색인이 통합된다.
End of explanation
df1 = DataFrame(np.arange(9.).reshape((3, 3)), columns=list('bcd'),
index=['Ohio', 'Texas', 'Colorado'])
df2 = DataFrame(np.arange(12.).reshape((4,3)), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
df1 + df2
Explanation: 서로 겹치는 색인이 없다면 데이터는 NA 값이 된다. 산술연산 시 누락된 값은 전파되며, DataFrame에서는 로우와 칼럼 모두에 적용된다.
End of explanation
df1 = DataFrame(np.arange(12.).reshape((3,4)), columns=list('abcd'))
df2 = DataFrame(np.arange(20.).reshape((4,5)), columns=list('abcde'))
df1
df2
df1 + df2
Explanation: 산술연산 메서드에 채워 넣을 값 지정하기
서로 다른 색인을 가지는 객체 간의 산술연산에서 존재하지 않는 축의 값을 특수한 값( 0 같은)으로 지정하고 싶을 때는 다음과 같이 할 수 있다.
End of explanation
df1.add(df2, fill_value=0)
df1.reindex(columns=df2.columns, fill_value=0)
Explanation: 이 둘을 더했을 때 겹치지 않는 부분의 값이 NA값이 된 것을 알 수 있다.
df1의 add메서드로 df2와 fill_value 값을 인자로 전달한다.
End of explanation
arr = np.arange(12).reshape(3, 4)
arr
arr[0]
arr - arr[0]
Explanation: Series나 DataFrame을 재색인할 때 역시 fill_value를 지정할 수 있다.
DataFrame과 Series 간의 연산
NumPy 배열의 연산처럼 DataFrame과 Series 간의 연산도 잘 정의되어 있다. 먼저 2차원 배열과 그 배열 중 한 칼럼의 차이에 대해서 생각할 수 있는 예제를 살펴보자.
End of explanation
frame = DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
series = frame.ix[0]
frame
series
Explanation: 이 예제는 브로드캐스팅에 대한 예제로 자세한 내용은 12장에서 살펴볼 것이다. DataFrame과 Series간의 연산은 이와 유사하다.
End of explanation
frame - series
Explanation: 기본적으로 DataFrame과 Series 간의 산술 연산은 Series의 색인을 DataFrame의 칼럼에 맞추고 아래 로우로 전파한다.
End of explanation
series2 = Series(range(3), index = list('bef'))
frame + series2
Explanation: 만약 색인 값을 DataFrame의 칼럼이나 Series의 색인에서 찾을 수 없다면 그 객체는 형식을 맞추기 위해 재색인된다.
End of explanation
series3 = frame['d']
frame
series3
frame.sub(series3, axis=0)
Explanation: 만약 각 로우에 대해 연산을 수행하고 싶다면 산술연산 메서드를 사용하면 된다.
End of explanation
frame = DataFrame(np.random.randn(4,3), columns=list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
frame
np.abs(frame) #절대값
Explanation: 5.2.5 함수 적용과 매핑
pandas 객체에도 NumPy의 유니버설 함수( 배열의 각 원소에 적용되는 메서드)를 적용할 수 있다.
End of explanation
f = lambda x: x.max() - x.min()
frame.apply(f)
frame.apply(f, axis=1)
Explanation: 자주 사용되는 또 다른 연산은 각 로우나 칼럼의 1차원 배열에 함수를 적용하는 것이다.
DataFrame의 apply 메서드를 통해 수행할 수 있다.
End of explanation
def f(x):
return Series([x.min(), x.max()], index=['min', 'max'])
frame.apply(f)
Explanation: 배열의 합계나 평균 같은 일반적인 통계는 DataFrame의 메서드로 있으므로 apply 메서드를 사용해야만 하는 것은 아니다.
apply 메서드에 전달된 함수는 스칼라 값을 반환할 필요가 없으며, Series 또는 여러 값을 반환해도 된다.
End of explanation
format = lambda x: '%.2f' % x
frame.applymap(format)
Explanation: 배열의 각 원소에 적용되는 파이썬의 함수를 사용할 수도 있다. frame 객체에서 실수 값을 문자열 포맷으로 변환하고 싶다면 applymap을 이용해서 다음과 같이 해도 된다.
End of explanation
frame['e'].map(format)
Explanation: 이 메서드의 이름이 applymap인 이유는 Series가 각 원소에 적용할 함수를 지정하기 위한 map 메서드를 가지고 있기 때문이다.
End of explanation
frame['e'].map(format)
Explanation: 5.2.6 정렬과 순위
어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반화하는 sort_index 메서드를 사용하면 된다.
End of explanation
obj = Series(range(4), index=['d', 'a', 'b', 'c'])
obj.sort_index()
Explanation: 5.2.6
어떤 기준에 근거해서 데이터를 정렬하는 것 역시 중요한 명령이다. 로우나 칼럼의 색인을 알파벳 순으로 정렬하려면 정렬된 새로운 객체를 반환하는 sort_index 메서드를 사용하면 된다.
End of explanation
frame = DataFrame(np.arange(8).reshape((2,4)), index = ['three', 'one'], columns = ['d', 'a', 'b', 'c'])
frame.sort_index()
frame.sort_index(axis=1)
Explanation: DataFrame은 로우나 칼럼 중 하나의 축을 기준으로 정렬할 수 있다.
End of explanation
frame.sort_index(axis=1, ascending=False)
Explanation: 데이터는 기본적으로 오름차순으로 정렬되지만 내림차순으로 정렬할 수도 있다.
End of explanation
obj.sort_values()
obj = Series([4, 7, -3, 2])
obj.sort_values()
Explanation: Series 객체를 값에 따라 정렬하고 싶다면 sort_values 메서드를 사용한다.
End of explanation
frame = DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
frame
frame.sort_values(by='b')
Explanation: 정렬할 때 비어있는 값은 기본적으로 Series 객체에서 가장 마지막에 위치한다.
obj = Series([4, np.nan, 7, np.nan, -3, 2])
obj.sort_values()
DataFrame에서는 하나 이상의 칼럼에 있는 값으로 정렬이 필요할 수 있다. 이럴 때는 by 옵션에 필요한 칼럼의 이름을 넘기면 된다.
End of explanation
frame.sort_values(by=['a','b'])
Explanation: 여러 개의 칼럼을 정렬하려면 칼럼의 이름이 담긴 리스트를 전달하면 된다.
End of explanation
obj = Series([7, -5, 7, 4, 2, 0 ,4])
obj.rank()
Explanation: 순위는 정렬과 거의 흡사하며, 1부터 배열의 유효한 데이터 개수까지 순위를 매긴다. 또한 순위는 numpy.argsort에서 반환하는 간접 정렬 색인과 유사한데, 동률인 순위를 처리하는 방식이 다르다. 기본적으로 Series와 DataFrame의 rank 메서드는 동점인 항목에 대해서는 평균 순위를 매긴다.
End of explanation
obj.rank(method='first')
Explanation: 데이터 상에서 나타나는 순서에 따라 순위를 매길 수도 있다.
End of explanation
# 'max' 는 같은 값을 가지는 그룹을 높은 순위로 매긴다.
obj.rank(ascending=False, method='max')
Explanation: 내림차순으로 순위를 매길 수도 있다.
End of explanation
obj = Series(range(5), index=['a', 'a', 'b', 'b', 'c'])
obj
Explanation: 5.2.7 중복 색인
지금까지 살펴본 모든 예제는 모두 축의 이름(색인 값)이 유일했다.
pandas의 많은 함수(reindex 같은) 에서 색인 값은 유일해야 하지만 강제 사항은 아니다. 이제 색인 값이 중복된 Series객체를 살펴보자.
End of explanation
obj.index.is_unique
Explanation: 색인의 is_unique 속성은 해당 값이 유일한지 아닌지 알려준다.
End of explanation
obj['a']
obj['c']
Explanation: 중복되는 색인 값이 있으면 색인을 이용한 데이터 선택은 다르게 동작하고 하나의 Series 객체를 반환한다. 하지만 중복되는 색인 값이 없으면 색인을 이용한 데이터 선택은 스칼라 값을 반환한다.
End of explanation
df = DataFrame(np.random.randn(4, 3), index=['a', 'a', 'b','b'])
df
df.ix['b']
Explanation: DataFrame에서 로우를 선택하는 것도 동일하다.
End of explanation |
10,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step5: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step8: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step10: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step11: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step12: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step13: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step14: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step15: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step16: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step17: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
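# Quick sanity check on the preprocessing above: number of reviews and total tokens.
print("reviews:", len(reviews), " total words:", len(words))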
from collections import Counter
# Create your dictionary that maps vocab words to integers here
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
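# A couple of sanity checks on the features array built above: one row per review,
# exactly seq_len columns each, with shorter reviews left-padded with zeros.
assert features.shape == (len(reviews_ints), seq_len)
print(features[0, :10], features[0, -10:])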
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
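# Quick usage example for get_batches: grab one training batch and check its shape.
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)   # expected: (batch_size, 200) and (batch_size,)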
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
10,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a binary tumor/normal classifier and explain via Shapely values
Train a neural network on TCGA+TARGET+GTEX gene expression to classify tumor vs. normal.
Evalute the model and explain using SHapley Additive exPlanations
Step1: Load and Wrangle Data
Step2: Build and Train Model
Step3: Explain
Evaluate several tissues to see if Shapely values re-capitulate any known biomarkers
Step4: Explain Breast Predictions | Python Code:
import os
import json
import numpy as np
import pandas as pd
import keras
import matplotlib.pyplot as plt
# fix random seed for reproducibility
np.random.seed(42)
Explanation: Train a binary tumor/normal classifier and explain via Shapley values
Train a neural network on TCGA+TARGET+GTEX gene expression to classify tumor vs. normal.
Evaluate the model and explain using SHapley Additive exPlanations
End of explanation
%%time
X = pd.read_hdf("data/tcga_target_gtex.h5", "expression")
Y = pd.read_hdf("data/tcga_target_gtex.h5", "labels")
# Prune X to only KEGG pathway genes
# with open("data/c2.cp.kegg.v6.1.symbols.gmt") as f:
# genes_subset = list(set().union(*[line.strip().split("\t")[2:] for line in f.readlines()]))
# Prune X to only Cosmic Cancer Genes
genes_subset = pd.read_csv("data/cosmic_germline.tsv", sep="\t")["Gene Symbol"].values
X_pruned = X.drop(labels=(set(X.columns) - set(genes_subset)), axis=1, errors="ignore")
# order must match dataframe
genes = list(X_pruned.columns.values)
print("Pruned expression to only include", len(genes), "genes")
# Create a one-hot for tumor/normal training and numeric disease label for stratification
from sklearn.preprocessing import LabelEncoder
tumor_normal_encoder = LabelEncoder()
Y["tumor_normal_value"] = pd.Series(tumor_normal_encoder.fit_transform(Y["tumor_normal"]), index=Y.index)
disease_encoder = LabelEncoder()
Y["disease_value"] = pd.Series(disease_encoder.fit_transform(Y["disease"]), index=Y.index)
# Divide into training and test sets strattified by disease
# Split into stratified training and test sets based primary site
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(X.values, Y.disease):
X_train = X_pruned.values[train_index]
X_test = X_pruned.values[test_index]
Y_train = Y.iloc[train_index]
Y_test = Y.iloc[test_index]
print(X_train.shape, X_test.shape)
# Lets see how big each class is based on primary site
plt.hist(Y_train.disease_value.values, alpha=0.5, label='Train')
plt.hist(Y_test.disease_value.values, alpha=0.5, label='Test')
plt.legend(loc='upper right')
plt.title("Disease distribution between train and test sets")
plt.show()
# Lets see how big each class is based on primary site
plt.hist(Y_train.tumor_normal_value.values, alpha=0.5, label='Train')
plt.hist(Y_test.tumor_normal_value.values, alpha=0.5, label='Test')
plt.legend(loc='upper right')
plt.title("Tumor/normal distribution between train and test sets")
plt.show()
Explanation: Load and Wrangle Data
End of explanation
%%time
from keras.models import Model
from keras.layers import Input, BatchNormalization, Dense, Dropout
from keras.callbacks import EarlyStopping
from keras import regularizers
def create_model(input_shape, output_shape, params):
inputs = Input(shape=(input_shape,))
x = BatchNormalization()(inputs)
x = Dense(16, activation="relu")(x)
x = Dropout(0.5)(x)
x = Dense(16, activation="relu")(x)
x = Dropout(0.5)(x)
outputs = Dense(output_shape, kernel_initializer="normal", activation="sigmoid")(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
return model
model = create_model(X_train.shape[1], 1, {})
model.summary()
callbacks = [EarlyStopping(monitor="acc", min_delta=0.05, patience=2, verbose=2, mode="max")]
model.fit(X_train, Y_train.tumor_normal_value.values, epochs=10, batch_size=128, shuffle="batch", callbacks=callbacks)
print(model.metrics_names, model.evaluate(X_test, Y_test.tumor_normal_value.values))
# Save the model to disk so we can read and predict without training
# See https://github.com/h5py/h5py/issues/712
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
with open("models/disease.params.json", "w") as f:
f.write(json.dumps({
"tumor_normal": tumor_normal_encoder.classes_.tolist(),
"diseases": disease_encoder.classes_.tolist(),
"genes": genes}))
with open("models/disease.model.json", "w") as f:
f.write(model.to_json())
model.save_weights("models/disease.weights.h5")
# Load the model and predict the test set so we're using exactly what we'll load later from disk
model = keras.models.model_from_json(open("models/disease.model.json").read())
model.load_weights("models/disease.weights.h5")
params = json.loads(open("models/disease.params.json").read())
Explanation: Build and Train Model
End of explanation
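# The comment above mentions predicting the test set with the reloaded model. A minimal
# sketch: a model restored from JSON + weights has to be compiled before evaluate().
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
print(model.metrics_names, model.evaluate(X_test, Y_test.tumor_normal_value.values))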
import shap
# import warnings; warnings.simplefilter('ignore')
def shap_predict(X):
return model.predict(X).flatten()
shap.initjs()
Explanation: Explain
Evaluate several tissues to see if Shapely values re-capitulate any known biomarkers
End of explanation
# Select tumor and normal samples from a single tissue as background
X_tumor = X_pruned.loc[Y[Y.disease == "Breast Invasive Carcinoma"].index]
X_normal = X_pruned.loc[Y[Y.disease == "Breast - Mammary Tissue"].index]
print("Found {} tumor and {} normal samples".format(X_tumor.shape[0], X_normal.shape[0]))
background_samples = pd.concat([X_tumor.iloc[:25], X_normal.iloc[:25]])
print("Explantion based on {} samples".format(background_samples.shape[0]))
explainer = shap.KernelExplainer(shap_predict, background_samples)
# Show details for a tumor sample
np.random.seed(42)
sample_shap_values = explainer.shap_values(X_tumor.iloc[10], nsamples=150)
shap.force_plot(sample_shap_values, X_tumor.iloc[10])
# Show details for a normal sample
np.random.seed(42)
sample_shap_values = explainer.shap_values(X_normal.iloc[0], nsamples=150)
shap.force_plot(sample_shap_values, X_normal.iloc[0])
# Explain a subset of tumor and normal samples
background_shap_values = explainer.shap_values(background_samples.iloc[::5], nsamples=150)
shap.force_plot(background_shap_values, background_samples.iloc[::5])
shap.summary_plot(background_shap_values, background_samples.iloc[::5])
Explanation: Explain Breast Predictions
End of explanation |
10,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Vectors
Word vectors are vectors of real numbers where each data point captures a dimension of the word's meaning.
The word vectors can be used as features in many natural language processing and machine learning applications.
Interesting properties
Word vectors are able to capture many linguistic regularities and semantically similar words have similar vectors.
A simple way to investigate the learned representations is to find the closest words for a user-specified word. The distance tool serves that purpose. For example, if you enter 'france', distance will display the most similar words and their distances to 'france', such as other countries.
Once the words are mapped into numerical vectors, you can apply algebraic operations to them, for example vector operations
Step1: Now we will load the word embeddings as a Python dictionary.
As stated, these have already been obtained through a machine learning algorithm.
It's a pre-trained subset of the huge Google News dataset (see Introduction).
Step2: Now that the model is loaded, we can take a look at the word representations. First, note that the word_embeddings is a dictionary. Each word is the key to the entry, and the value is its corresponding vector presentation. Remember that square brackets allow access to any entry if the key exists.
Step3: Each of the word embedding is a 300-dimensional vector.
Step4: It is important to note that we store each vector as a NumPy array. It allows us to use the linear algebra operations on it.
How to visualise the word vectors
Word embeddings are multidimensional arrays, usually with hundreds of attributes, which poses a challenge for their interpretation.
One way around this is to plot charts that illustrate the mechanics in Python.
Plotting the dots gives an idea of where the words sit, while the arrow representations help to visualise the vectors' alignment as well.
Step7: Plotting the vectors using PCA
Now we will explore plotting the vectors and the distance between word vectors after reducing their dimension, using the principal component analysis (PCA) technique that we have already seen. As we noticed, we are working in a 300-dimensional space in this case. Although such a space poses no problem computationally, it is impossible to visualise results in such high dimensional spaces.
PCA is a method that projects our vectors into a space of reduced dimension, while keeping the maximum information about the original vectors in their reduced counterparts. In this case, by maximum information we mean that the Euclidean distance between the original vectors and their projected siblings is minimal. Hence vectors that were originally close in the embeddings dictionary will produce lower-dimensional vectors that are still close to each other.
When you map out the words, similar words will be clustered next to each other. For example, the words 'sad', 'happy', 'joyful' all describe emotion and are supposed to be near each other when plotted. The words
Step8: Plot the words
Now we will use the PCA function to plot a few words.
You will see that similar words tend to be clustered near each other.
Sometimes, even antonyms tend to be clustered near each other. Antonyms describe the same thing but just tend to be on the other end of the scale. They are usually found in the same location of a sentence, have the same parts of speech and thus - when learning the word vectors - you end up getting similar weights.
Step9: Now we will reduce the dimensions down to 2 (two!) so that we will be able to plot them into a scatter plot.
Step10: Now that we have a manageable number of dimensions we can draw the words on a scatter plot
Step11: Do you notice it?
The word vectors for 'gas', 'oil' and 'petroleum' appear related to each other, because their vectors are close to each other. Similarly the other clusters, e.g. 'sad', 'joyful' and 'happy' all express emotions and are also near each other.
You can take advantage of this type of consistent encoding to identify patterns. For example, if you had the word doctor and you were to find the words closest to it by computing some kind of similarity, you might get the words doctors, nurse, cardiologist, surgeon, etc.
To calculate a similarity function we would need to be able to calculate the distance between the word vectors.
We start by plotting an arrow between each word and a common Origin (0,0)
Step12: When you perform a PCA reduction, you retain as much as possible information but something is always lost. This is why you normally keep all dimensions for your model building and use PCA for visualisations or some reduction.
Here is an example of another characteristic of word vectors that was not immediately visible in the previous plots which used a reduced-dimension data set.
Step13: The arrows are different since we have selected two specific dimensions and not the reduced PCA ones and words are more spread, not necessarily clustered together.
But note that similar words like 'village' and 'town' or 'country' and 'continent' tend to point in the same direction.
Also, note that 'sad' and 'happy' look close to each other; however, the vectors point in opposite directions.
In this chart, one can figure out the angles and distances between the words. Some words are close in both kinds of similarity metrics.
Word distance
Now we will explore another peculiarity of the word vectors
Step14: The distance between the words village and town is plotted in blue.
Note that it has a similar length to the distance between happy and sad. This is a characteristic that is useful for pattern recognition and prediction, as we will now see.
Predict relationships among words
Now we will write a function that will use the word embeddings (specifically the distance) to predict relationships among words!
* The function will take as input three words.
* The first two are related to each other.
* It will predict a 4th word which is related to the third word in a similar manner as the two first words are related to each other.
* As an example, "Athens is to Greece as Bangkok is to ______"?
We will use the function to tell the capital of a country.
To do this, we will first compute the distance
Step15: We can observe that the vector 'country' that we expected to be the same as the vector for Spain is not exactly it.
Step17: So, we have to look for the closest words in the embedding that matches the candidate country. If the word embedding works as expected, the most similar word must be 'Spain'.
Let us define a function that helps us to do it. We will store our word embedding as a DataFrame, which facilitate the lookup operations based on the numerical vectors.
The Euclidean similarity metric allows to identify how far two points or two vectors are apart from each other.
The Euclidean distance is the length of the straight line segment connecting two vector points in the vector space
Step18: Now let us find the name that corresponds to our numerical country
Step19: Yes, as expected!
Now you have a simple process to get unknown relationships between words by the use of known relationships between others.
The only catch here is that you need a vector space where the representations capture the relative meaning of words.
Step20: Once we have a vector representing the "concept of Capital" (the difference between the vectors of a country and its capital city) we can directly search for the closest word, without calculating the capital vector each time.
Now let's try with another combination
Step21: However, it does not always work
Step22: We can improve this by using a different similarity metric than the Euclidean distance and that is the cosine distance.
The cosine similarity function is one of the most
popular similarity functions.
The cosine distance basically makes use of the cosine
of the angle between two vectors. And based off that, it tells
whether two vectors are close or not.
The cosine similarity also allows us to overcome a problem when using Euclidean distance
Step24: Remember that if the angle is small,
the cosine would be close to one and that means the two words are more similar. And as the angle approaches 90 degrees,
the cosine approaches zero and the words are less similar.
Finding the country of each capital
Now, we will use the cosine distance to find the country of each capital city, in a similar way as previously, putting it all together in one handy function
Step25: And this time it predicted correctly that Lisbon is the capital city of Portugal!
Model Accuracy
Now we will test the new function on the dataset and check the accuracy of the model | Python Code:
import pandas as pd # to read the dataset
data = pd.read_csv('../datasets/capitals.txt', delimiter=' ')
data.columns = ['city1', 'country1', 'city2', 'country2']
# print first and last five elements in the DataFrame
data.head()
data.tail()
data.describe()
Explanation: Word Vectors
Word vectors are vectors of real numbers where each data point captures a dimension of the word's meaning.
The word vectors can be used as features in many natural language processing and machine learning applications.
Interesting properties
Word vectors are able to capture many linguistic regularities and semantically similar words have similar vectors.
A simple way to investigate the learned representations is to find the closest words for a user-specified word. The distance tool serves that purpose. For example, if you enter 'france', distance will display the most similar words and their distances to 'france', such as other countries.
Once the words are mapped into numerical vectors, you can apply algebraic operations to them, for example vector operations:
vector('Paris') - vector('France') + vector('Italy') results in a vector that is very close to vector('Rome').
Similarly vector('king') - vector('man') + vector('woman') is close to vector('queen').
Many methods exist to map this mathematical embedding into a vector space with lower dimension. One of them is Google's word2vec:
The word2vec tool takes a text corpus as input and produces the word vectors as output. It first constructs a vocabulary from the training text data and then learns vector representation of words.
To observe strong regularities in the word vector space, it is needed to train the models on large data set, with sufficient vector dimensionality.
Using the word2vec tool, it is possible to train models on huge data sets (up to hundreds of billions of words).
We are publishing pre-trained vectors trained on part of Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases.
Note that because the original google news word embedding dataset is about 3.64 gigabytes,
the workspace is not able to handle the full file set. So I have downloaded the full dataset,
extracted a sample of the words that we're going to analyse and saved
it in a pickle file called word_embeddings_capitals.p
Read the data
End of explanation
import pickle # object serializer: convert byte stream into a Python object
word_embeddings = pickle.load(open("../datasets/word_embeddings_subset.p", "rb"))
Explanation: Now we will load the word embeddings as a Python dictionary.
As stated, these have already been obtained through a machine learning algorithm.
It's a pre-trained subset of the huge Google News dataset (see Introduction).
End of explanation
print("dimension: ", word_embeddings['Spain'].shape[0])
Explanation: Now that the model is loaded, we can take a look at the word representations. First, note that the word_embeddings is a dictionary. Each word is the key to the entry, and the value is its corresponding vector representation. Remember that square brackets allow access to any entry if the key exists.
End of explanation
countryVector = word_embeddings['country'] # Get the vector representation for the word 'country'
print(type(countryVector)) # Print the type of the vector. Note it is a numpy array
print(countryVector[0:10]) # Print the first ten values of the vector.
Explanation: Each of the word embedding is a 300-dimensional vector.
End of explanation
import numpy as np
# Helper: Get the vector representation for a given word
def getVector(w):
return word_embeddings[w]
Explanation: It is important to note that we store each vector as a NumPy array. It allows us to use the linear algebra operations on it.
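For instance, a couple of quick operations (a small sketch; 'king' and 'queen' are assumed to be present in this embedding subset, as they are used again further below):
king = getVector('king')
queen = getVector('queen')
print((king + queen).shape)          # element-wise addition keeps the 300 dimensions
print(np.linalg.norm(king - queen))  # length (norm) of the difference vector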
How to visualise the word vectors
Word embeddings are multidimensional arrays, usually with hundreds of attributes, which poses a challenge for their interpretation.
One way around this is to plot charts that illustrate the mechanics in Python.
Plotting the dots gives an idea of where the words sit, while the arrow representations help to visualise the vectors' alignment as well.
End of explanation
def computePCA(X, n_components=2):
    '''
    Input:
        X: of dimension (m,n) where each row corresponds to a word vector
        n_components: number of components you want to keep (reduce to n_components)
    Output:
        X_reduced: the data transformed into n_components dimensions/columns,
                   i.e. a new matrix of dimension (m, n_components)
    '''
# First mean center the data
X_demeaned = X - np.mean(X,axis=0)
# calculate the covariance matrix
covariance_matrix = np.cov(X_demeaned, rowvar=False)
# calculate eigenvectors & eigenvalues of the covariance matrix
eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix, UPLO='L')
# sort eigenvalue in increasing order (get the indices from the sort)
idx_sorted = np.argsort(eigen_vals)
# reverse the order so that it's from highest to lowest.
idx_sorted_decreasing = idx_sorted[::-1]
# sort the eigen values by idx_sorted_decreasing
eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]
# sort eigenvectors using the idx_sorted_decreasing indices
eigen_vecs_sorted = eigen_vecs[:,idx_sorted_decreasing]
# select the first n eigenvectors (n is desired dimension
# of rescaled data array, or dims_rescaled_data)
eigen_vecs_subset = eigen_vecs_sorted[:,0:n_components]
# transform the data by multiplying the transpose of the eigenvectors
# with the transpose of the de-meaned data
# Then take the transpose of that product.
X_reduced = np.dot(eigen_vecs_subset.transpose(),X_demeaned.transpose()).transpose()
return X_reduced
def getVectors(embeddings, words):
    '''
    Input:
        embeddings: the word embeddings dictionary
        words: a list of words
    Output:
        X: a matrix where the rows are the embeddings corresponding to the words in the list
    '''
m = len(words)
X = np.zeros((1, 300))
for word in words:
english = word
eng_emb = embeddings[english]
X = np.row_stack((X, eng_emb))
X = X[1:,:]
return X
Explanation: Plotting the vectors using PCA
Now we will explore plotting the vectors and the distance between word vectors after reducing their dimension, using the principal component analysis (PCA) technique that we have already seen. As we noticed, we are working in a 300-dimensional space in this case. Although such a space poses no problem computationally, it is impossible to visualise results in such high dimensional spaces.
PCA is a method that projects our vectors into a space of reduced dimension, while keeping the maximum information about the original vectors in their reduced counterparts. In this case, by maximum information we mean that the Euclidean distance between the original vectors and their projected siblings is minimal. Hence vectors that were originally close in the embeddings dictionary will produce lower-dimensional vectors that are still close to each other.
When you map out the words, similar words will be clustered next to each other. For example, the words 'sad', 'happy', 'joyful' all describe emotion and are supposed to be near each other when plotted. The words: 'oil', 'gas', and 'petroleum' all describe natural resources. Words like 'city', 'village', 'town' could be seen as synonyms and describe a similar thing.
Before plotting the words, we need to first reduce each word vector with PCA into 2 dimensions and compute the eigenvectors and the eigenvalues of the covariance matrix.
We start by defining two helper functions:
End of explanation
words = ['oil', 'gas', 'happy', 'sad', 'city', 'town',
'village', 'country', 'continent', 'petroleum', 'joyful']
# given a list of words and the embeddings, it returns a matrix with all the embeddings
# word_embeddings is the pre-trained dictionary of word vectors
X = getVectors(word_embeddings, words)
print('11 words each of 300 dimensions thus X.shape is:', X.shape)
Explanation: Plot the words
Now we will use the PCA function to plot a few words.
You will see that similar words tend to be clustered near each other.
Sometimes, even antonyms tend to be clustered near each other. Antonyms describe the same thing but just tend to be on the other end of the scale. They are usually found in the same location of a sentence, have the same parts of speech and thus - when learning the word vectors - you end up getting similar weights.
End of explanation
reduced = computePCA(X, 2) # reduction to two dimensions
reduced.shape
Explanation: Now we will reduce the dimensions down to 2 (two!) so that we will be able to plot them into a scatter plot.
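As a quick cross-check, here is a hedged sketch assuming scikit-learn is available (the signs of the principal components may differ between implementations, so only the shape and overall geometry are expected to match):
from sklearn.decomposition import PCA
reduced_sk = PCA(n_components=2).fit_transform(X)
print(reduced_sk.shape)  # (11, 2), the same shape as the computePCA output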
End of explanation
import matplotlib.pyplot as plt # Import matplotlib
plt.scatter(reduced[:, 0], reduced[:, 1])
for i, word in enumerate(words):
plt.annotate(word, xy=(reduced[i, 0] - 0.05, reduced[i, 1] + 0.1))
plt.show()
Explanation: Now that we have a manageable number of dimensions we can draw the words on a scatter plot
End of explanation
fig, ax = plt.subplots(figsize = (10, 10)) # Create custom size image
# plot and annotate the words
plt.scatter(reduced[:, 0], reduced[:, 1])
for i, word in enumerate(words):
plt.annotate(word, xy=(reduced[i, 0] - 0.05, reduced[i, 1] + 0.1))
# Print an arrow for each word starting from the Origin (0,0)
for word in range(0, len(words)):
ax.arrow(0, 0, reduced[word,0], reduced[word,1], head_width=0.005, head_length=0.005, fc='r', ec='r', width = 1e-5)
plt.show()
Explanation: Do you notice it?
The word vectors for 'gas', 'oil' and 'petroleum' appear related to each other, because their vectors are close to each other. Similarly the other clusters, e.g. 'sad', 'joyful' and 'happy' all express emotions and are also near each other.
You can take advantage of this type of consistent encoding to identify patterns. For example, if you had the word doctor and you were to find the words closest to it by computing some kind of similarity, you might get the words doctors, nurse, cardiologist, surgeon, etc.
To calculate a similarity function we would need to be able to calculate the distance between the word vectors.
We start by plotting an arrow between each word and a common Origin (0,0):
End of explanation
# a subset of words
words = ['happy', 'sad', 'city', 'town', 'village', 'country', 'continent', 'joyful']
embeddedWords = np.array([getVector(word) for word in words]) # Convert each word to its vector representation
fig, ax = plt.subplots(figsize = (10, 10)) # Create custom size image
col1 = 3 # NOTE: we select a specific column (dimension) for the x axis
col2 = 2 # NOTE: and another specific column (one of the 300 dimensions) for the y axis
# Print an arrow for each word
for word in embeddedWords:
ax.arrow(0, 0, word[col1], word[col2], head_width=0.005, head_length=0.005, fc='r', ec='r', width = 1e-5)
ax.scatter(embeddedWords[:, col1], embeddedWords[:, col2]); # Plot a dot for each word
# Add the word label over each dot in the scatter plot
for i in range(0, len(words)):
ax.annotate(words[i], (embeddedWords[i, col1], embeddedWords[i, col2]))
plt.show()
Explanation: When you perform a PCA reduction you retain as much information as possible, but something is always lost. This is why you normally keep all dimensions for model building and use PCA only for visualisation or deliberate dimensionality reduction.
Here is an example of another characteristic of word vectors that was not immediately visible in the previous plots which used a reduced-dimension data set.
End of explanation
words = ['sad', 'happy', 'town', 'village']
embeddedWords = np.array([getVector(word) for word in words]) # Convert each word to its vector representation
fig, ax = plt.subplots(figsize = (10, 10)) # Create custom size image
col1 = 3 # Select the column for the x axe
col2 = 2 # Select the column for the y axe
# Print an arrow for each word
for word in embeddedWords:
ax.arrow(0, 0, word[col1], word[col2], head_width=0.0005, head_length=0.0005, fc='r', ec='r', width = 1e-5)
# print the vector difference between village and town
village = getVector('village')
town = getVector('town')
diff = town - village
ax.arrow(village[col1], village[col2], diff[col1], diff[col2], fc='b', ec='b', width = 1e-5)
# print the vector difference between village and town
sad = getVector('sad')
happy = getVector('happy')
diff = happy - sad
ax.arrow(sad[col1], sad[col2], diff[col1], diff[col2], fc='b', ec='b', width = 1e-5)
ax.scatter(embeddedWords[:, col1], embeddedWords[:, col2]); # Plot a dot for each word
# Add the word label over each dot in the scatter plot
for i in range(0, len(words)):
ax.annotate(words[i], (embeddedWords[i, col1], embeddedWords[i, col2]))
plt.show()
Explanation: The arrows are different since we have selected two specific dimensions and not the reduced PCA ones and words are more spread, not necessarily clustered together.
But note that similar words like 'village' and 'town' or 'country' and 'continent' tend to point in the same direction.
Also, note that 'sad' and 'happy' look close to each other; however, the vectors point in opposite directions.
In this chart, one can figure out the angles and distances between the words. Some words are close in both kinds of similarity metrics.
Word distance
Now we will explore another peculiarity of the word vectors: how the distance between two vectors give an indication of the words similarity.
We plot the words 'sad', 'happy', 'town', and 'village' as in the previous chart. In this same chart, display the vector from 'village' to 'town' and the vector from 'sad' to 'happy'.
End of explanation
capital = getVector('France') - getVector('Paris')
country = getVector('Madrid') + capital
print(country[0:5]) # Print the first 5 values of the vector
Explanation: The distance between the words village and town is plotted in blue.
Note that it has a similar length to the distance between happy and sad. This is a characteristic that is useful for pattern recognition and prediction, as we will now see.
Predict relationships among words
Now we will write a function that will use the word embeddings (specifically the distance) to predict relationships among words!
* The function will take as input three words.
* The first two are related to each other.
* It will predict a 4th word which is related to the third word in a similar manner as the two first words are related to each other.
* As an example, "Athens is to Greece as Bangkok is to ______"?
We will use the function to tell the capital of a country.
To do this, we will first compute the distance: the cosine similarity or the Euclidean distance.
Predicting capitals
Now, applying vector difference and addition, one can create a vector representation for a new word. For example, we can say that the vector difference between 'France' and 'Paris' represents the concept of Capital city.
One can move from the city of Madrid in the direction of the concept of Capital, and obtain something close to the corresponding country to which Madrid is the Capital.
How cool is that?
End of explanation
diff = country - getVector('Spain')
print(diff[0:5])
Explanation: We can observe that the resulting 'country' vector, which we expected to match the vector for Spain, is not exactly equal to it.
End of explanation
def euclidean(A, B):
    '''
    Input:
        A: a numpy array which corresponds to a word vector
        B: a numpy array which corresponds to a word vector
    Output:
        d: numerical value representing the Euclidean distance between A and B.
    '''
# euclidean distance
return np.linalg.norm(A - B)
# feel free to try different words
king = getVector('king')
queen = getVector('queen')
# Test the function
euclidean(king, queen)
euclidean(getVector('Italy'), getVector('Rome'))
# Create a dataframe out of the dictionary embedding.
# NOTE: This facilitate the algebraic operations
keys = word_embeddings.keys()
lista = []
for key in keys:
lista.append(word_embeddings[key])
embedding = pd.DataFrame(data=lista, index=keys)
# Define a function to find the closest word to a vector:
def find_closest_word(v):
# Calculate the vector difference from each word to the input vector
diff = embedding.values - v
# Get the norm of each difference vector.
# It means the squared euclidean distance from each word to the input vector
delta = np.sum(diff * diff, axis=1)
# Find the index of the minimun distance in the array
i = np.argmin(delta)
# Return the row name for this item
return embedding.iloc[i].name
Explanation: So, we have to look for the closest word in the embedding that matches the candidate country. If the word embedding works as expected, the most similar word must be 'Spain'.
Let us define a function that helps us to do it. We will store our word embedding as a DataFrame, which facilitates lookup operations based on the numerical vectors.
The Euclidean similarity metric allows us to identify how far apart two points or two vectors are from each other.
The Euclidean distance is the length of the straight line segment connecting two vector points in the vector space:
$ d(A,B) = \sqrt{(b_1-a_1)^2 + (b_2-a_2)^2} $
As you see, this formula is just from the Pythagorean theorem.
When you have higher dimensions, the Euclidean distance is not much more difficult: get the difference between each of their dimensions, square those differences, sum them up and then get the square root of the results.
This formula is known as the norm of the difference between the vectors that you are comparing and in Python you can use the linalg module from numpy to get the norm of the difference, which works for n-dimensional spaces.
By using this metric, you can get a sense of how similar two documents or words are.
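A tiny worked example of the formula above, using nothing beyond the classic 3-4-5 right triangle:
A = np.array([0.0, 0.0])
B = np.array([3.0, 4.0])
print(np.linalg.norm(A - B))  # 5.0, i.e. sqrt(3**2 + 4**2)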
End of explanation
find_closest_word(country)
Explanation: Now let us find the name that corresponds to our numerical country:
End of explanation
find_closest_word(getVector('Italy') - getVector('Rome') + getVector('Madrid'))
Explanation: Yes, as expected!
Now you have a simple process to get unknown relationships between words by the use of known relationships between others.
The only catch here is that you need a vector space where the representations capture the relative meaning of words.
End of explanation
# capital vector is coming from the first example: France and Paris
print(find_closest_word(getVector('Berlin') + capital))
print(find_closest_word(getVector('Beijing') + capital))
Explanation: Once we have a vector representing the "concept of Capital" (the difference between the vectors of a country and its capital city) we can directly search for the closest word, without calculating the capital vector each time.
Now let's try with another combination:
End of explanation
print(find_closest_word(getVector('Lisbon') + capital))
Explanation: However, it does not always work:
End of explanation
def cosine_similarity(A, B):
'''
Input:
A: a numpy array which corresponds to a word vector
B: another numpy array which corresponds to a second word vector
Output:
cosine: numerical number representing the cosine similarity between A and B.
'''
# The dot product between two vectors is the sum of the products between their elements in each dimension of the vector space.
dot = np.dot(A,B)
# The norm of a vector or its magnitude is defined to be the square root of the sum of its elements squared.
norma = np.sqrt(np.dot(A,A))
normb = np.sqrt(np.dot(B,B))
#the cosine of the angle is equal to the dot product between the vectors divided by the product of the two norms.
cosine = dot / (norma*normb)
return cosine
# feel free to try different words
king = word_embeddings['king']
queen = word_embeddings['queen']
cosine_similarity(king, queen)
village = word_embeddings['village']
cosine_similarity(king, village)
Explanation: We can improve this by using a different similarity metric than the Euclidean distance and that is the cosine distance.
The cosine similarity function is one of the most
popular similarity functions.
The cosine distance basically makes use of the cosine
of the angle between two vectors. And based off that, it tells
whether two vectors are close or not.
The cosine similarity also allows us to overcome a problem when using Euclidean distance: comparing vector
representations of documents or corpora when they have different number of words will return biased results.
As said, the cosine similarity is computing the cosine
of the vectors' inner angle. If the angle is small,
the cosine would be close to one. And as the angle approaches 90 degrees,
the cosine approaches zero.
In many cases, the cosine of those angles is a better
proxy of similarity between these vector representations than
their Euclidean distance.
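Two quick sanity checks of the cosine_similarity function defined above (orthogonal vectors score 0, parallel vectors score 1):
print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0 -> 90 degrees apart, dissimilar
print(cosine_similarity(np.array([1.0, 1.0]), np.array([2.0, 2.0])))  # 1.0 -> same direction, maximally similar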
End of explanation
def get_country(city1, country1, city2, embeddings):
    '''
    Input:
        city1: a string (the capital city of country1)
        country1: a string (the country of which city1 is the capital)
        city2: a string (another capital city, of an unknown country)
        embeddings: a dictionary where the keys are words and the values are their embeddings
    Output:
        country: a tuple with the most likely country and its similarity score
    '''
# store the city1, country 1, and city 2 in a set called group
group = set((city1, country1, city2))
# get embeddings of city 1
city1Emb = word_embeddings[city1]
# get embedding of country 1
country1Emb = word_embeddings[country1]
# get embedding of city 2
city2Emb = word_embeddings[city2]
# get embedding of country 2 (it's a combination of the embeddings of country 1, city 1 and city 2)
# Remember: King - Man + Woman = Queen
connectingVector = country1Emb - city1Emb + city2Emb
# Initialize the similarity to -1 (it will be replaced by a similarities that are closer to +1)
bestSimilarity = -1
# initialize country to an empty string
country = ''
# loop through all words in the embeddings dictionary
for word in embeddings.keys():
# first check that the word is not already in the 'group'
if word not in group:
# get the word embedding
wordEmb = word_embeddings[word]
# calculate cosine similarity between embedding of country 2 and the word in the embeddings dictionary
currentSimilarity = cosine_similarity(connectingVector, wordEmb)
if currentSimilarity > bestSimilarity:
bestSimilarity = currentSimilarity
# store the country as a tuple, which contains the word and the similarity
country = (word, bestSimilarity)
return country
# Testing the function, note to make it more robust you can return the 5 most similar words.
get_country('Athens', 'Greece', 'Cairo', word_embeddings)
get_country('Rome', 'Italy', 'Madrid', word_embeddings)
get_country('Rome', 'Italy', 'Berlin', word_embeddings)
get_country('Rome', 'Italy', 'Beijing', word_embeddings)
get_country('Rome', 'Italy', 'Lisbon', word_embeddings)
Explanation: Remember that if the angle is small,
the cosine would be close to one and that means the two words are more similar. And as the angle approaches 90 degrees,
the cosine approaches zero and the words are less similar.
Finding the country of each capital
Now, we will use the cosine distance to find the country of each capital city, in a similar way as previously, putting it all together in one handy function:
End of explanation
def get_accuracy(word_embeddings, data):
'''
Input:
word_embeddings: a dictionary where the key is a word and the value is its embedding
data: a pandas dataframe containing all the country and capital city pairs
Output:
accuracy: the accuracy of the model
'''
num_correct = 0
# loop through the rows of the dataframe
for i, row in data.iterrows():
city1 = row['city1']
country1 = row['country1']
city2 = row['city2']
country2 = row['country2']
# use get_country to find the predicted country2
predicted_country2, _ = get_country(city1, country1, city2, word_embeddings)
# if the predicted country2 is the same as the actual country2...
if predicted_country2 == country2:
# increment the number of correct predictions by 1
num_correct += 1
m = len(data)
# calculate the accuracy by dividing the number of correct predictions by
# the number of rows in the data dataframe (length of dataframe)
accuracy = num_correct / m
return accuracy
accuracy = get_accuracy(word_embeddings, data)
print(f"Accuracy is {accuracy:.2f}")
Explanation: And this time it predicted correctly that Lisbon is the capital city of Portugal!
Model Accuracy
Now we will test the new function on the dataset and check the accuracy of the model:
$$\text{Accuracy}=\frac{\text{Correct # of predictions}}{\text{Total # of predictions}}$$
End of explanation |
10,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step2: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complementary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are also used, as it allows for the focusing of light to a single focal point where a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, it can get even more confusing).
Step3: Figure 1.10.1
Step4: Figure 1.10.2
Step7: Figure 1.10.3
Step8: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescopes are physically much smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. Leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument mimics a slowly varying pointing in the sky (a so-called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
Step9: Figure 1.10.4a
Step10: Figure 1.10.4b
Step11: Figure 1.10.5
Step12: Figure 1.10.6a
Step13: Figure 1.10.6b
Step14: Figure 1.10.6c
Step15: Figure 1.10.6d
Step16: Figure 1.10.6e | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.9 A brief introduction to interferometry
Next: 1.11 Modern Interferometric Arrays
Section status: <span style="background-color:yellow"> </span>
Import standard modules:
End of explanation
import ipywidgets
from IPython.display import Image
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/hart_26m_15m_2012-09-11_08511.jpg')
Explanation: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complementary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are also used, as it allows for the focusing of light to a single focal point where a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, it can get even more confusing).
End of explanation
Image(filename='figures/kaira_lba_element.jpg')
Explanation: Figure 1.10.1: 26 meter dish at HartRAO, South Africa used for single dish observations and as part of interferometric VLBI networks. Credit: M Gaylard / HartRAO⤴
End of explanation
Image(filename='../5_Imaging/figures/2013_kat7_20.jpg')
Explanation: Figure 1.10.2: LOFAR LBA dipole element. Credit: KAIRA/D. McKay-Bukowski⤴
End of explanation
def WhichDiameter(wavelength=1., angres=(15e-3/3600)):
    '''Compute the diameter of an aperture as a function of angular resolution and observing wavelength'''
    c = 299792458. # speed of light, m/s
    freq = c/(wavelength)/1e6 # observing frequency, MHz
    D = 1.22 * wavelength/np.radians(angres) # assuming a circular aperture
    print('At a frequency of %.3f MHz (Lambda = %.3f m)' % (freq, wavelength))
    print('the aperture diameter is D = %f m' % D)
    print('to achieve an angular resolution of %f degrees / %f arcmin / %f arcsec' % (angres, angres*60, angres*3600))
w = ipywidgets.interact(WhichDiameter, angres=((15e-3/3600), 10, 1e-5), wavelength=(0.5e-6, 1, 1e-7))
Explanation: Figure 1.10.3: Inner 5 dishes of KAT-7, a 7 element interferometric array located in South Africa which can be combined into a single synthesized telescope. Credit: SKA-SA⤴
<span style="background-color:yellow"> LB:LF:this link seems to have died</span>
Depending on the science goals of an experiment or observatory, different types of telescopes are built. So what is the main driver for building an interferometric array to create a synthesized telescope? It all comes down to the resolution of a telescope, a property which is related to the wavelength of incoming light and the physical size of the telescope.
1.10.1. Aperture Diameter and Angular Resolution
If we consider a generic dish radio telescope, ignoring blockage from feeds and structure and any practical issues, we can think of the dish as having a circular aperture. We will use the term 'primary beam' later in Chapter 7 to discuss this aperture in detail. Until then we can think of the dish aperture size as being the collecting area. The larger the aperture the more collecting area, thus the more sensitive (a measure of how well the telescope is able to measure a signal) the telescope. This is the same as in photography. Since we are modelling our simple telescope as a circle then the collection area $A$, or aperture size, is proportional to the square of the diameter of the dish $D$.
$$A \propto D^2$$
A larger aperture also increases the maximum angular resolution of the telescope i.e. the ability to differentiate between two sources (say stars) which are separated by some angular distance. Using the Rayleigh criterion the angular resolution $\Delta \theta$ (in radians) of a dish of diameter $D$ is
$$\Delta \theta = 1.22 \frac{\lambda}{D}$$
where $\lambda$ is the observing wavelength. Since light in the radio regime of the electromagnetic spectrum has a longer wavelength compared to that in the optical regime, we can see that a radio telescope with the same collecting area diameter as an optical telescope will have a much lower angular resolution.
<div class=warn>
<b>Warning:</b> Note that a higher value of $\Delta \theta$ implies lower angular resolution and vice versa.
</div>
The sensitivity of a telescope is directly proportional to its collecting area. The angular resolution of the telescope is inversely proportional to the aperture diameter. Usually, we want both high sensitivity and fine angular resolution, since we are interested in accurately measuring the strength of the signal and positions of sources. A natural way to improve both the sensitivity and angular resolution of a single telescope is to increase the collecting area.
The following table shows the angular resolution as a function of aperture diameter $D$ and observing wavelength for a single dish telescope.
| Telescope Type | Angular Resolution <br> $\Delta \theta$ | Visible <br> $\lambda$ = 500 nm | Infrared <br> $\lambda$ = 10 $\mu$m | Radio EHF <br> $\lambda$ = 10 mm <br> 30 GHz | Radio UHF <br> $\lambda$ = 1 m <br> 300 Mhz|
|:---:|:---:|:---:|:---:|:---:|:---:|
| Amateur | 0.8'' | 15 cm | 3 m | 3 km | 300 km |
| Automated Follow-up | 0.25'' | 50 cm | 10 m | 10 km | 100 km |
| Small Science | 0.12'' | 1 m | 21 m | 21 km | 2100 km |
| Large Science | 0.015'' (15 mas) | 8 m | 168 m | 168 km | 16800 km |
Table 1.10.1: Angular resolution of a telescope as a function of the aperture diameter $D$ and observing wavelength.
As we can see from the table, a radio telescope requires a diameter which is many orders of magnitude larger than that of an optical telescope to achieve the same angular resolution. It is very reasonable to build a 15 cm optical telescope, in fact they can be easily bought at a store. But a radio telescope, observing at 300 MHz, which has the same resolution (0.8 arcseconds) needs to have an aperture of 300 km! Now, this would not only be prohibitively expensive, but the engineering is completely infeasible. Just for reference, the largest single dish telescopes are on the order of a few hundred meters in diameter (see FAST in China, Arecibo in Puerto Rico). The following example shows how the diameter of a telescope varies as a function of observing wavelength and desired angular resolution.
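As a quick static check of the 300 km figure quoted above, a minimal sketch applying the same Rayleigh criterion (numpy is already imported in this notebook):
wavelength = 1.0                      # metres, i.e. roughly 300 MHz
angres = 0.8 / 3600.                  # 0.8 arcseconds, expressed in degrees
D = 1.22 * wavelength / np.radians(angres)
print('D = %.0f km' % (D / 1e3))      # ~315 km, the order of magnitude quoted in the table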
End of explanation
Image(filename='figures/gbt_300foot_telescope.jpg')
Explanation: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescopes are physically much smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. Leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument mimics a slowly varying pointing in the sky (a so-called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
End of explanation
Image(filename='figures/gbt_300foot_collapse.jpg')
Explanation: Figure 1.10.4a: 300 foot Green Bank Telescope located in West Virginia, USA during initial operations in 1962. Credit: NRAO⤴
End of explanation
Image(filename='figures/arecibo_observatory.jpg')
Explanation: Figure 1.10.4b: November, 1988, a day after the collapse of the 300 foot GBT telescope due to structural defects. Credit: NRAO⤴
End of explanation
Image(filename='figures/cartoon_1.png')
Explanation: Figure 1.10.5: 300 m Arecibo Telescope lying in a natural cavity in Puerto Rico. The receiver is located in the white spherical structure held up by wires, and is repositioned to "point" the telescope. Credit: courtesy of the NAIC - Arecibo Observatory, a facility of the NSF⤴
1.10.3 Creating a Synthesized Telescope using Interferometry
Here we will attempt to develop some intuition for what an interferometric array is and how it is related to a single dish telescope. We will construct a cartoon example before getting into the mathematics. A simple single dish telescope is made up of a primary reflector dish on a mount to point in some direction in the sky and a signal receptor at the focal point of the reflector (Figure 1.10.6a). The receptor is typically an antenna in the case of radio astronomy or a camera in optical astronomy.
Basic optics tells us how convex lenses can be used to form real images of sources that are very far away. The image of a source that is infinitely far away will form at exactly the focal point of the lens, the location of which is completely determined by the shape of the lens (under the "thin lens" approximation). Sources of astrophysical interest can be approximated as being infinitely far away as long as they are at distances much farther away than the focal point of the lens. This is immediately obvious from the equation of a thin convex lens:
$$ \frac{1}{o} + \frac{1}{i} = \frac{1}{f}, $$
where $i, ~ o$ and $f$ are the image, object and focal distances respectively. Early astronomers exploited this useful property of lenses to build the first optical telescopes. Later on concave mirrors replaced lenses because it was easier to control their physical and optical properties (e.g. curvature, surface quality etc.). Reflective paraboloids are the most efficient at focussing incoming plane waves (travelling on-axis) into a single locus (the focal point) and are therefore a good choice for the shape of a collector.
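A tiny numerical illustration of the thin-lens equation (a sketch only): as the object distance o grows, the image distance i converges to the focal length f, which is why very distant sources form images at the focal point.
f = 1.0                                   # focal length, arbitrary units
for o in [10., 100., 1e6]:
    i = 1. / (1./f - 1./o)
    print('o = %g  ->  i = %.6f' % (o, i))   # i -> f as o -> infinity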
In our simple model the sky only contains a single astrophysical source, which is detected by pointing the telescope towards its location in the sky.
End of explanation
Image(filename='figures/cartoon_2.png')
Explanation: Figure 1.10.6a: A simple dish telescope which reflects incoming plane waves (red dashed) along ray tracing paths (cyan) to a receptor at the focal point of the parabolic dish.
Ignoring real world effects like aperture blockage and reflector inefficiencies, plane waves are focused to a single point using a parabolic reflector (at that focus if a signal receptor). We can imagine the reflector is made up of many smaller reflectors, each with its own reflection path. A single dish, in the limit of fully sampling the observing wavelength $\lambda$, can be thought of as being made up of enough reflectors of diameter $\lambda/2$ to fill the collecting area of the dish. In our simple example, we just break the dish into 8 reflectors (Figure 1.10.6b). This is in fact what is often done with very large telescopes when it is not feasible to build a single large mirror, such as in the W. M. Keck Observatory. At this point we have not altered the telescope, we are just thinking about the reflector as being made up of multiple smaller reflectors.
<div class=advice>
<b>Note:</b> We can interpret a single dish telescope as a *continuous interferometer* by applying the Wiener-Khinchin theorem. See Chapter 2 of [<cite data-cite='2007isra.book.....T'>Interferometry and Synthesis in Radio Astronomy</cite> ⤴](http://adsabs.harvard.edu/abs/2007isra.book.....T) for an in depth discussion.
</div>
End of explanation
Image(filename='figures/cartoon_3.png')
Explanation: Figure 1.10.6b: The dish reflector can be thought of as being made up of multiple smaller reflectors, each with its own light path to the focus.
Now, instead of capturing all the signal at a single point, there is no reason we can not capture the signal at the smaller, individual reflector focus points. If that signal is captured, we can digitally combine the signals at the main focus point later (Figure 1.10.6c). This is the first trick of interferometry. Radio waves can be sufficiently sampled in time to digitally record the signals (this becomes more difficult at higher frequencies, and not possible in the near-infrared and higher). The cost is that a receptor needs to be built for each sub-reflector, and additional hardware is required to combine the signals. The dish optically combines the light, we are simply doing the same thing digitally.
End of explanation
Image(filename='figures/cartoon_4.png')
Explanation: Figure 1.10.6c: A receptor at each sub-reflector captures the light signals. To recreate the combined signal at the main receptor the signals are digitally combined.
The next leap is that there is no reason the sub-reflectors need to be set in the shape of a dish since the combination of the signal at the main focus is performed digitally. Since light travels at a constant speed any repositioning of a sub-reflector just requires a time delay correction. So we can move each element to the ground and construct a pointing system for each sub-reflector (Figure 1.10.6d). We now have an array of smaller single dish telescopes! By including the correct time delays on each signal, we can measure the same signal as the original, larger single dish telescope. This digital operation is called beamforming and is very important in interferometry.
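A toy delay-and-sum sketch of the idea (illustrative only; all numbers are made up): each element records the same tone with a different geometric delay, and compensating those delays before summing recovers the coherent signal a single large dish would have measured.
fs, f0 = 1e9, 5e6                                  # sample rate and tone frequency, Hz
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * f0 * t)
delays = [0, 10, 20, 30]                           # per-element geometric delays, in samples
recorded = [np.roll(tone, d) for d in delays]      # what each element measures
aligned = [np.roll(r, -d) for r, d in zip(recorded, delays)]  # apply the time-delay corrections
beam = np.mean(aligned, axis=0)                    # the beamformed (summed) signal
print(np.allclose(beam, tone))                     # True: the coherent signal is recovered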
End of explanation
Image(filename='figures/cartoon_5.png')
Explanation: Figure 1.10.6d: The sub-reflector elements of the original telescope are set on the ground with their own pointing systems. The original signal can be reconstructed digitally and by including the appropriate time delay for each telescope.
The beamforming operation recombines all the signals into a single signal, which can be thought of as a single pixel camera. However, we can do even better using a correlator. By correlating the signals we can compute visibilities which are then used to form an image (Figure 1.10.6e). This will be explained in more depth in the chapters that follow. For now it is important to know that interferometric arrays have an advantage over single dish telescopes viz. by combining signals from multiple smaller telescopes we can 'synthesize' a much larger telescope than can be constructed from a single dish. Unlike a beamformer, a correlator preserves the information needed to form an image, at the cost of additional computing hardware.
End of explanation
Image(filename='figures/cartoon_6.png')
Explanation: Figure 1.10.6e: By using correlator hardware instead of a beamformer an image of the sky can be created.
The next trick of interferometry is that we do not necessarily need to sample the entire original dish (Figure 1.10.6f). We do lose sensitivity and, as will be discussed in later chapters, spatial frequency modes, but by using only a subset of elements and exploiting interferometry we can build synthesized telescopes that are many kilometres in diameter (e.g. MeerKAT) or as large as the Earth (e.g. VLBI networks). This is why radio interferometry can be used to produce the highest resolution telescopes in the world.
End of explanation |
10,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Linear Regression
1. Introduction
Linear regression is a prediction method that is more than 200 years old. Simple linear regression is a great first machine learning algorithm to implement, as it requires you to evaluate the properties of your training dataset, yet it is simple enough for beginners to understand.
In this tutorial, you will discover how to implement the simple linear regression algorithm from scratch in Python.
After completing this tutorial, you will know
Step1: The variance is the sum squared difference for each value from the mean value. Variance for a list of numbers can be calculated as
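A minimal sketch of that calculation, in the same from-scratch style as the mean() function used in this tutorial (the sum of squared differences is not divided by the number of values here, matching the description above):
# Calculate the variance of a list of numbers, given their precomputed mean
def variance(values, mean):
    return sum([(x - mean)**2 for x in values])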
Step2: Exercise 1
(a) Put the two functions above together and test them on a given small dataset. As an example, use the small dataset of x and y values below.
x | y
--| -
1 | 1
2 | 3
4 | 3
3 | 2
5 | 5
(b) Then create a chart where you plot these points
Step3: 2.2 Calculate Covariance
The covariance of two groups of numbers describes how those numbers change together. Covariance is a generalization of correlation. Correlation describes the relationship between two groups of numbers, whereas covariance can describe the relationship between two or more groups of numbers. Additionally, the covariance can be normalized to produce a correlation value. Nevertheless, we can calculate the covariance between two variables as follows
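A minimal sketch of that calculation (same conventions as the mean and variance helpers: plain Python lists and precomputed means):
# Calculate the covariance between the lists x and y
def covariance(x, mean_x, y, mean_y):
    covar = 0.0
    for i in range(len(x)):
        covar += (x[i] - mean_x) * (y[i] - mean_y)
    return covar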
Step4: Exercise 2
Test the covariance calculation on the same small dataset presented in the
previous section.
Step5: 2.3 Estimating the Coefficients
Now we must estimate the values of the two coefficients in simple linear regression. The first is B1, which can be estimated as the covariance of x and y divided by the variance of x; the intercept then follows as B0 = mean(y) - B1 * mean(x)
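A minimal sketch combining the statistics above into the two coefficients (it assumes the mean, variance and covariance helpers sketched earlier):
# Estimate the slope B1 and intercept B0 from a dataset of [x, y] rows
def coefficients(dataset):
    x = [row[0] for row in dataset]
    y = [row[1] for row in dataset]
    x_mean, y_mean = mean(x), mean(y)
    b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
    b0 = y_mean - b1 * x_mean
    return [b0, b1]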
Step6: Exercise 3
Extend the previous exercise by including the calculation of the coefficients for the synthesized data.
Step7: 2.4 Making Predictions
The simple linear regression model is a line defined by the coefficients estimated from the training data. Once the coefficients are estimated, we can use them to make predictions. The equation for making predictions with a simple linear regression model is y = b0 + b1 * x
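A minimal sketch of the prediction step, i.e. evaluating y = b0 + b1 * x for every row of the test set (it assumes the coefficients helper above):
# Fit on the training rows and predict from the first column of each test row
def simple_linear_regression(train, test):
    b0, b1 = coefficients(train)
    return [b0 + b1 * row[0] for row in test]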
Step8: Para avaliar o modelo
adicionaremos uma função para gerenciar a avaliação das previsões denominadas evaluate_algorithm () e outra função para estimar o erro quadrático médio da raiz das previsões denominadas métrica rmse_metric (). Veja as funções abaixo
Step9: Exercicio 4
Agora junte tudo que foi criado para fazer previsões para o nosso conjunto de dados de teste.
Step10: Exercício 5
Crie um scatter plot para mostrar as previsões como uma linha e compará-lo com o conjunto de dados original. | Python Code:
# Calculate the mean value of a list of numbers
def mean(values):
return sum(values) / float(len(values))
Explanation: Simple Linear Regression
1. Introduction
Linear regression is a prediction method that is more than 200 years old. Simple linear regression is a great first machine learning algorithm to implement, as it requires you to estimate properties of your training dataset, but is simple enough for beginners to understand.
In this tutorial, you will discover how to implement the simple linear regression algorithm from scratch in Python.
After completing this tutorial, you will know:
How to estimate statistical quantities from training data.
How to estimate the linear regression coefficients from the data.
How to make predictions using linear regression on new data.
1.1 Dataset - Swedish Auto Insurance
In this tutorial we will use the Swedish Auto Insurance dataset. This dataset involves predicting the total claim payments. Download the dataset and save it in your current working directory with the file name insurance.csv.
Note: you may need to convert the European comma (,) to the decimal point (.). You will also need to change the file from whitespace-separated variables to CSV format.
1.2 Simple Linear Regression Algorithm
Linear regression assumes a linear, or straight-line, relationship between the input variables (X) and the single output variable (y). More specifically, that output (y) can be calculated from a linear combination of the input variables (X). When there is a single input variable, the method is referred to as simple linear regression.
In simple linear regression we can use statistics on the training data to estimate the coefficients required by the model to make predictions on new data. The straight line for a simple linear regression model can be written as:
y = b0 + b1 * x
where b0 and b1 are the coefficients we must estimate from the training data. Once the coefficients are known, we can use this equation to estimate output values for y given new input examples of x. It requires that you calculate statistical properties of the data, such as the mean, variance and covariance.
All the algebra has been worked out and we are left with just some arithmetic to implement the estimation of the simple linear regression coefficients. Briefly, we can estimate the coefficients as follows:
B1 = sum( (x_i - mean(x)) * (y_i - mean(y)) ) / sum( (x_i - mean(x))^2 )
B0 = mean(y) - B1 * mean(x)
where i refers to the i-th value of the input x or output y. Do not worry if this is not clear right now; these are the functions we will implement in this tutorial.
2. Tutorial Steps
This tutorial is divided into five parts:
Calculate Mean and Variance.
Calculate Covariance.
Estimate Coefficients.
Make Predictions.
Case study on the Swedish auto insurance dataset.
These steps will give you the foundation you need to implement and train simple linear regression models for your own prediction problems.
2.1 Calculate Mean and Variance
The first step is to estimate the mean and the variance of the input and output variables from the training data. The mean of a list of numbers can be calculated as:
mean(x) = sum(x) / count(x)
Below is a function called mean() that implements this behaviour for a list of numbers.
End of explanation
# Calculate the variance (here the sum of squared deviations from the mean) of a list of numbers
def variance(values, mean):
return sum([(x-mean)**2 for x in values])
Explanation: The variance is the sum of the squared differences of each value from the mean value. The variance for a list of numbers can be calculated as:
variance = sum( (x_i - mean(x))^2 )
Below is a function called variance() that calculates the variance of a list of numbers. It requires that the mean of the list be provided as an argument, just so that we do not have to calculate it more than once.
End of explanation
## PLACE YOUR CODE HERE
## PLOT THE DATA HERE
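# One possible solution sketch (added for illustration; not part of the original exercise notebook).
# The small x/y dataset is taken from the exercise statement below.
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
mean_x, mean_y = mean(x), mean(y)
print('x: mean=%.3f variance=%.3f' % (mean_x, variance(x, mean_x)))
print('y: mean=%.3f variance=%.3f' % (mean_y, variance(y, mean_y)))
import matplotlib.pyplot as plt
plt.scatter(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()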
Explanation: Exercise 1
(a) Combine the two functions above and test them on a given small dataset. Use as an example the small dataset of x and y values below.
x | y
--| -
1 | 1
2 | 3
4 | 3
3 | 2
5 | 5
(b) Then create a plot of these points
End of explanation
# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
covar = 0.0
for i in range(len(x)):
covar += (x[i] - mean_x) * (y[i] - mean_y)
return covar
Explanation: 2.2 Calculate Covariance
The covariance of two groups of numbers describes how those numbers change together. Covariance is a generalization of correlation. Correlation describes the relationship between two groups of numbers, whereas covariance can describe the relationship between two or more groups of numbers. In addition, covariance can be normalized to produce a correlation value. For now, we can calculate the covariance between two variables as follows:
covariance(x, y) = sum( (x_i - mean(x)) * (y_i - mean(y)) )
Below is a function called covariance() that implements this statistic. It builds on the previous step and takes the lists of x and y values, as well as the means of those values, as arguments.
End of explanation
## PLACE YOUR CODE HERE
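# One possible solution sketch (added for illustration; not part of the original exercise notebook).
x = [1, 2, 4, 3, 5]
y = [1, 3, 3, 2, 5]
print('Covariance: %.3f' % covariance(x, mean(x), y, mean(y)))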
Explanation: Exercise 2
Test the covariance calculation on the same small dataset presented in the previous section.
End of explanation
# Calculate coefficients
def coefficients(dataset):
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
x_mean, y_mean = mean(x), mean(y)
b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
b0 = y_mean - b1 * x_mean
return [b0, b1]
Explanation: 2.3 Estimating the Coefficients
Now we must estimate the values of the two coefficients in simple linear regression. The first is B1, which can be estimated as:
B1 = sum( (x_i - mean(x)) * (y_i - mean(y)) ) / sum( (x_i - mean(x))^2 )
We can simplify this formula using the covariance and variance functions presented above, as in the formula below:
B1 = covariance(x, y) / variance(x)
Next, we need to estimate a value for B0, also called the intercept, as it controls the starting point of the line where it intersects the y axis:
B0 = mean(y) - B1 * mean(x)
Once again, we know how to estimate B1 and we have a function to estimate the mean(). We can put all of this together into a function named coefficients() that takes the dataset as an argument and returns the coefficients.
End of explanation
## PLACE YOUR CODE HERE
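# One possible solution sketch (added for illustration; not part of the original exercise notebook).
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
b0, b1 = coefficients(dataset)
print('Coefficients: B0=%.3f, B1=%.3f' % (b0, b1))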
Explanation: Exercise 3
Extend the previous exercise to include the calculation of the coefficients for the synthesized data.
End of explanation
def simple_linear_regression(train, test):
predictions = list()
b0, b1 = coefficients(train)
for row in test:
ypred = b0 + b1 * row[0]
predictions.append(ypred)
return predictions
Explanation: 2.4 Making Predictions
The simple linear regression model is a line defined by the coefficients estimated from the training data. Once the coefficients are estimated, we can use them to make predictions. The equation for making predictions with a simple linear regression model is as follows:
y = b0 + b1 * x
Below is a function named simple_linear_regression() that implements the prediction equation to make predictions on a test dataset. It also ties together the estimation of the coefficients on the training data from the steps above. The coefficients prepared from the training data are used to make predictions on the test data, which are then returned.
End of explanation
from math import sqrt
# Calculate root mean squared error
def rmse_metric(actual, predicted):
sum_error = 0.0
for i in range(len(actual)):
prediction_error = predicted[i] - actual[i]
sum_error += (prediction_error ** 2)
mean_error = sum_error / float(len(actual))
return sqrt(mean_error)
# Evaluate regression algorithm on training dataset
def evaluate_algorithm(dataset, algorithm):
test_set = list()
for row in dataset:
row_copy = list(row)
row_copy[-1] = None
test_set.append(row_copy)
predicted = algorithm(dataset, test_set)
print(predicted)
actual = [row[-1] for row in dataset]
rmse = rmse_metric(actual, predicted)
return rmse
Explanation: To evaluate the model
we will add a function to manage the evaluation of the predictions, called evaluate_algorithm(), and another function to estimate the root mean squared error of the predictions, called rmse_metric(). See the functions below:
End of explanation
## PLACE YOUR CODE HERE
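# One possible solution sketch (added for illustration; not part of the original exercise notebook).
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
rmse = evaluate_algorithm(dataset, simple_linear_regression)
print('RMSE: %.3f' % rmse)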
Explanation: Exercise 4
Now put together everything that has been created to make predictions for our test dataset.
End of explanation
## PLACE YOUR CODE HERE
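# One possible solution sketch (added for illustration; not part of the original exercise notebook).
import matplotlib.pyplot as plt
dataset = sorted([[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]])
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
predictions = simple_linear_regression(dataset, dataset)
plt.scatter(x, y, label='original data')
plt.plot(x, predictions, color='red', label='predictions')
plt.legend()
plt.show()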
Explanation: Exercise 5
Create a scatter plot showing the predictions as a line and compare it with the original dataset.
End of explanation |
10,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Deploying and predicting with model </h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create the model using ai-platform CLI commands</li>
<li>Deploy the ML model to production</li>
<li>Perform predictions with the model</li>
</ol>
TODO
Step1: <h2> Deploy trained model </h2>
<p>
Deploying the trained model to act as a REST web service is a simple gcloud call.
Step2: <h2> Use model to predict (online prediction) </h2>
<p>
Send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
Step3: The predictions for the four instances were | Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.1'
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/trained_model; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical model if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight
fi
Explanation: <h1> Deploying and predicting with model </h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create the model using ai-platform CLI commands</li>
<li>Deploy the ML model to production</li>
<li>Perform predictions with the model</li>
</ol>
TODO: Complete the lab notebook #TODO sections. You can refer to the solutions/ notebook for reference.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
# Optional: Delete the version of the model if it already exists:
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
# TODO: Create the model
gcloud ai-platform models create
# TODO: Create the model version
gcloud ai-platform versions create
Explanation: <h2> Deploy trained model </h2>
<p>
Deploying the trained model to act as a REST web service is a simple gcloud call.
End of explanation
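For reference, one possible completion of the TODO lines in the cell above is sketched below. The exact flags are an assumption based on standard AI Platform usage and are not taken from this lab; the solutions/ notebook remains the authoritative reference.
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
# Hypothetical completion of the TODOs:
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
    --model ${MODEL_NAME} \
    --origin ${MODEL_LOCATION} \
    --runtime-version ${TFVERSION} \
    --python-version 3.7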
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = 'babyweight'
MODEL_VERSION = 'ml_on_gcp'
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = 'https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict' \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {'Authorization': 'Bearer ' + token }
data = {
'instances': [
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
{
'key': 'g1',
'is_male': 'False',
'mother_age': 29.0,
'plurality': 'Single(1)',
'gestation_weeks': 38
},
{
'key': 'b2',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Triplets(3)',
'gestation_weeks': 39
},
{
'key': 'u1',
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
},
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
Explanation: <h2> Use model to predict (online prediction) </h2>
<p>
Send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
End of explanation
%%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT --region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight --version=ml_on_gcp
Explanation: The predictions for the four instances were: 7.66, 7.22, 6.32 and 6.19 pounds respectively when I ran it (your results might be different).
<h2> Use model to predict (batch prediction) </h2>
<p>
Batch prediction is commonly used when you have thousands to millions of predictions to make.
Create a file with one instance per line and submit using gcloud.
End of explanation |
10,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook for Airconics examples
This IPython notebook contains examples for generating and rendering the AirCONICS parametric transonic airliner example using the interactive WebServer from PythonOCC-contrib. Parts are generated under their respective headings and rendered collectively in the final cells.
For examples using the pythonocc-core Qt viewer, refer to the airconics examples/core directory
Step1: Parameter Definitions
Parameters used here correspond to a geometry similar to that of the Boeing 787-8
Step2: Wing, Transonic Airliner
Formulation of lifting surfaces in occ_airconics (and AirCONICS) follows the suggestions in Sobester [1] in which geometry--attached curvilinear functionals are used instead of parameters for shape definition. That is, $G(\textbf{f}, \textbf{X})$, where
$$\qquad \textbf{f} = \left[ f_1(\textbf{X}_1), f_2(\textbf{X}_2), ... f_m(\textbf{X}_m)\right],$$
and
$$\textbf{X}_i = \left[x_1^i, x_2^i,...\right], \forall i = 1,...m$$
as opposed to the conventional $G(\bf{X})$ formulation where the shape $G$ changes in response to changes in design parameters $\textbf{X}$. The functions $f_i$ are defined by
Step3: Tailplane, Transonic Airliner
The same Lifting Surface class is used here to generate the fin and tailplane of the aircraft, using a different set of input functionals (also defined in airconics.examples).
Step4: Fuselage Transonic Airliner
Fuselage shapes are created following the parameterisation used in Sobester [2]. That is, the outer mould line (OML) is split into a Nose, Central and Tail section, the length of which is described on input to the Fuselage class as a percentage of the total length. Rib curves are then formed by fitting a NURBS curve to the intersection points of sectional planar cuts and the guide curves of the extremities of the OML, e.g. port, top and bottom curves. The OML is fitted in occ_airconics using the Open CASCADE ThruSections loft.
Step5: Wing-Body Fairing
Step6: Engine + Pylon
First, obtain the wing section and chord at which the engine will be fitted, then fit the engine. The default inputs to the Engine class produce a turbofan engine with a nacelle similar to that of the RR Trent 1000 / GEnx and its pylon (currently a flat plate only).
Step7: Miscellaneous operations
Step8: Ipython Cell Renderer
Step9: Development
Topology model
This is a work in progress towards a topologically flexible model based on the tree-type definition described in Sobester [1]. Note, however, that the geometry is not currently defined by the tree; the tree is simply stored as a result of adding components - this is for demonstration only, and the process is yet to be automated.
The $xz$ mirror plane is included in this representation, between central objects (Fuselage, Fin) and the mirrored objects (Tail Plane, Wing, Engine).
Step10: Let's try some further tests to the topology class representation using some other examples. For now, these are empty geometries, and inputs to the Fuselage, LiftingSurface and Engine classes are not yet included in the Topology tree.
Predator UAV
Photo source
Step11: Fairchild Republic A-10 Thunderbolt
Photo source
Step12: Scaled Composites Proteus
Photo source | Python Code:
from airconics import LiftingSurface, Engine, Fuselage
import airconics.AirCONICStools as act
from airconics.Addons.WebServer.TornadoWeb import TornadoWebRenderer
from IPython.display import display
Explanation: Notebook for Airconics examples
This IPython notebook contains examples for generating and rendering the AirCONICS parametric transonic airliner example using the interactive WebServer from PythonOCC-contrib. Parts are generated under their respective headings and rendered collectively in the final cells.
For examples using the pythonocc-core Qt viewer, refer to the airconics examples/core directory
End of explanation
Propulsion = 1
EngineDia = 2.9
FuselageScaling = [55.902, 55.902, 55.902]
WingScaleFactor = 44.56
WingChordFactor = 1.0
Topology = 1
EngineSpanStation = 0.31
EngineCtrBelowLE = 0.3558
EngineCtrFwdOfLE = 0.9837
Scarf_deg = 3
# Derived Parameters
FuselageHeight = FuselageScaling[2]*0.105
FuselageLength = FuselageScaling[0]
FuselageWidth = FuselageScaling[1]*0.106
WingApex = [0.1748*FuselageLength,0,-0.0523*FuselageHeight]
# Fin:
FinChordFact = 1.01
FinScaleFact = WingScaleFactor/2.032
# TailPlane
TPChordFact = 1.01
TPScaleFact = WingScaleFactor * 0.388
# Engine:
NacelleLength = 1.95*EngineDia
Explanation: Parameter Definitions
Parameters used here correspond to a geometry similar to that of the Boeing 787-8
End of explanation
# Import all example functional definitions for the Common Research Model (CRM) Wing:
from airconics.examples.wing_example_transonic_airliner import *
# Position of the apex of the wing
P = WingApex
# Class definition
NSeg = 11
ChordFactor = 1
ScaleFactor = 50
# Generate (surface building is done during construction of the class)
Wing = LiftingSurface(P, mySweepAngleFunctionAirliner,
myDihedralFunctionAirliner,
myTwistFunctionAirliner,
myChordFunctionAirliner,
myAirfoilFunctionAirliner,
SegmentNo=NSeg,
ScaleFactor=WingScaleFactor,
ChordFactor=WingChordFactor)
RootChord = Wing.RootChord
# Display
renderer = TornadoWebRenderer()
Wing.Display(renderer)
display(renderer)
Explanation: Wing, Transonic Airliner
Formulation of lifting surfaces in occ_airconics (and AirCONICS) follows the suggestions in Sobester [1] in which geometry--attached curvilinear functionals are used instead of parameters for shape definition. That is, $G(\textbf{f}, \textbf{X})$, where
$$\qquad \textbf{f} = \left[ f_1(\textbf{X}_1), f_2(\textbf{X}_2), ... f_m(\textbf{X}_m)\right],$$
and
$$\textbf{X}_i = \left[x_1^i, x_2^i,...\right], \forall i = 1,...m$$
as opposed to the conventional $G(\bf{X})$ formulation where the shape $G$ changes in response to changes in design parameters $\textbf{X}$. The functions $f_i$ are defined by:
$Sweep (\epsilon)$
$Chord (\epsilon)$
$Rotation (\epsilon)$
$Twist (\epsilon)$
$Airfoil (\epsilon)$
where $\epsilon$ represents the spanwise coordinate ranging from 0 at the root of the wing to 1 at the tip. Output of the airfoil function uses the airconics.primitives.Airfoil class here, which fits a NURBS curve to airfoil coordinates.
The following code demonstrates construction of a wing using built in examples for a transonic airliner wing and tailplane (below).
End of explanation
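To make the idea of a spanwise functional concrete, a toy definition might look like the sketch below. This is illustrative only; the actual airliner functionals such as myChordFunctionAirliner are defined in airconics.examples and are considerably more involved.
def myToyChordFunction(epsilon):
    # Hypothetical linear taper from a root chord of 1.0 to a tip chord of 0.3,
    # with epsilon running from 0 (root) to 1 (tip).
    return 1.0 - 0.7 * epsilon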
from OCC.gp import gp_Ax1, gp_Pnt, gp_Dir
from airconics.examples.tailplane_example_transonic_airliner import *
# Position of the apex of the fin
P = [36.98-0.49-0.02, 0.0, 2.395-0.141]
SegmentNo = 10
Fin = liftingsurface.LiftingSurface(P, mySweepAngleFunctionFin,
myDihedralFunctionFin,
myTwistFunctionFin,
myChordFunctionFin,
myAirfoilFunctionFin,
SegmentNo=SegmentNo,
ChordFactor=FinChordFact,
ScaleFactor=FinScaleFact)
# Create the rotation axis centered at the apex point in the x direction
RotAxis = gp_Ax1(gp_Pnt(*P), gp_Dir(1, 0, 0))
Fin.RotateComponents(RotAxis, 90)
# Position of the apex of the tailplane
P = [43, 0.000, 1.633+0.02]
SegmentNo = 100
ChordFactor = 1.01
ScaleFactor = 17.3
TP = liftingsurface.LiftingSurface(P, mySweepAngleFunctionTP,
myDihedralFunctionTP,
myTwistFunctionTP,
myChordFunctionTP,
myAirfoilFunctionTP,
SegmentNo=SegmentNo,
ChordFactor=TPChordFact,
ScaleFactor=TPScaleFact)
# Display
renderer = TornadoWebRenderer()
Fin.Display(renderer)
TP.Display(renderer)
display(renderer)
Explanation: Tailplane, Transonic Airliner
The same Lifting Surface class is used here to generate the fin and tailplane of the aircraft, using a different set of input functionals (also defined in airconics.examples).
End of explanation
NoseLengthRatio=0.182
TailLengthRatio=0.293
Fus = Fuselage(NoseLengthRatio, TailLengthRatio, Scaling=FuselageScaling,
NoseCoordinates=[0., 0., 0],
CylindricalMidSection=False,
Maxi_attempt=5)
# Display
renderer = TornadoWebRenderer()
Fus.Display(renderer)
display(renderer)
# Export (can be commented out)
# act.export_STEPFile([Fus['OML']], 'fuselage.stp')
Explanation: Fuselage Transonic Airliner
Fuselage shapes are created following the parameterisation used in Sobester [2]. That is, the outer mould line (OML) is split into a Nose, Central and Tail section, the length of which is described on input to the Fuselage class as a percentage of the total length. Rib curves are then formed by fitting a NURBS curve to the intersection points of sectional planar cuts and the guide curves of the extremities of the OML, e.g. port, top and bottom curves. The OML is fitted in occ_airconics using the Open CASCADE ThruSections loft.
End of explanation
# WingBodyFairing - A simple ellipsoid:
from airconics.base import AirconicsShape
WTBFZ = RootChord*0.009 #787: 0.2
WTBFheight = 1.8*0.1212*RootChord #787:2.7
WTBFwidth = 1.08*FuselageWidth
WTBFXCentre = WingApex[0] + RootChord/2.0 + RootChord*0.1297 # 787: 23.8
WTBFlength = 1.167*RootChord #787:26
WBF_shape = act.make_ellipsoid([WTBFXCentre, 0, WTBFZ], WTBFlength, WTBFwidth, WTBFheight)
WBF = AirconicsShape(components={'WBF': WBF_shape})
Explanation: Wing-Body Fairing:
The wing-body fairing is here created as a simple ellipsoid shape around the root section of the wing.
Note that this component will be displayed only in the final model.
End of explanation
EngineSection, HChord = act.CutSect(Wing['Surface'], EngineSpanStation)
Chord = HChord.GetObject()
CEP = Chord.EndPoint()
Centreloc = [CEP.X()-EngineCtrFwdOfLE*NacelleLength,
CEP.Y(),
CEP.Z()-EngineCtrBelowLE*NacelleLength]
eng = Engine(HChord,
CentreLocation=Centreloc,
ScarfAngle=Scarf_deg,
HighlightRadius=EngineDia/2.0,
MeanNacelleLength=NacelleLength)
# Display
renderer = TornadoWebRenderer()
eng.Display(renderer)
display(renderer)
Explanation: Engine + Pylon
First, obtain the wing section and chord at which the engine will be fitted, then fit the engine. The default inputs to the Engine class produce a turbofan engine with a nacelle similar to that of the RR Trent 1000 / GEnx and its pylon (currently a flat plate only).
End of explanation
# Trim the inboard section of the main wing:
CutCirc = act.make_circle3pt([0,WTBFwidth/4.,-45], [0,WTBFwidth/4.,45], [90,WTBFwidth/4.,0])
CutCircDisk = act.PlanarSurf(CutCirc)
Wing['Surface'] = act.TrimShapebyPlane(Wing['Surface'], CutCircDisk)
#Mirror the main wing and tailplane using class methods:
Wing2 = Wing.MirrorComponents(plane='xz')
TP2 = TP.MirrorComponents(plane='xz')
eng2 = eng.MirrorComponents(plane='xz')
Explanation: Miscellaneous operations
End of explanation
renderer = TornadoWebRenderer()
# display all entities:
# Fuselage and wing-body fairing
Fus.Display(renderer)
WBF.Display(renderer)
# #The Wings:
Wing.Display(renderer)
Wing2.Display(renderer)
#The Tailplane:
TP.Display(renderer)
TP2.Display(renderer)
#The Fin:
Fin.Display(renderer)
#The Engines:
eng.Display(renderer)
eng2.Display(renderer)
# Finally show the renderer
display(renderer)
Explanation: Ipython Cell Renderer:
End of explanation
from airconics import Topology
from IPython.display import Image
import pydot
topo_renderer = TornadoWebRenderer()
topo = Topology()
# Note: no checks are done on the validity of the tree yet,
topo.AddPart(Fus, 'Fuselage', 3)
topo.AddPart(Fin, 'Fin', 0)
# Need to add a mirror plane here, arity zero
from OCC.gp import gp_Ax2, gp_Dir, gp_Pnt
xz_pln = gp_Ax2(gp_Pnt(0, 0, 0), gp_Dir(0, 1, 0))
topo.AddPart(xz_pln, 'Mirror', 0)
# These are the mirrored entities, with their arities
topo.AddPart(TP, 'Tail Plane', 0)
topo.AddPart(Wing, 'Wing', 1)
topo.AddPart(eng, 'Engine', 0)
# print the Topology (resembles a LISP tree)
print(topo)
# Create the graph with pydot
graph = pydot.graph_from_dot_data(topo.export_graphviz())
Image(graph.create_png())
# This line will mirror geometry 'under' (added after) the mirror plane
topo.Build()
topo.Display(topo_renderer)
display(topo_renderer)
Explanation: Development
Topology model
This is a work in progress towards a topologically flexible model based on the tree-type definition described in Sobester [1]. Note, however, that the geometry is not currently defined by the tree; the tree is simply stored as a result of adding components - this is for demonstration only, and the process is yet to be automated.
The $xz$ mirror plane is included in this representation, between central objects (Fuselage, Fin) and the mirrored objects (Tail Plane, Wing, Engine).
End of explanation
# Setup
# Create mock components, without generating any geometry
fus = Fuselage(construct_geometry=False)
engine = Engine(construct_geometry=False)
fin = LiftingSurface(construct_geometry=False)
mirror_pln = gp_Ax2()
wing = LiftingSurface(construct_geometry=False)
Vfin = LiftingSurface(construct_geometry=False)
# For now we must manually add parts and affinities
topo = Topology()
topo.AddPart(fus, 'Fuselage', 4)
topo.AddPart(engine, 'engine', 0)
topo.AddPart(fin, 'fin', 0)
topo.AddPart(mirror_pln, 'mirror_pln', 0)
topo.AddPart(wing, 'wing', 0)
topo.AddPart(Vfin, 'V-Fin', 0)
print(topo)
graph = pydot.graph_from_dot_data(topo.export_graphviz())
Image(graph.create_png())
Explanation: Let's try some further tests to the topology class representation using some other examples. For now, these are empty geometries, and inputs to the Fuselage, LiftingSurface and Engine classes are not yet included in the Topology tree.
Predator UAV
Photo source: US Air Force
End of explanation
# Setup
# Create mock components, without generating any geometry
fus = Fuselage(construct_geometry=False)
mirror_pln = gp_Ax2()
engine = Engine(construct_geometry=False)
wing = LiftingSurface(construct_geometry=False)
tailplane = LiftingSurface(construct_geometry=False)
tail_fin = LiftingSurface(construct_geometry=False)
topo = Topology()
topo.AddPart(fus, 'Fuselage', 3)
topo.AddPart(mirror_pln, 'mirror', 0)
topo.AddPart(engine, 'powerplant', 0)
topo.AddPart(tailplane, 'Tailplane', 1)
topo.AddPart(tail_fin, "Tail fin", 0)
topo.AddPart(wing, "wing", 0)
print(topo)
graph = pydot.graph_from_dot_data(topo.export_graphviz())
Image(graph.create_png())
Explanation: Fairchild Republic A-10 Thunderbolt
Photo source: Airman Magazine 1999
End of explanation
# Setup
# Create mock components, without generating any geometry
fus = Fuselage(construct_geometry=False)
mirror_pln = gp_Ax2()
engine = Engine(construct_geometry=False)
wing_in = LiftingSurface(construct_geometry=False)
tailplane = LiftingSurface(construct_geometry=False)
pod = Fuselage(construct_geometry=False)
finup = LiftingSurface(construct_geometry=False)
findown = LiftingSurface(construct_geometry=False)
wing_out = LiftingSurface(construct_geometry=False)
topo = Topology()
topo.AddPart(fus, 'Fuselage', 3)
topo.AddPart(mirror_pln, 'mirror', 0)
topo.AddPart(engine, 'powerplant', 0)
topo.AddPart(wing, "wing", 0)
topo.AddPart(wing_in, "TP/inbbd wing", 1)
topo.AddPart(pod, 'Pod/tail boom', 3)
topo.AddPart(wing_out, "outbd wing", 0)
topo.AddPart(finup, "Fin (up)", 0)
topo.AddPart(findown, "Fin (down)", 0)
for node in topo._Tree:
print(node)
graph = pydot.graph_from_dot_data(topo.export_graphviz())
Image(graph.create_png())
Explanation: Scaled Composites Proteus
Photo source: NASA
End of explanation |
10,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
New functions
These are recently written functions that have not made it into the main documentation
Python Lesson
Step1: When things go wrong in your eppy script, you get "Errors and Exceptions".
To know more about how this works in python and eppy, take a look at Python
Step2: Now let us open file fname1 without setting the idd file
Step3: OK. It does not let you do that and it raises an exception
So let us set the idd file and then open the idf file
Step4: That worked without raising an exception
Now let us try to change the idd file. Eppy should not let you do this and should raise an exception.
Step5: Excellent!! It raised the exception we were expecting.
Check range for fields
The fields of idf objects often have a range of legal values. The following functions will let you discover what that range is and test if your value lies within that range
demonstrate two new functions
Step6: Let us set these values outside the range and see what happens
Step7: So the Range Check works
Looping through all the fields in an idf object
We have seen how to check the range of field in the idf object. What if you want to do a range check on all the fields in an idf object ? To do this we will need a list of all the fields in the idf object. We can do this easily by the following line
Step8: So let us use this
Step9: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
Step10: You see, we caught the out of range value
Blank idf file
Until now in all our examples, we have been reading an idf file from disk
Step11: It did not print anything. Why should it. It was empty.
What if we give it a string that was not blank
Step12: Aha !
Now let us give it a file name
Step13: Let us confirm that the file was saved to disk
Step14: Yup ! that file was saved. Let us delete it since we were just playing
Step15: Deleting, copying/adding and making new idfobjects
Making a new idf object
Let us start with a blank idf file and make some new "MATERIAL" objects in it
Step16: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
Step17: Let us give this a name, say "Shiny new material object"
Step18: Let us look at all the "MATERIAL" objects
Step19: As we can see there are three MATERIAL idfobjects. They are
Step20: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
Step21: So we have two ways of deleting an idf object
Step22: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Making an idf object with named arguments
What if we wanted to make an idf object with values for its fields? We can do that too.
Step23: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
Step24: Renaming an idf object
It is easy to rename an idf object. If we want to rename the gypboard object that we created above, we simply say
Step25: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
Step26: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entire idf file, just to be sure
Step27: Turn off default values
Can I turn off the default values? Yes you can
Step28: But why would you want to turn it off.
Well .... sometimes you have to
Try it with the object DAYLIGHTING
Step29: Can we do the same for zones ?
Not yet .. not yet. Not in this version of eppy
But we can still get the area and volume of the zone
Step30: Not as slick, but still pretty easy
Some notes on the zone area calculation
Step31: Compare the first printidf() and the second printidf().
The syntax of the json string is described below
Step32: What if your object name had a dot (.) in it? Will the json_function get confused?
If the name has a dot in it, there are two ways of doing this.
Step33: Note: When you use the json update function
Step34: You have to find the IDD file on your hard disk.
Then set the IDD using setiddname(iddfile).
Now you can open the IDF file
Why can’t you just open the IDF file without jumping thru all those hoops. Why do you have to find the IDD file. What is the point of having a computer, if it does not do the grunt work for you.
The function easyopen will do the grunt work for you. It will automatically read the version number from the IDF file, locate the correct IDD file and set it in eppy and then open your file. It works like this
Step35: For this to work,
the IDF file should have the VERSION object. You may not have this if you are just working on a file snippet.
you need to have the version of EnergyPlus installed that matches your IDF version.
Energyplus should be installed in the default location.
If easyopen does not work, use the long winded steps shown in the tutorial. That is guaranteed to work
Other miscellaneous functions
Fan power in Watts, BHP and fan cfm
We normally think of fan power in terms of Brake Horsepower (BHP), Watts. Also when working with IP units it is useful to think of fan flow volume in terms of cubic feet per minute (cfm).
Energyplus does not have fields for those values. With eppy we have functions that will calculate the values
fan power in BHP
fan power in Watts
fan flow in CFM
It will work for the following objects | Python Code:
# you would normaly install eppy by doing
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)
Explanation: New functions
These are recently written functions that have not made it into the main documentation
Python Lesson: Errors and Exceptions
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
Explanation: When things go wrong in your eppy script, you get "Errors and Exceptions".
To know more about how this works in python and eppy, take a look at Python: Errors and Exceptions
Setting IDD name
When you work with Energyplus you are working with idf files (files that have the extension *.idf). There is another file that is very important, called the idd file. This is the file that defines all the objects in Energyplus. Each version of Energyplus has a different idd file.
So eppy needs to know which idd file to use. Only one idd file can be used in a script or program. This means that you cannot change the idd file once you have selected it. Of course you have to first select an idd file before eppy can work.
If you use eppy and break the above rules, eppy will raise an exception. So let us use eppy incorrectly and make eppy raise the exception, just to see how that happens.
First let us try to open an idf file without setting an idd file.
End of explanation
try:
idf1 = IDF(fname1)
except Exception, e:
raise e
Explanation: Now let us open file fname1 without setting the idd file
End of explanation
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
IDF.setiddname(iddfile)
idf1 = IDF(fname1)
Explanation: OK. It does not let you do that and it raises an exception
So let us set the idd file and then open the idf file
End of explanation
try:
IDF.setiddname("anotheridd.idd")
except Exception, e:
raise e
Explanation: That worked without raising an exception
Now let us try to change the idd file. Eppy should not let you do this and should raise an exception.
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
# IDF.setiddname(iddfile)# idd ws set further up in this page
idf1 = IDF(fname1)
building = idf1.idfobjects['building'.upper()][0]
print building
print building.getrange("Loads_Convergence_Tolerance_Value")
print building.checkrange("Loads_Convergence_Tolerance_Value")
Explanation: Excellent!! It raised the exception we were expecting.
Check range for fields
The fields of idf objects often have a range of legal values. The following functions will let you discover what that range is and test if your value lies within that range
demonstrate two new functions:
EpBunch.getrange(fieldname) # will return the ranges for that field
EpBunch.checkrange(fieldname) # will throw an exception if the value is outside the range
End of explanation
building.Loads_Convergence_Tolerance_Value = 0.6
from eppy.bunch_subclass import RangeError
try:
print building.checkrange("Loads_Convergence_Tolerance_Value")
except RangeError, e:
raise e
Explanation: Let us set these values outside the range and see what happens
End of explanation
print building.fieldnames
Explanation: So the Range Check works
Looping through all the fields in an idf object
We have seen how to check the range of field in the idf object. What if you want to do a range check on all the fields in an idf object ? To do this we will need a list of all the fields in the idf object. We can do this easily by the following line
End of explanation
for fieldname in building.fieldnames:
print "%s = %s" % (fieldname, building[fieldname])
Explanation: So let us use this
End of explanation
from eppy.bunch_subclass import RangeError
for fieldname in building.fieldnames:
try:
building.checkrange(fieldname)
print "%s = %s #-in range" % (fieldname, building[fieldname],)
except RangeError as e:
print "%s = %s #-****OUT OF RANGE****" % (fieldname, building[fieldname],)
Explanation: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
End of explanation
# some initial steps
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
# IDF.setiddname(iddfile) # Has already been set
# - Let us first open a file from the disk
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
idf_fromfilename = IDF(fname1) # initialize the IDF object with the file name
idf_fromfilename.printidf()
# - now let us open a file from the disk differently
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
fhandle = open(fname1, 'r') # open the file for reading and assign it a file handle
idf_fromfilehandle = IDF(fhandle) # initialize the IDF object with the file handle
idf_fromfilehandle.printidf()
# So IDF object can be initialized with either a file name or a file handle
# - How do I create a blank new idf file
idftxt = "" # empty string
from StringIO import StringIO
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_emptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_emptyfile.printidf()
Explanation: You see, we caught the out of range value
Blank idf file
Until now in all our examples, we have been reading an idf file from disk:
How do I create a blank new idf file
give it a file name
Save it to the disk
Here are the steps to do that
End of explanation
# - The string does not have to be blank
idftxt = "VERSION, 7.3;" # Not an emplty string. has just the version number
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_notemptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_notemptyfile.printidf()
Explanation: It did not print anything. Why should it. It was empty.
What if we give it a string that was not blank
End of explanation
# - give it a file name
idf_notemptyfile.idfname = "notemptyfile.idf"
# - Save it to the disk
idf_notemptyfile.save()
Explanation: Aha !
Now let us give it a file name
End of explanation
txt = open("notemptyfile.idf", 'r').read()# read the file from the disk
print txt
Explanation: Let us confirm that the file was saved to disk
End of explanation
import os
os.remove("notemptyfile.idf")
Explanation: Yup ! that file was saved. Let us delete it since we were just playing
End of explanation
# making a blank idf object
blankstr = ""
from StringIO import StringIO
idf = IDF(StringIO(blankstr))
Explanation: Deleting, copying/adding and making new idfobjects
Making a new idf object
Let us start with a blank idf file and make some new "MATERIAL" objects in it
End of explanation
newobject = idf.newidfobject("material".upper()) # the key for the object type has to be in upper case
# .upper() makes it upper case
print newobject
Explanation: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
End of explanation
newobject.Name = "Shiny new material object"
print newobject
anothermaterial = idf.newidfobject("material".upper())
anothermaterial.Name = "Lousy material"
thirdmaterial = idf.newidfobject("material".upper())
thirdmaterial.Name = "third material"
print thirdmaterial
Explanation: Let us give this a name, say "Shiny new material object"
End of explanation
print idf.idfobjects["MATERIAL"]
Explanation: Let us look at all the "MATERIAL" objects
End of explanation
idf.popidfobject('MATERIAL', 1) # first material is '0', second is '1'
print idf.idfobjects['MATERIAL']
Explanation: As we can see there are three MATERIAL idfobjects. They are:
Shiny new material object
Lousy material
third material
Deleting an idf object
Let us remove 2. Lousy material. It is the second material in the list. So let us remove the second material
End of explanation
firstmaterial = idf.idfobjects['MATERIAL'][-1]
idf.removeidfobject(firstmaterial)
print idf.idfobjects['MATERIAL']
Explanation: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
End of explanation
onlymaterial = idf.idfobjects["MATERIAL"][0]
idf.copyidfobject(onlymaterial)
print idf.idfobjects["MATERIAL"]
Explanation: So we have two ways of deleting an idf object:
popidfobject -> give it the idf key: "MATERIAL", and the index number
removeidfobject -> give it the idf object to be deleted
Copying/Adding an idf object
Having deleted two "MATERIAL" objects, we have only one left. Let us make a copy of this object and add it to our idf file
End of explanation
gypboard = idf.newidfobject('MATERIAL', Name="G01a 19mm gypsum board",
Roughness="MediumSmooth",
Thickness=0.019,
Conductivity=0.16,
Density=800,
Specific_Heat=1090)
print gypboard
Explanation: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Making an idf object with named arguments
What if we wanted to make an idf object with values for its fields? We can do that too.
End of explanation
print idf.idfobjects["MATERIAL"]
Explanation: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
End of explanation
interiorwall = idf.newidfobject("CONSTRUCTION", Name="Interior Wall",
Outside_Layer="G01a 19mm gypsum board",
Layer_2="Shiny new material object",
Layer_3="G01a 19mm gypsum board")
print interiorwall
Explanation: Renaming an idf object
It is easy to rename an idf object. If we want to rename the gypboard object that we created above, we simply say:
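gypboard.Name = "a new name"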
But this could create a problem. What if this gypboard is part of a "CONSTRUCTION" object. The construction object will refer to the gypboard by name. If we change the name of the gypboard, we should change it in the construction object.
But there may be many construction objects using the gypboard. Now we will have to change it in all those construction objects. Sounds painful.
Let us try this with an example:
End of explanation
modeleditor.rename(idf, "MATERIAL", "G01a 19mm gypsum board", "peanut butter")
print interiorwall
Explanation: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
End of explanation
idf.printidf()
Explanation: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entire idf file, just to be sure
End of explanation
defaultmaterial = idf.newidfobject("MATERIAL",
Name='with default')
print defaultmaterial
nodefaultmaterial = idf.newidfobject("MATERIAL",
Name='Without default',
defaultvalues=False)
print nodefaultmaterial
Explanation: Turn off default values
Can I turn off the default values? Yes you can:
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/box.idf"
# IDF.setiddname(iddfile)
idf = IDF(fname1)
surfaces = idf.idfobjects["BuildingSurface:Detailed".upper()]
surface = surfaces[0]
print "area = %s" % (surface.area, )
print "tilt = %s" % (surface.tilt, )
print "azimuth = %s" % (surface.azimuth, )
Explanation: But why would you want to turn it off.
Well .... sometimes you have to
Try it with the object DAYLIGHTING:CONTROLS, and you will see the need for defaultvalues=False
Of course, internally EnergyPlus will still use the default values if the field is left blank. It just won't turn up in the IDF file.
Zone area and volume
The idf file has zones with surfaces and windows. It is easy to get the attributes of the surfaces and windows as we have seen in the tutorial. Let us review this once more:
End of explanation
zones = idf.idfobjects["ZONE"]
zone = zones[0]
area = modeleditor.zonearea(idf, zone.Name)
volume = modeleditor.zonevolume(idf, zone.Name)
print "zone area = %s" % (area, )
print "zone volume = %s" % (volume, )
Explanation: Can we do the same for zones ?
Not yet .. not yet. Not in this version of eppy
But we can still get the area and volume of the zone
End of explanation
idf1.printidf()
import eppy.json_functions as json_functions
json_str = {"idf.VERSION..Version_Identifier":8.5,
"idf.SIMULATIONCONTROL..Do_Zone_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_System_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_Plant_Sizing_Calculation": "No",
"idf.BUILDING.Empire State Building.North_Axis": 52,
"idf.BUILDING.Empire State Building.Terrain": "Rural",
}
json_functions.updateidf(idf1, json_str)
idf1.printidf()
Explanation: Not as slick, but still pretty easy
Some notes on the zone area calculation:
area is calculated by summing up all the areas of the floor surfaces
if there are no floors, then the sum of ceilings and roof is taken as zone area
if there are no floors, ceilings or roof, we are out of luck. The function returns 0
Using JSON to update idf
we are going to update idf1 using json. First let us print the idf1 before changing it, so we can see what has changed once we make an update
End of explanation
json_str = {"idf.BUILDING.Taj.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building'.upper()]
# of course, you are creating an invalid E+ file. But we are just playing here.
Explanation: Compare the first printidf() and the second printidf().
The syntax of each key in the json string is "idf.<IDF object key>.<object Name>.<field name>", and the value is the new value for that field. For objects that have no Name field (such as VERSION or SIMULATIONCONTROL), the name part is left empty, as in "idf.VERSION..Version_Identifier".
You can also create a new object using JSON, using the same syntax. Take a look at this:
End of explanation
# first way
json_str = {"idf.BUILDING.Taj.with.dot.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
# second way (put the name in single quotes)
json_str = {"idf.BUILDING.'Another.Taj.with.dot'.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building'.upper()]
Explanation: What if your object name had a dot (.) in it? Will the json_function get confused?
If the name has a dot in it, there are two ways of doing this.
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
IDF.setiddname(iddfile)
idf = IDF(fname)
Explanation: Note: When you use the json update function:
The json function expects the Name field to have a value.
If you try to update an object with a blank Name field, the results may be unexpected (undefined ? :-). So don't do this.
If the object has no Name field (some don't), changes are made to the first object in the list. Which should be fine, since usually there is only one item in the list
In any case, if the object does not exist, it is created with the default values
Use Case for JSON update
If you have an eppy running on a remote server somewhere on the internet, you can change an idf file by sending it a JSON over the internet. This is very useful if you ever need it. If you don't need it, you shouldn't care :-)
Open a file quickly
It is rather cumbersome to open an IDF file in eppy. From the tutorial, the steps look like this:
End of explanation
from eppy.easyopen import easyopen
fname = './eppy/resources/idffiles/V8_8/smallfile.idf'
idf = easyopen(fname)
Explanation: You have to find the IDD file on your hard disk.
Then set the IDD using setiddname(iddfile).
Now you can open the IDF file
Why can’t you just open the IDF file without jumping thru all those hoops. Why do you have to find the IDD file. What is the point of having a computer, if it does not do the grunt work for you.
The function easyopen will do the grunt work for you. It will automatically read the version number from the IDF file, locate the correct IDD file and set it in eppy and then open your file. It works like this:
End of explanation
thefans = idf.idfobjects['Fan:VariableVolume'.upper()]
thefan = thefans[0]
bhp = thefan.fanpower_bhp
watts = thefan.fanpower_watts
cfm = thefan.fan_maxcfm
Explanation: For this to work,
the IDF file should have the VERSION object. You may not have this if you are just working on a file snippet.
you need to have the version of EnergyPlus installed that matches your IDF version.
Energyplus should be installed in the default location.
If easyopen does not work, use the long winded steps shown in the tutorial. That is guaranteed to work
Other miscellaneous functions
Fan power in Watts, BHP and fan cfm
We normally think of fan power in terms of Brake Horsepower (BHP), Watts. Also when working with IP units it is useful to think of fan flow volume in terms of cubic feet per minute (cfm).
Energyplus does not have fields for those values. With eppy we have functions that will calculate the values
fan power in BHP
fan power in Watts
fan flow in CFM
It will work for the following objects:
FAN:CONSTANTVOLUME
FAN:VARIABLEVOLUME
FAN:ONOFF
FAN:ZONEEXHAUST
FANPERFORMANCE:NIGHTVENTILATION
The sample code would look like this:
End of explanation |
10,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'></a>
Frequency Response Functions (FRFs) plots
This notebook is about frequency response functions (FRFs) and the various ways they can be plotted.
Table of contents
Preamble
Dynamic system setup
Frequency response function
Nyquist plot
Bode plot
Nichols plot
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Since it was created with Python 2.7, we will import a few things from the "future". Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots
Step1: We will also need some specific modules and a little "IPython magic" to show the plots
Step2: Back to top
Dynamic system setup
In this example we will simulate a two degree-of-freedom (2DOF) system as an LTI system. For that purpose, we will define a mass and a stiffness matrix and use proportional damping
Step3: For the LTI system we will use a state space formulation. For that we will need the four matrices describing the system (A), the input (B), the output (C) and the feedthrough (D)
Step4: The LTI system is simply defined as
Step5: To check the results presented ahead we will need the angular frequencies and damping coefficients of this system. The eigenanalysis of the system matrix yields them after some computations
Step6: Back to top
Frequency response function
A frequency response function is a complex-valued function of frequency. Let us see how it looks when we plot the real and imaginary parts separately
Step7: Back to top
Nyquist plot
A Nyquist plot represents the real and imaginary parts of the complex FRF in a single plot
Step8: Back to top
Bode plot
A Bode plot represents the complex FRF in magnitude-phase versus frequency
Step9: Back to top
Nichols plot
A Nichols plot combines the Bode plot in a single plot of magnitude versus phase | Python Code:
from __future__ import division, print_function
import sys
import numpy as np
import scipy as sp
import matplotlib as mpl
print('System: {}'.format(sys.version))
print('numpy version: {}'.format(np.__version__))
print('scipy version: {}'.format(sp.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
Explanation: <a id='top'></a>
Frequency Response Functions (FRFs) plots
This notebook is about frequency response functions (FRFs) and the various ways they can be plotted.
Table of contents
Preamble
Dynamic system setup
Frequency response function
Nyquist plot
Bode plot
Nichols plot
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Since it was created with Python 2.7, we will import a few things from the "future". Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots:
End of explanation
from numpy import linalg as LA
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: We will also need some specific modules and a little "IPython magic" to show the plots:
End of explanation
MM = np.asmatrix(np.diag([1., 2.]))
print(MM)
KK = np.asmatrix([[20., -10.],[-10., 10.]])
print(KK)
C1 = 0.1*MM+0.02*KK
print(C1)
Explanation: Back to top
Dynamic system setup
In this example we will simulate a two degree-of-freedom (2DOF) system as an LTI system. For that purpose, we will define a mass and a stiffness matrix and use proportional damping:
End of explanation
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C1)]])
print(A)
Bf = KK*np.asmatrix(np.ones((2, 1)))
B = np.bmat([[np.zeros_like(Bf)],[LA.solve(MM,Bf)]])
print(B)
Cd = np.matrix((1,0))
Cv = np.asmatrix(np.zeros((1,MM.shape[1])))
Ca = np.asmatrix(np.zeros((1,MM.shape[1])))
C = np.bmat([Cd-Ca*LA.solve(MM,KK),Cv-Ca*LA.solve(MM,C1)])
print(C)
D = Ca*LA.solve(MM,Bf)
print(D)
Explanation: For the LTI system we will use a state space formulation. For that we will need the four matrices describing the system (A), the input (B), the output (C) and the feedthrough (D):
End of explanation
system = signal.lti(A, B, C, D)
Explanation: The LTI system is simply defined as:
End of explanation
w1, v1 = LA.eig(A)
ix = np.argsort(np.absolute(w1)) # order of ascending eigenvalues
w1 = w1[ix] # sorted eigenvalues
v1 = v1[:,ix] # sorted eigenvectors
zw = -w1.real # damping coefficient time angular frequency
wD = w1.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
Explanation: To check the results presented ahead we will need the angular frequencies and damping coefficients of this system. The eigenanalysis of the system matrix yields them after some computations:
End of explanation
w, H = system.freqresp()
fig, ax = plt.subplots(2, 1)
fig.suptitle('Real and imaginary plots')
# Real part plot
ax[0].plot(w, H.real, label='FRF')
ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[0].set_ylabel('Real [-]')
ax[0].grid(True)
ax[0].legend()
# Imaginary part plot
ax[1].plot(w, H.imag, label='FRF')
ax[1].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[1].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[1].set_ylabel('Imaginary [-]')
ax[1].set_xlabel('Frequency [rad/s]')
ax[1].grid(True)
ax[1].legend()
plt.show()
Explanation: Back to top
Frequency response function
A frequency response function is a complex valued function of frequency. Let us see how it looks when we plot the real and imaginary parts in separate:
End of explanation
plt.figure()
plt.title('Nyquist plot')
plt.plot(H.real, H.imag, 'b')
plt.plot(H.real, -H.imag, 'r')
plt.xlabel('Real [-]')
plt.ylabel('Imaginary[-]')
plt.grid(True)
plt.axis('equal')
plt.show()
Explanation: Back to top
Nyquist plot
A Nyquist plot represents the real and imaginary parts of the complex FRF in a single plot:
End of explanation
w, mag, phase = system.bode()
fig, ax = plt.subplots(2, 1)
fig.suptitle('Bode plot')
# Magnitude plot
ax[0].plot(w, mag, label='FRF')
ax[0].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[0].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[0].set_ylabel('Magnitude [dB]')
ax[0].grid(True)
ax[0].legend()
# Phase plot
ax[1].plot(w, phase*np.pi/180., label='FRF')
ax[1].axvline(wn[0], color='k', label='First mode', linestyle='--')
ax[1].axvline(wn[2], color='k', label='Second mode', linestyle='--')
ax[1].set_ylabel('Phase [rad]')
ax[1].set_xlabel('Frequency [rad/s]')
ax[1].grid(True)
ax[1].legend()
plt.show()
Explanation: Back to top
Bode plot
A Bode plot represents the complex FRF in magnitude-phase versus frequency:
End of explanation
plt.figure()
plt.title('Nichols plot')
plt.plot(phase*np.pi/180., mag)
plt.xlabel('Phase [rad]')
plt.ylabel('Magnitude [dB]')
plt.grid(True)
plt.show()
Explanation: Back to top
Nichols plot
A Nichols plot combines the two Bode plots (magnitude and phase) into a single plot of magnitude versus phase:
End of explanation |
10,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center"> Introduction to Natural Language Processing (NLP) Using Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2> Pre-Processing Techniques - Part 2</h2>
<p>Once the text has been properly treated, removing stopwords and punctuation and applying stemming or lemmatization, we now need to count the frequency of the words (or n-grams) that we will then use as features for the machine learning techniques.</p>
<b>1. TF-IDF (Term Frequency - Inverse Document Frequency)</b>
<p><b>Term Frequency
Step1: <p>Now let's define a function for tokenization with scikit-learn.</p>
Step2: And this function will be called by the TfidfVectorizer object
Step3: <b>2. TF-IDF of N-grams</b>
Optionally, we can obtain the tf-idf features of n-grams by combining the CountVectorizer and TfidfTransformer classes. In our example, we will use only trigrams
Step4: <b>3. Dimensionality Reduction</b>
<p>Transforming the corpus into features containing the TF-IDF frequencies will in general produce a very sparse ndarray, that is, one with many dimensions. Besides making the training of algorithms slower and more expensive (computationally speaking), many of those dimensions are probably not very representative or may even introduce noise during training. To solve this problem, we can apply a simple dimensionality reduction technique called <b>Singular Value Decomposition (SVD)</b>.
<p>This technique transforms the vectors of the original matrix, rotating and scaling them, which results in new representations. The dimensionality reduction is done by keeping only the <i>k</i> most representative dimensions that we choose. Another advantage of this technique is that the original dimensions are, in a way, "combined", which yields a new way of representing combinations of terms. In the NLP context, this technique is known as <b>Latent Semantic Analysis (LSA)</b></p>
Step5: <p>Now let's keep dimensions until the cumulative variance is greater than or equal to 0.50.</p>
Step6: <p>We transform again, but this time with the number of components k we obtained earlier.</p> | Python Code:
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
hamlet_raw = nltk.corpus.gutenberg.raw('shakespeare-hamlet.txt')
sents = sent_tokenize(hamlet_raw)
hamlet_np = np.array(sents)
print(hamlet_np.shape)
Explanation: <h1 align="center"> Introduction to Natural Language Processing (NLP) Using Python </h1>
<h3 align="center"> Professor Fernando Vieira da Silva MSc.</h3>
<h2> Pre-Processing Techniques - Part 2</h2>
<p>Once the text has been properly treated, removing stopwords and punctuation and applying stemming or lemmatization, we now need to count the frequency of the words (or n-grams) that we will then use as features for the machine learning techniques.</p>
<b>1. TF-IDF (Term Frequency - Inverse Document Frequency)</b>
<p><b>Term Frequency:</b> a term that appears many times in a document tends to be an important term. In short, we divide the number of times a term appeared by the largest number of times any other term appeared in the document.</p>
tf<sub>wd</sub> = f<sub>wd</sub> / m<sub>wd</sub>
where:<br>
f<sub>wd</sub> is the number of times the term <i>w</i> appears in document <i>d</i>.<br>
m<sub>wd</sub> is the largest value of f<sub>wd</sub> obtained for any term of document <i>d</i><br>
<p><b>Inverse Document Frequency:</b> a term that appears in few documents can be a good discriminator. It is obtained by dividing the number of documents by the number of documents in which the term appears.</p>
idf<sub>w</sub> = log<sub>2</sub>(n / n<sub>w</sub>)
where:<br>
n is the number of documents in the corpus
n<sub>w</sub> is the number of documents in which the term <i>w</i> appears.
In practice, we use:
tf-idf = tf<sub>wd</sub> * idf<sub>w</sub>
We can compute the TF-IDF of a corpus using the <b>scikit-learn</b> package. First, let's open the Hamlet text again and store the sentences in a numpy ndarray (as if each sentence were a document of the corpus):
End of explanation
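Here is a tiny hand-computed illustration of the formulas above (a toy corpus added only for illustration; note that scikit-learn's TfidfVectorizer uses a smoothed, natural-log variant, so its numbers differ slightly):
```
import numpy as np
docs = [["to", "be", "or", "not", "to", "be"],
        ["to", "do", "is", "to", "be"],
        ["do", "be", "do"]]
term = "or"                                   # appears only in the first document
n_w = sum(term in d for d in docs)            # number of documents containing the term
idf = np.log2(float(len(docs)) / n_w)
for d in docs:
    f = d.count(term)                         # frequency of the term in this document
    m = max(d.count(w) for w in set(d))       # highest frequency of any term in this document
    print(term, "tf-idf =", (float(f) / m) * idf)
```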
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
import string
from nltk.corpus import wordnet
stopwords_list = stopwords.words('english')
lemmatizer = WordNetLemmatizer()
def my_tokenizer(doc):
words = word_tokenize(doc)
pos_tags = pos_tag(words)
non_stopwords = [w for w in pos_tags if not w[0].lower() in stopwords_list]
non_punctuation = [w for w in non_stopwords if not w[0] in string.punctuation]
lemmas = []
for w in non_punctuation:
if w[1].startswith('J'):
pos = wordnet.ADJ
elif w[1].startswith('V'):
pos = wordnet.VERB
elif w[1].startswith('N'):
pos = wordnet.NOUN
elif w[1].startswith('R'):
pos = wordnet.ADV
else:
pos = wordnet.NOUN
lemmas.append(lemmatizer.lemmatize(w[0], pos))
return lemmas
Explanation: <p>Now let's define a function for tokenization with scikit-learn.</p>
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
hamlet_raw = nltk.corpus.gutenberg.raw('shakespeare-hamlet.txt')
sents = sent_tokenize(hamlet_raw)
hamlet_np = np.array(sents)
tfidf_vectorizer = TfidfVectorizer(tokenizer=my_tokenizer)
tfs = tfidf_vectorizer.fit_transform(hamlet_np)
print(tfs.shape)
print([k for k in tfidf_vectorizer.vocabulary_.keys()][:20])
print(tfs[:50,:50])
Explanation: And this function will be called by the TfidfVectorizer object
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
count_vect = CountVectorizer(ngram_range=(3,3))
n_gram_counts = count_vect.fit_transform(hamlet_np)
tfidf_transformer = TfidfTransformer()
tfs_ngrams = tfidf_transformer.fit_transform(n_gram_counts)
print(tfs_ngrams.shape)
Explanation: <b>2. TF-IDF of N-grams</b>
Optionally, we can obtain the tf-idf features of n-grams by combining the CountVectorizer and TfidfTransformer classes. In our example, we will use only trigrams:
End of explanation
from sklearn.decomposition import TruncatedSVD
svd_transformer = TruncatedSVD(n_components=1000)
svd_transformer.fit(tfs)
print(sorted(svd_transformer.explained_variance_ratio_)[::-1][:30])
Explanation: <b>3. Dimensionality Reduction</b>
<p>Transforming the corpus into features containing the TF-IDF frequencies will in general produce a very sparse ndarray, that is, one with many dimensions. Besides making the training of algorithms slower and more expensive (computationally speaking), many of those dimensions are probably not very representative or may even introduce noise during training. To solve this problem, we can apply a simple dimensionality reduction technique called <b>Singular Value Decomposition (SVD)</b>.
<p>This technique transforms the vectors of the original matrix, rotating and scaling them, which results in new representations. The dimensionality reduction is done by keeping only the <i>k</i> most representative dimensions that we choose. Another advantage of this technique is that the original dimensions are, in a way, "combined", which yields a new way of representing combinations of terms. In the NLP context, this technique is known as <b>Latent Semantic Analysis (LSA)</b></p>
End of explanation
cummulative_variance = 0.0
k = 0
for var in sorted(svd_transformer.explained_variance_ratio_)[::-1]:
cummulative_variance += var
if cummulative_variance >= 0.5:
break
else:
k += 1
print(k)
Explanation: <p>Now let's keep dimensions until the cumulative variance is greater than or equal to 0.50.</p>
End of explanation
svd_transformer = TruncatedSVD(n_components=k)
svd_data = svd_transformer.fit_transform(tfs)
print(sorted(svd_transformer.explained_variance_ratio_)[::-1])
print(svd_data.shape)
Explanation: <p>We transform again, but this time with the number of components k we obtained earlier.</p>
End of explanation |
10,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: 1. Load a shapefile that represents the river network
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
Step2: Alright, let's see what fields we read in with this shapefile
Step3: Great! Looks like we have length (reach length), upstream drainage area (drainage area), x and y verticies of each link/reach (x and y of polyline), and bed elevation (topographic elevation).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity. In this case, "reach_length" could be equivalently calculated as the cumulative distance between verticies defined by x and y of polyline.
Step4: Our network consists of 29 links between 30 nodes. In the plot above, X and Y represent the plan-view coordinates of the node locations.
Next, we need to populate the grid with the relevant topographic and hydrologic information
Step5: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river in the absence of modeled sediment).
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables
Step6: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which is a normalized value ranging from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
Step7: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example
Step8: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time
Step9: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
Step10: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
Step11: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initalize and run a Landlab flow director component
Step12: Then, we initialize the network sediment transporter
Step13: Now we are ready to run the model forward in time
Step14: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
Here, we demonstrate one example use of plot_network_and_parcels. For a thorough tutorial on the plotting tools, see this notebook.
We can color links by values that we calculate. For example, if we are curious about the fate of sediment that started out on link 27, we might want to plot the total volume of sediment that originated on link 27 during a later timestep
Step15: Non-network plotting
The results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total volume of parcels on the grid through time. As parcels exit the grid, the total volume decreases.
Step16: We can also plot individual parcel characteristics. The plot below shows the total transport distance of each parcel through the whole model run as a function of the parcel's grain size (during the final timestep).
Step17: The plot below is an example of accessing variables associated with the grid (grid.at_link.X, or grid.at_node.X), as well as a variable associated with this instance of NetworkModelGrid (nmg.X) | Python Code:
import warnings
warnings.filterwarnings("ignore")
import os
import pathlib
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import graph
from landlab.io import read_shapefile
from landlab import ExampleData
from landlab.plot import plot_network_and_parcels
%matplotlib inline
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using the Landlab NetworkSedimentTransporter component starting with a shapefile river network
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to model the transport of coarse sediment through a river network using the NetworkSedimentTransporter Landlab component. For an equivalent tutorial demonstrating initialization of the NetworkSedimentTransporter with a synthetic network model grid, see here.
In this example we will:
- load a river network shapefile to create a Landlab grid to represent a river network
- create sediment "parcels" that will transport through the river network, represented as items in a Landlab DataRecord
- run the component
- plot the results of the model run
Import the necessary libraries, plus a bit of magic so that we can plot within this notebook:
End of explanation
datadir = ExampleData("io/shapefile", case="methow").base
shp_file = datadir / "MethowSubBasin.shp"
points_shapefile = datadir / "MethowSubBasin_Nodes_4.shp"
grid = read_shapefile(
shp_file,
points_shapefile=points_shapefile,
node_fields=["usarea_km2", "Elev_m"],
link_fields=["usarea_km2", "Length_m"],
link_field_conversion={
"usarea_km2": "drainage_area",
"Slope": "channel_slope",
"Length_m": "reach_length",
},
node_field_conversion={
"usarea_km2": "drainage_area",
"Elev_m": "topographic__elevation",
},
threshold=0.01,
)
Explanation: 1. Load a shapefile that represents the river network
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
End of explanation
grid.at_link.keys()
grid.at_node.keys()
Explanation: Alright, let's see what fields we read in with this shapefile:
End of explanation
graph.plot_graph(grid, at="node,link")
grid.number_of_links
grid.number_of_nodes
Explanation: Great! Looks like we have length (reach length), upstream drainage area (drainage area), x and y verticies of each link/reach (x and y of polyline), and bed elevation (topographic elevation).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity. In this case, "reach_length" could be equivalently calculated as the cumulative distance between verticies defined by x and y of polyline.
End of explanation
grid.at_node["bedrock__elevation"] = grid.at_node["topographic__elevation"].copy()
grid.at_link["channel_width"] = 1 * np.ones(grid.number_of_links) # m
grid.at_link["flow_depth"] = 0.5 * np.ones(grid.number_of_links) # m
Explanation: Our network consists of 29 links between 30 nodes. In the plot above, X and Y represent the plan-view coordinates of the node locations.
Next, we need to populate the grid with the relevant topographic and hydrologic information:
End of explanation
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid.number_of_links), 50)
element_id = np.expand_dims(element_id, axis=1)
volume = 1 * np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.15 # m
mu = np.log(medianD)
sigma = np.log(2) # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu, sigma, np.shape(element_id)
) # (m) the diameter of grains in each parcel
Explanation: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river in the absence of modeled sediment).
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables:
End of explanation
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
Explanation: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which is a normalized value ranging from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
End of explanation
lithology = ["quartzite"] * np.size(element_id)
Explanation: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example:
End of explanation
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"lithology": (["item_id"], lithology),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume),
}
Explanation: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time :
End of explanation
items = {"grid_element": "link", "element_id": element_id}
parcels = DataRecord(
grid,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
Explanation: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
End of explanation
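As a quick sanity check (a sketch added for illustration), the DataRecord exposes its contents as an xarray Dataset, which is also how the results are accessed later in this notebook:
```
print(parcels.dataset)                      # dimensions, coordinates and variables
print(parcels.dataset["element_id"].shape)  # (number of parcels, number of timesteps so far)
print(parcels.dataset["D"].values[:5, 0])   # grain sizes of the first few parcels (m)
```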
timesteps = 10 # total number of timesteps
dt = 60 * 60 * 24 * 2 # length of timestep (seconds)
Explanation: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
End of explanation
fd = FlowDirectorSteepest(grid, "topographic__elevation")
fd.run_one_step()
Explanation: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initalize and run a Landlab flow director component:
End of explanation
nst = NetworkSedimentTransporter(
grid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
Explanation: Then, we initialize the network sediment transporter:
End of explanation
for t in range(0, (timesteps * dt), dt):
nst.run_one_step(dt)
print("Model time: ", t / (60 * 60 * 24), "days passed")
Explanation: Now we are ready to run the model forward in time:
End of explanation
timestep_of_interest = 6
originating_link = 27
# filter the parcels to calculate total volumes of only the parcels that originated in the chosen link
parcelfilter = np.zeros_like(parcels.dataset.element_id, dtype=bool)
parcelfilter[:, timestep_of_interest] = (
parcels.dataset.element_id[:, 0] == originating_link
)
vol_orig_link = parcels.calc_aggregate_value(
xr.Dataset.sum, "volume", at="link", filter_array=parcelfilter, fill_value=0.0
)
fig = plot_network_and_parcels(
grid,
parcels,
link_attribute=vol_orig_link,
link_attribute_title="Vol of sed originating on link x",
network_linewidth=5,
parcel_alpha=0,
)
Explanation: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
Here, we demonstrate one example use of plot_network_and_parcels. For a thorough tutorial on the plotting tools, see this notebook.
We can color links by values that we calculate. For example, if we are curious about the fate of sediment that started out on link 27, we might want to plot the total volume of sediment that originated on link 27 during a later timestep:
End of explanation
parcel_vol_on_grid = parcels.dataset["volume"].values
parcel_vol_on_grid[parcels.dataset["element_id"].values == -2] = 0
# plt.figure(figsize=(8,6))
plt.plot(
np.asarray(parcels.time_coordinates) / (60 * 60 * 24),
np.sum(parcel_vol_on_grid, axis=0),
"-",
linewidth=3,
alpha=0.5,
)
plt.ylabel("Total volume of parcels on grid $[m^3]$")
plt.xlabel("Time [days]")
plt.show()
Explanation: Non-network plotting
The results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total volume of parcels on the grid through time. As parcels exit the grid, the total volume decreases.
End of explanation
plt.loglog(parcels.dataset.D[:, -1], nst._distance_traveled_cumulative, ".")
plt.xlabel("Parcel grain size (m)")
plt.ylabel("Cumulative parcel travel distance (m)")
# Note: some of the smallest grain travel distances can exceed the length of the
# grid by "overshooting" during a single timestep of high transport rate
Explanation: We can also plot individual parcel characteristics. The plot below shows the total transport distance of each parcel through the whole model run as a function of the parcel's grain size (during the final timestep).
End of explanation
plt.plot(grid.at_link["channel_slope"], nst.d_mean_active, ".")
plt.xlabel("Channel slope (m/m)")
plt.ylabel("Mean grain size of active layer (m)")
Explanation: The plot below is an example of accessing variables associated with the grid (grid.at_link.X, or grid.at_node.X), as well as a variable associated with this instance of NetworkModelGrid (nmg.X):
End of explanation |
10,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
lesson1
Step1: Finetuning and Training
Step2: Use a pretrained VGG model with our Vgg16 class
Step4: The original pre-trained Vgg16 class classifies images into one of the 1000 categories. This number of categories depends on the dataset which Vgg16 was trained with. (http
Step5: Generate Predictions
Step6: Validate Predictions
Calculate predictions on validation set, so we can find correct and incorrect examples
Step7: (TODO) look at data to improve model
confusion matrix
Step8: Submit Predictions to Kaggle!
This section also depends on which dataset you use (and which Kaggle competition you are participating) | Python Code:
# make some Python3 functions available on Python2
from __future__ import division, print_function
import sys
print(sys.version_info)
import theano
print(theano.__version__)
import keras
print(keras.__version__)
# FloydHub: check data
%ls /input/dogscats/
# check current directory
%pwd
%ls
# see some files are loaded fine
%cat floyd_requirements.txt
# check no Keras2 specific function is used (when Keras1 is used)
%cat utils.py
#Create references to important directories we will use over and over
import os, sys
current_dir = os.getcwd()
LESSON_HOME_DIR = current_dir
# FloydHub
DATA_HOME_DIR = "/input/dogscats/"
OUTPUT_HOME_DIR = "/output/"
# alternatively, for local
#DATA_HOME_DIR = current_dir+'/data/redux'
#import modules
from utils import *
from vgg16 import Vgg16
#Instantiate plotting tool
#In Jupyter notebooks, you will need to run this command before doing any plotting
%matplotlib inline
Explanation: lesson1: Convolutional Neural Networks with dogscats
Let's classify images using deep learning and submit the result to Kaggle!
Prerequisite
This notebook assumes Keras with Theano backend.
- TODO: make TensorFlow version as another notebook
It also assumes that you will run it on either one of these two cases:
- Floydhub (--env theano:py2 -> Theano rel-0.8.2 + Keras 1.2.2 on Python2)
- local conda virtual environment (Theano 0.9.0 + Keras 2.0.4 on Python3)
Refer to this FloydHub document for available FloydHub environments.
Setup
Make sure to have these files in the parent directory of the directory where you execute this notebook.
available in the official repo for Keras1 on Python2 (rename from original files)
utils_keras1.py
vgg16_keras1.py
vgg16bn_keras1.py
available in the unofficial repo for Keras2 on Python3
utils.py
vgg16.py
vgg16bn.py
The directory structure looks like this. Please modifiy the symlinks according to your environment.
(*) only for FloydHub
(**) only for local
floyd_requirements.txt (*)
floydhub.data.unzip/ (*)
floydhub.data.zipped/ (*)
dogscats.zip
lesson1/
data/ (**)
redux/
train/
cat.437.jpg
dog.9924.jpg
...
test/
231.jpg
325.jpg
...
dogscats_run.ipynb
floyd_requirements.txt -> ../floyd_requirements.txt (*)
utils.py -> ../utils(_keras1).py
vgg16.py -> ../vgg16(_keras1).py
vgg16bn.py -> ../vgg16bn(_keras1).py
utils.py
utils_keras1.py
vgg16.py
vgg16_keras1.py
vgg16bn.py
vgg16bn_keras1.py
Prepare data
The details of data preparation largely depends on which dataset you use. In this section, we will use a pre-organized dataset from http://files.fast.ai/files/dogscats.zip
For another example of data preparation, please refer to this notebook
How the dataset looks like
After extracting the dogscats.zip file, the directory structure look like this.
dogscats/
models/
sample/
train/
cats/
cat.394.jpg
... (8 items)
dogs/
dog.1402.jpg
... (8 items)
valid/
cats/
cat.10435.jpg
... (4 items)
dogs/
dog.10459.jpg
... (4 items)
features.npy
labels.npy
test1/
1.jpg
10.jpg
100.jpg
... (12500 items)
train/
cats/
cat.0.jpg
cat.1.jpg
cat.3.jpg
... (11500 items)
dogs/
cat.0.jpg
cat.1.jpg
cat.2.jpg
cat.4.jpg
... (11500 items)
valid/
cats/
cat.2.jpg
cat.5.jpg
... (1000 item. these are copied from train/cats/ directory)
dogs/
dog.3.jpg
dog.9.jpg
... (1000 item. these are copied from train/dogs/ directory)
FloydHub
The cell below shows how to update data to FloydHub.
```
from the directory which this notebook is executed
cd ../floydhub.data.zipped/; pwd
expected: empty
ls -l
wget http://files.fast.ai/files/dogscats.zip
upload the zipped dataset to floydnet, and create a floydnet dataset
floyd data init dogscats.zipped
floyd data upload
```
Using the data we have just uploaded to FloydHub, let's unzip it on FloydHub.
```
from the directory which this notebook is executed
cd ../floydhub.fast.ai.data.unzip/; pwd
expected: empty
ls -l
floyd init dogscats.unzip
floyd run --gpu --data [data ID of the uploaded zip] "unzip /input/dogscats.zip -d /output"
```
Please note:
- the data ID should be the one you see from the above step
- the mounted data is available in /input/ directory, and you need to direct the unzipped files to /output/ directory
local
TODO
Run the notebook
Now let's run the notebook in the environment of your choice.
```
from the directory which this notebook is executed
cd ./; pwd
FloydHub
floyd init dogscats
floyd run --mode jupyter --data [data ID of unzipped data] --env theano:py2 --gpu
alternatively, for local
jupyter notebook
```
and check ~/.keras/keras.json
```
mkdir ~/.keras
FloydHub (Keras1)
echo '{
"image_dim_ordering": "th",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}' > ~/.keras/keras.json
alternatively, for local (Keras2)
echo '{
"image_data_format": "channels_first",
"backend": "theano",
"floatx": "float32",
"epsilon": 1e-07
}' > ~/.keras/keras.json
```
Finally, let's start running the notebook.
End of explanation
%cd $DATA_HOME_DIR
#Set path to sample/ path if desired
path = DATA_HOME_DIR + '/' #'/sample/'
test_path = DATA_HOME_DIR + '/test1/' #We use all the test data
# FloydHub
# data needs to be output under /output
# if results_path cannot be created, execute mkdir directly in the terminal
results_path = OUTPUT_HOME_DIR + '/results/'
%mkdir results_path
train_path = path + '/train/'
valid_path = path + '/valid/'
Explanation: Finetuning and Training
End of explanation
# As large as you can, but no larger than 64 is recommended.
#batch_size = 8
batch_size = 64
no_of_epochs=3
Explanation: Use a pretrained VGG model with our Vgg16 class
End of explanation
vgg = Vgg16()
# Grab a few images at a time for training and validation.
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)
# Finetune: note that the vgg model is compiled inside the finetune method.
vgg.finetune(batches)
# Fit: note that we are passing in the validation dataset to the fit() method
# For each epoch we test our model against the validation set
latest_weights_filename = None
# FloydHub (Keras1)
for epoch in range(no_of_epochs):
print("Running epoch: %d" % epoch)
vgg.fit(batches, val_batches, nb_epoch=1)
latest_weights_filename = 'ft%d.h5' % epoch
vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)
# alternatively, for local (Keras2) -- uncomment this block instead of the one above
#for epoch in range(no_of_epochs):
#    print("Running epoch: %d" % epoch)
#    vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
#    latest_weights_filename = 'ft%d.h5' % epoch
#    vgg.model.save_weights(results_path+latest_weights_filename)
#print("Completed %s fit operations" % no_of_epochs)
Explanation: The original pre-trained Vgg16 class classifies images into one of 1000 categories. This number of categories depends on the dataset that Vgg16 was trained with (http://image-net.org/challenges/LSVRC/2014/browse-synsets).
In order to classify images into the categories we prepare ourselves (2 categories, dogs/cats, in this notebook), the fine-tuning technique is useful. It:
- keeps most of the weights from the pre-trained Vgg16 model, modifying only a few parts of them
- changes the dimension of the output layer (from 1000 to 2, in this notebook); a minimal sketch of this step is shown below
End of explanation
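To make the bullet points above concrete, here is a minimal sketch of what such a finetune step typically looks like for a Keras 1 Sequential model. This is an illustration only, not the actual fast.ai implementation (which lives in vgg16.py); the helper name finetune_sketch and the learning rate are assumptions.
```
from keras.layers import Dense
from keras.optimizers import Adam

def finetune_sketch(model, num_classes):
    # drop the original 1000-way softmax layer
    model.pop()
    # freeze the pre-trained weights
    for layer in model.layers:
        layer.trainable = False
    # attach a fresh softmax sized for our own classes and recompile
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer=Adam(lr=0.001),
                  loss='categorical_crossentropy', metrics=['accuracy'])
    return model
```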
# OUTPUT_HOME_DIR, not DATA_HOME_DIR due to FloydHub restriction
%cd $OUTPUT_HOME_DIR
%mkdir -p test1/unknown
%cd $OUTPUT_HOME_DIR/test1
%cp $test_path/*.jpg unknown/
# rewrite test_path
test_path = OUTPUT_HOME_DIR + '/test1/' #We use all the test data
batches, preds = vgg.test(test_path, batch_size = batch_size*2)
print(preds[:5])
filenames = batches.filenames
print(filenames[:5])
# You can verify the column ordering by viewing some images
from PIL import Image
Image.open(test_path + filenames[2])
#Save our test results arrays so we can use them again later
save_array(results_path + 'test_preds.dat', preds)
save_array(results_path + 'filenames.dat', filenames)
Explanation: Generate Predictions
End of explanation
vgg.model.load_weights(results_path+latest_weights_filename)
val_batches, probs = vgg.test(valid_path, batch_size = batch_size)
filenames = val_batches.filenames
expected_labels = val_batches.classes #0 or 1
#Round our predictions to 0/1 to generate labels
our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)
Explanation: Validate Predictions
Calculate predictions on validation set, so we can find correct and incorrect examples:
End of explanation
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
Explanation: (TODO) look at data to improve model
confusion matrix
End of explanation
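As a starting point for the TODO above, a small sketch (added for illustration) that splits the validation set into correctly and incorrectly classified images, using the arrays computed in the cells above:
```
correct = np.where(our_labels == expected_labels)[0]
incorrect = np.where(our_labels != expected_labels)[0]
print("Correctly classified: %d, incorrectly classified: %d" % (len(correct), len(incorrect)))
if len(incorrect):
    idx = incorrect[0]
    print("Example of a misclassified image:", filenames[idx])
    # view it with: Image.open(valid_path + filenames[idx])
```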
#Load our test predictions from file
preds = load_array(results_path + 'test_preds.dat')
filenames = load_array(results_path + 'filenames.dat')
#Grab the dog prediction column
isdog = preds[:,1]
print("Raw Predictions: " + str(isdog[:5]))
print("Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)]))
print("Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)]))
# sneaky trick to round down our edge predictions
# Swap all ones with .95 and all zeros with .05
isdog = isdog.clip(min=0.05, max=0.95)
#Extract imageIds from the filenames in our test/unknown directory
filenames = batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])
subm = np.stack([ids,isdog], axis=1)
subm[:5]
# FloydHub
%cd $OUTPUT_HOME_DIR
# alternatively, for local
#%cd $DATA_HOME_DIR
submission_file_name = 'submission1.csv'
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')
from IPython.display import FileLink
# FloydHub
%cd $OUTPUT_HOME_DIR
FileLink(submission_file_name)
# alternatively, for local
#%cd $LESSON_HOME_DIR
#FileLink('data/redux/'+submission_file_name)
Explanation: Submit Predictions to Kaggle!
This section also depends on which dataset you use (and which Kaggle competition you are participating)
End of explanation |
10,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python 3 Fundamentals in Jupyter
<br>
The Python programming language
Step1: <br>
add_numbers is a function that takes two numbers and adds them.
Step2: <br>
add_numbers can be updated by adding a third parameter. Using print, multiple results can be printed in a single Jupyter cell.
Step3: <br>
We can also add an optional parameter called flag to add_numbers.
Step4: <br>
The function add_numbers can also be assigned to a variable a.
Step5: <br>
The min function returns the smallest element in an iterable, or the smallest of two or more arguments.
Step6: <br>
The function pow(x,y) returns x to the power of y.
Step7: <br>
The expression x//y returns the integer part of the quotient of x and y.
Step8: <br>
The expression x%y returns the remainder of the division of x by y.
Step9: <br>
The Python programming language
Step10: <br>
A tuple is an immutable data structure. Tuples are sequences, just like lists. The difference between tuples and lists is that tuples cannot be modified, unlike lists. Tuples use parentheses, while lists use square brackets.
Step11: <br>
Let's look at some modifications of the list data structure.
<br>
Let's use append to add an element to the list.
Step12: <br>
This is an example of how to loop over the elements of the list.
Step13: <br>
Or we can use the index operator
Step14: <br>
Use + to concatenate lists.
Step15: <br>
Use * to repeat lists.
Step16: <br>
Use the in operator to check whether something is inside the list.
Step17: <br>
Use the bracket notation [] to slice lists or strings
Step18: <br>
This statement returns the last element of the list or string.
Step19: <br>
This statement returns a slice of the list or string starting from the fourth element from the end and stopping before the second element from the end.
Step20: <br>
This is a slice of the list or string from the beginning, stopping before the third element.
Step21: <br>
And this is a slice of the list or string starting from the third element and continuing to the end.
Step22: <br>
Reading and writing CSV files
<br>
We are going to import a txt file as
a CSV file whose columns have the labels
Step23: <br>
The Python programming language
Step24: Creating arrays
<br>
We can create a list and convert it into a NumPy array.
Step25: <br>
Or we can simply pass a list directly.
Step26: <br>
We can pass a list of lists to create a multidimensional array.
Step27: <br>
Use the shape method to find the dimensions of the array. (rows, columns)
Step28: <br>
arange returns evenly spaced values within a given interval.
Step29: <br>
reshape returns an array with the same data in a new shape.
Step30: Combining arrays
Step31: <br>
Use vstack to stack arrays in sequence vertically (by row).
Step32: <br>
Use hstack to stack arrays in sequence horizontally (by column).
Step33: Operations
Use +, -, *, / and ** to perform element-wise addition, subtraction, multiplication, division and exponentiation.
Step34: Dot (scalar) product
Step35: <br>
Let's look at matrix transposition. Transposing permutes the dimensions of the array.
Step36: <br>
The dimensions of the zt array are (4,2) after the transposition.
Step37: <br>
You can also use .T to get the transpose of an array.
Step38: Indexing/Slicing
Step39: <br>
Use the bracket notation [] to get the value at a given index. Remember that indexing starts at 0.
Step40: <br>
Use
Step41: <br>
Use negative indices to count from the end.
Step42: <br>
You can use a second | Python Code:
x = 20
y = 5
print(x+y)
Explanation: Python 3 Fundamentals in Jupyter
<br>
The Python programming language: Functions
End of explanation
def add_numbers(x, y):
return x + y
add_numbers(1, 2)
Explanation: <br>
add_numbers is a function that takes two numbers and adds them.
End of explanation
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
Explanation: <br>
add_numbers can be updated by adding a third parameter. Using print, multiple results can be printed in a single Jupyter cell.
End of explanation
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('¡Flag es verdad, y es muy útil!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, 20, flag=True))
Explanation: <br>
We can also add an optional parameter called flag to add_numbers.
End of explanation
def add_numbers(x,y,z):
return x+y+z
a = add_numbers
a(1, 2, -5)
Explanation: <br>
The function add_numbers can also be assigned to a variable a.
End of explanation
x = (1,3,5, -6)
minimum = min(x)
print(minimum)
Explanation: <br>
The min function returns the smallest element in an iterable, or the smallest of two or more arguments.
End of explanation
p = pow(2,2)
print(p)
Explanation: <br>
The function pow(x,y) returns x to the power of y.
End of explanation
div = 5/2
print(div)
divint = 5//2
print(divint)
Explanation: <br>
The expression x//y returns the integer part of the quotient of x and y.
End of explanation
mod1 = 4%2
print(mod1)
mod2 = 5%2
print(mod2)
Explanation: <br>
The expression x%y returns the remainder of the division of x by y.
End of explanation
type('Esto es una cadena de caracteres')
type(None)
type(10000)
type(100000.0)
type(add_numbers)
Explanation: <br>
The Python programming language: Types and Sequences
<br>
Use type to return the type of an object.
End of explanation
x = (1, 'b', 25, 'c')
print(type(x))
y = [1, 'b', 25, 'c']
print(type(y))
Explanation: <br>
A tuple is an immutable data structure. Tuples are sequences, just like lists. The difference between tuples and lists is that tuples cannot be modified, unlike lists. Tuples use parentheses, while lists use square brackets.
End of explanation
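A small demonstration of that immutability (a sketch added for illustration): trying to assign to a tuple element raises a TypeError, while the same operation on a list works.
```
t = (1, 'b', 25, 'c')
try:
    t[0] = 99
except TypeError as error:
    print('Tuples cannot be modified:', error)
lst = [1, 'b', 25, 'c']
lst[0] = 99
print(lst)
```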
y.append(2.5)
print(y)
Explanation: <br>
Let's look at some modifications of the list data structure.
<br>
Let's use append to add an element to the list.
End of explanation
for elemento in y:
print(elemento)
Explanation: <br>
This is an example of how to loop over the elements of the list.
End of explanation
i=0
while( i != len(y) ):
print(y[i])
i = i + 1
Explanation: <br>
Or we can use the index operator:
End of explanation
[1,25] + [3,'z']
Explanation: <br>
Use + to concatenate lists.
End of explanation
[3, 'z']*4
Explanation: <br>
Use * to repeat lists.
End of explanation
'z' in [3,'z', 25]
Explanation: <br>
Use the in operator to check whether something is inside the list.
End of explanation
y =[2, 'c', 34, -2, 'f']
print(y[0]) #first element of the list
print(y[0:1]) #first element returned as a list
print(y[0:2]) #first two elements returned as a list
x = 'Esto es una cadena de caracteres'
print(x[0]) #first character
print(x[0:1]) #first character, specifying the end position
print(x[0:2]) #first two characters
Explanation: <br>
Use the bracket notation [] to slice lists or strings
End of explanation
print(y[-1])
print(x[-1])
Explanation: <br>
This statement returns the last element of the list or string.
End of explanation
print(y)
print(y[-4:-2])
print(x)
print(x[-4:-2])
Explanation: <br>
This statement returns a slice of the list or string starting from the fourth element from the end and stopping before the second element from the end.
End of explanation
print(y)
print(y[:3])
print(x)
print(x[:3])
Explanation: <br>
This is a slice of the list or string from the beginning, stopping before the third element.
End of explanation
print(y)
print(y[3:])
print(x)
print(x[3:])
Explanation: <br>
And this is a slice of the list or string starting from the third element and continuing to the end.
End of explanation
import csv
with open('empleados_cumple.txt') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
line_count = 0
for row in csv_reader:
if line_count == 0:
print(f'Los nombres de las columnas son: {", ".join(row)}')
line_count += 1
else:
print(f'\t{row[0]} trabaja en el departamento de {row[1]}, y nació en el mes de {row[2]}.')
line_count += 1
print(f'Líneas procesadas: {line_count}.')
Explanation: <br>
Reading and writing CSV files
<br>
We are going to import a txt file as
a CSV file whose columns have the labels:
Nombre,Departamento,Mes de Cumpleaños
And whose rows are:
Adelis Nieves, Matemáticas, Junio
Miguel Astor, Redes, Septiembre
Francisco Sans, Computación Gráfica, Diciembre
Antonio Escalante, Letras, Diciembre
End of explanation
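If the file empleados_cumple.txt does not exist yet, a sketch like the following (added for illustration, not part of the original notebook) creates it with the header and rows described above:
```
import csv
rows = [["Nombre", "Departamento", "Mes de Cumpleaños"],
        ["Adelis Nieves", "Matemáticas", "Junio"],
        ["Miguel Astor", "Redes", "Septiembre"],
        ["Francisco Sans", "Computación Gráfica", "Diciembre"],
        ["Antonio Escalante", "Letras", "Diciembre"]]
with open('empleados_cumple.txt', 'w', newline='') as f:
    csv.writer(f).writerows(rows)
```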
import numpy as np
Explanation: <br>
The Python programming language: Numerical Python (NumPy)
End of explanation
milista = [1, 25, 31, 5]
x = np.array(milista)
x
Explanation: Creating arrays
<br>
We can create a list and convert it into a NumPy array.
End of explanation
y = np.array([1, 25, 31, 5])
y
Explanation: <br>
Or we can simply pass a list directly.
End of explanation
m = np.array([[1, 25, 31, 5], [10, 2, 11, 12], [5, 7, 0, 1]])
m
Explanation: <br>
We can pass a list of lists to create a multidimensional array.
End of explanation
m.shape
Explanation: <br>
Use the shape method to find the dimensions of the array. (rows, columns)
End of explanation
n = np.arange(0, 30, 2) # empieza en 0 cuenta de 2 en 2, se detiene antes de 30
n
Explanation: <br>
arange returns evenly spaced values within a given interval.
End of explanation
n = n.reshape(3, 5) #cambia la forma de manera que se obtenga una matriz 3x5
n
Explanation: <br>
reshape returns an array with the same data in a new shape.
End of explanation
p = np.ones([2, 3], int)
p
Explanation: Combining arrays
End of explanation
np.vstack([p, 2*p])
Explanation: <br>
Use vstack to stack arrays in sequence vertically (by row).
End of explanation
np.hstack([p, 2*p])
Explanation: <br>
Use hstack to stack arrays in sequence horizontally (by column).
End of explanation
print(x)
print(y)
print(x + y) # element-wise addition
print(x - y) # element-wise subtraction
print(x)
print(y)
print(x * y) # element-wise multiplication
print(x / y) # element-wise division
print(x)
print(x**2) # element-wise exponentiation
Explanation: Operations
Use +, -, *, / and ** to perform element-wise addition, subtraction, multiplication, division and exponentiation.
End of explanation
print(x)
print(y)
x.dot(y) #producto escalar
print(y)
z = np.array([y, y**2])
print(z)
print(len(z)) #número de filas del arreglo
Explanation: Dot (scalar) product:
$ \begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}
\cdot
\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}
= x_1 y_1 + x_2 y_2 + x_3 y_3$
End of explanation
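A quick check (added for illustration): the dot product is just the sum of the element-wise products, so the two expressions below print the same number.
```
print(x.dot(y))
print(np.sum(x * y))
```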
zt = np.transpose(z, axes=None)
zt
Explanation: <br>
Let's look at matrix transposition. Transposing permutes the dimensions of the array.
End of explanation
zt.shape
Explanation: <br>
The dimensions of the zt array are (4,2) after the transposition.
End of explanation
print(zt)
zt.T
Explanation: <br>
You can also use .T to get the transpose of an array.
End of explanation
s = np.arange(13)**2
s
Explanation: Indexing/Slicing
End of explanation
s[0], s[4], s[-1]
Explanation: <br>
Use the bracket notation [] to get the value at a given index. Remember that indexing starts at 0.
End of explanation
s[1:5]
Explanation: <br>
Use : to indicate a range. array[start:stop]
Leaving start or stop empty will default to the beginning/end of the array.
End of explanation
s[-4:]
Explanation: <br>
Use negative indices to count from the end.
End of explanation
s[-5::-2]
Explanation: <br>
You can use a second : to indicate the step size. array[start:stop:step]
Here we are starting at the fifth element from the end and counting backwards by 2 until the beginning of the array is reached.
End of explanation |
10,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Oscillators
This example shows how to generate different signals using the Oscillator module. This module integrates basic structures that implement a simple interface compatible with the std
Step1: Then we can display the final result
Step2: Sawtooth Signal
A sawtooth waveform increases linearly from -1 to 1 in the $ [0, 2 \pi w] $ interval, and decreases linearly from 1 to
-1 in the interval $ \left[ 2 \pi w, 2 \pi \right] $, where $ w $ is the width of the periodic signal.
If $ w $ is 0.5, the function generates a standard triangular wave. The triangle wave shares many geometric
similarities with the sawtooth wave, except it has two sloping line segments.
A more general form, and with period T, is
Step3: Then, to display
Step4: Interactive mode | Python Code:
import pedsp.oscillator as oscillator
import pedsp.algorithm as algorithm
import matplotlib.pyplot as plt
import numpy as np
amplitude = 1.;
sample_rate = 8000;
frequency = 5;
duration_secs = 2;
samples = int(duration_secs * sample_rate);
duty = 0.5;
square = oscillator.Square(amp=amplitude, sr=sample_rate, f=frequency, duty=duty)
data = square.generate(N=samples)
Explanation: Oscillators
This example shows how to generate different signals using the Oscillator module. This module integrates basic structures that implement a simple interface compatible with the std::generate standard function.
eDSP implements 4 different oscillators:
Square Signal
Triangular Signal
Sawtooth Signal
Sinusoidal Signal
All of them are available by default in the oscillators folder.
Square Signal
The square wave can be constructed from straight line segments. The square waves contain a wide range of harmonics. It can be defined as simply the sign function of a sinusoid:
$$ x(t) = \operatorname{sgn}\left(\sin {\frac {2\pi t}{T}}\right)=\operatorname{sgn}\left(\sin 2\pi ft\right)$$
which will be 1 when the sinusoid is positive, −1 when the sinusoid is negative, and 0 at the discontinuities. Here, T is the period of the square wave, or equivalently, f is its frequency, where f = 1/T.
The class square_oscillator implements a basic square signal oscillator. In this example we generate a square signal with a frequency of 5 Hz, sampled at 8 kHz (matching the parameters set in the code above):
End of explanation
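For comparison, a standalone sketch (added here, using NumPy only) that evaluates the sign-of-sine definition above directly; with a 50% duty cycle it produces the same waveform as the pedsp generator.
```
t_np = np.arange(samples) / float(sample_rate)
square_np = np.sign(np.sin(2 * np.pi * frequency * t_np))
plt.plot(t_np, square_np)
plt.title('Square wave from sgn(sin(2*pi*f*t))')
plt.show()
```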
t = algorithm.linspace(0, duration_secs, samples)
plt.plot(t, data)
plt.show()
Explanation: Then we can display the final result:
End of explanation
width = 0.7
sawtooth = oscillator.Sawtooth(amp=amplitude, sr=sample_rate, f=frequency, width=width)
data = sawtooth.generate(N=samples)
Explanation: Sawtooth Signal
A sawtooth waveform increases linearly from -1 to 1 in the $ [0, 2 \pi w] $ interval, and decreases linearly from 1 to
-1 in the interval $ \left[ 2 \pi w, 2 \pi \right] $, where $ w $ is the width of the periodic signal.
If $ w $ is 0.5, the function generates a standard triangular wave. The triangle wave shares many geometric
similarities with the sawtooth wave, except it has two sloping line segments.
A more general form, and with period T, is:
$$ {\displaystyle 2\left({\frac {t}{T}}-\left\lfloor {\frac {1}{2}}+{\frac {t}{T}}\right\rfloor \right)} $$
The class sawtooth_oscillator implements a basic sawtooth signal oscillator. In this example we generate a sawtooth signal with a frequency of 5 Hz and a width of 0.7, sampled at 8 kHz:
End of explanation
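The general form above can also be evaluated directly with NumPy (a sketch added for comparison; note that the pedsp generator additionally applies the width parameter, so the shapes match exactly only for a pure sawtooth):
```
T = 1.0 / frequency
t_np = np.arange(samples) / float(sample_rate)
saw_np = 2 * (t_np / T - np.floor(0.5 + t_np / T))
plt.plot(t_np, saw_np)
plt.title('Sawtooth from 2*(t/T - floor(1/2 + t/T))')
plt.show()
```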
plt.plot(t, data)
plt.show()
Explanation: Then, to display:
End of explanation
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
@interact(dtype=widgets.Dropdown(
options=['square', 'sinusoidal', 'sawtooth'],
value='square',
description='Type:',
disabled=False),
frequency=widgets.IntSlider(min=1,max=20,step=1,value=10),
duration=widgets.IntSlider(min=1,max=5,step=1,value=1),
alpha=widgets.FloatSlider(min=0.0,max=1.0, value=0.3))
def display_oscillator(dtype, frequency, duration, alpha):
sr = 42000
g = None
if dtype == "square":
g = oscillator.Square(amp=1, sr=sr, f=frequency, duty=alpha)
elif dtype == "sinusoidal":
g = oscillator.Sinusoidal(amp=1, sr=sr, f=frequency, p=0)
else:
g = oscillator.Sawtooth(amp=1, sr=sr, f=frequency, width=alpha)
samples = int(duration * sr)
data = g.generate(N=samples)
t = algorithm.linspace(0, duration, samples)
plt.plot(t, data)
plt.show()
Explanation: Interactive mode
End of explanation |
10,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Simple reading
The file ../data/coordinates.txt contains list of (x, y) value pairs.
Read the values into two lists x and y.
Step1: Nontrivial reading and conversion
The file ../data/CH4.pdb contains the coordinates of methane molecule in a PDB format. The file consists of header followed by record lines which contain the following fields
Step2: Bonus exercises
Delimiter separated values
Many data exchange formats are so-called delimiter separated values. The most commonly known of these is CSV.
There are multiple caveats in the format, e.g. European languages use comma (,) as a decimal separator and semicolon (;) as the field separator. Most pure-English systems use the dot (.) for decimal separation and the comma (,) for field separation.
Another family of systems uses whitespace, like space or tab characters to separate fields.
Python's csv library supports most of the variance in different formats and it can be a time-saving tool to those who use Python and deal with file formats a lot.
The file "../data/iris.data" is actually in CSV format even though the file ending doesn't explicitly say so (this is common).
Read in iris.data and write out a tab-separated file "iris.tsv" using the csv module.
Hint
Step3: The file ../data/word_count.txt contains a short piece of text. Determine the frequency of words in the file, i.e. how many times each word appears. Print out the ten most frequent words.
Read the file line by line and use the split() function for separating a line into words.
The frequencies are stored most conveniently into a dictionary. The dictionary method setdefault can be useful
here.
For sorting, convert the dictionary into a list of (key, value) pairs with the items() function
Step4: Reading nucleotide sequences
Fasta is a fileformat for storing nucleotide sequences. The sequences consist of header line, starting with >, followed by one or more lines containing the amino acids of the sequence presented by single-letter codes | Python Code:
xs = []
ys = []
with open("../data/coordinates.txt", "r") as f:
for line in f:
line = line.split()
xs.append(float(line[0]))
ys.append(float(line[1]))
print(xs)
print(ys)
Explanation: Exercises
Simple reading
The file ../data/coordinates.txt contains list of (x, y) value pairs.
Read the values into two lists x and y.
End of explanation
infile = '../data/CH4.pdb'
outfile = infile.replace('.pdb', '.xyz')
atoms = []
with open(infile, "r") as f:
for line in f:
if 'ATOM' in line:
line = line.split()
symbol = line[2]
coords = [float(x) for x in line[3:6]]
atoms.append((symbol, coords))
with open(outfile, "w") as f:
f.write("{0}\n".format(len(atoms)))
f.write("Converted from PDB\n")
for atom in atoms:
f.write("{0:2s} {1:10.6f} {2:10.6f} {3:10.6f}\n".format(atom[0],
atom[1][0], atom[1][1], atom[1][2]))
Explanation: Nontrivial reading and conversion
The file ../data/CH4.pdb contains the coordinates of methane molecule in a PDB format. The file consists of header followed by record lines which contain the following fields:
record name(=ATOM), atom serial number, atom name, x-,y-,z-coordinates, occupancy and temperature factor.
i.e.
ATOM 2 H -0.627 -0.627 0.627 0.00 0.00
Convert the file into XYZ format: first line contains the
number of atoms, second line is title string, and the
following lines contain the atomic symbols and x-, y-, z-
coordinates, all separated by white space. Write the
coordinates with 6 decimals:
5
Converted from PDB
C 0.000000 0.000000 0.000000
...
End of explanation
import csv
irises = []
with open("../data/iris.data") as inputfile:
chreader = csv.DictReader(inputfile)
for line in chreader:
irises.append(line)
print(irises[0])
with open("../data/iris.tsv", "w") as outputfile:
writer = csv.DictWriter(outputfile, delimiter="\t", fieldnames=["sepal.length","sepal.width","petal.length","petal.width","class"])
writer.writeheader()
for iris in irises:
writer.writerow(iris)
Explanation: Bonus exercises
Delimiter separated values
Many data exchange formats are so-called delimiter separated values. The most commonly known of these is CSV.
There are multiple caveats in the format, e.g. European languages use comma (,) as a decimal separator and semicolon (;) as the field separator. Most pure-English systems use the dot (.) for decimal separation and the comma (,) for field separation.
Another family of systems uses whitespace, like space or tab characters to separate fields.
Python's csv library supports most of the variance in different formats and it can be a time-saving tool to those who use Python and deal with file formats a lot.
The file "../data/iris.data" is actually in CSV format even though the file ending doesn't explicitly say so (this is common).
Read in iris.data and write out a tab-separated file "iris.tsv" using the csv module.
Hint: because the first line of the input file has labels, csv.DictReader and csv.DictWriter are a good choice.
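As an optional sanity check (not part of the exercise), the freshly written file can be read back with csv.reader and the tab delimiter to confirm the conversion:
```
import csv

# read back the first few rows of the file written above
with open("../data/iris.tsv") as f:
    for i, row in enumerate(csv.reader(f, delimiter="\t")):
        print(row)
        if i >= 2:
            break
```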
End of explanation
words = {}
with open("../data/word_count.txt", "r") as f:
for line in f:
line = line.split()
for word in line:
words.setdefault(word, 0)
words[word] += 1
word_list = [(value, key) for key, value in words.items()]
word_list.sort()
word_list.reverse()
for freq, word in word_list[:10]:
word = '"%s"' % word
print("The word {0:^15} appears {1:5} times".format(word, freq))
Explanation: The file ../data/word_count.txt contains a short piece of text. Determine the frequency of words in the file, i.e. how many times each word appears. Print out the ten most frequent words.
Read the file line by line and use the split() function for separating a line into words.
The frequencies are stored most conveniently into a dictionary. The dictionary method setdefault can be useful
here.
For sorting, convert the dictionary into a list of (key, value) pairs with the items() function:
words = {"foo" : 1, "bar" : 2}
print(list(words.items()))
[('foo', 1), ('bar', 2)]
(in Python 3, items() returns a view, so wrap it in list() to get the list shown above)
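The standard library's collections.Counter bundles the counting and the ranking into a couple of lines; a compact alternative sketch for the same word_count.txt file:
```
from collections import Counter

with open("../data/word_count.txt", "r") as f:
    counts = Counter(word for line in f for word in line.split())

for word, freq in counts.most_common(10):
    print('The word {0:^15} appears {1:5} times'.format('"%s"' % word, freq))
```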
End of explanation
chains = {}
with open("../data/5ire.fasta", "r") as f:
for line in f:
if line.startswith('>'):
# We have a header
key = line.split('|')[0].split(':')[1]
chains[key] = ""
else:
chains[key] += line.strip()
print('Chain C:')
print(chains['C'])
print()
subsequence = 'LDFSDL'
for key, sequence in chains.items():
if subsequence in sequence:
print("Chain {0} contains subsequence {1}".format(key, subsequence))
Explanation: Reading nucleotide sequences
FASTA is a file format for storing nucleotide or protein sequences. A sequence consists of a header line, starting with >, followed by one or more lines containing the amino acids of the sequence given as single-letter codes:
```
5IRE:A|PDBID|CHAIN|SEQUENCE
IRCIGVSNRDFVEGMSGGTWVDVVLEHGGCVTVMAQDKPTVDIELVTTTVSNMAEVRSYCYEASISDMASDSRCPTQGEA
YLDKQSDTQYVCKRTLVDRGWGNGCGLFGKGSLVTCAKFACSKKMTGKSIQPENLEYRIMLSVHGSQHSGMIVNDTGHET
...
```
The file ../data/5ire.fasta contains sequences for multiple chains of Zika virus. Read from the file the sequence of chain C (the chain ids are given in the header, i.e. the chain above is A).
Find out which chains contain the subsequence LDFSDL.
Hint: as a sequence is given on multiple lines, you should combine all the lines of a sequence into a single string. The string method .strip(), which removes whitespace (including the trailing newline) from both ends of a string, is useful here.
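If the position of a match is also of interest, str.find gives the 0-based index of the first occurrence (and -1 when the subsequence is absent). An optional addition that reuses the chains dictionary built above:
```
# report where the subsequence starts in each matching chain
for key, sequence in chains.items():
    pos = sequence.find(subsequence)
    if pos != -1:
        print("Chain {0}: subsequence starts at index {1}".format(key, pos))
```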
End of explanation |
10,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Effect Size
Examples and exercises for a tutorial on statistical inference.
Copyright 2015 Allen Downey
License
Step1: To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
Step2: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
Step3: Here's what the two distributions look like.
Step4: Let's assume for now that those are the true distributions for the population. Of course, in real life we never observe the true population distribution. We generally have to work with a random sample.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
Step5: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
Step6: The sample mean is close to the population mean, but not exact, as expected.
Step7: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means
Step8: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems
Step9: But a problem with relative differences is that you have to choose which mean to express them relative to.
Step10: An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means
Step11: A better, but slightly more complicated threshold is the place where the PDFs cross.
Step12: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold
Step13: And how many women are above it
Step14: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
Step15: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex
Step16: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
Step18: Overlap (or misclassification rate) and "probability of superiority" have two good properties
Step19: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
Step21: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
Step23: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
Step24: Here's an example that demonstrates the function
Step25: And an interactive widget you can use to visualize what different values of $d$ mean | Python Code:
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from IPython.html.widgets import interact, fixed
from IPython.html import widgets
# seed the random number generator so we all get the same results
numpy.random.seed(17)
# some nice colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
Explanation: Effect Size
Examples and exercises for a tutorial on statistical inference.
Copyright 2015 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
mu1, sig1 = 178, 7.7
male_height = scipy.stats.norm(mu1, sig1)
mu2, sig2 = 163, 7.3
female_height = scipy.stats.norm(mu2, sig2)
Explanation: To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
End of explanation
def eval_pdf(rv, num=4):
mean, std = rv.mean(), rv.std()
xs = numpy.linspace(mean - num*std, mean + num*std, 100)
ys = rv.pdf(xs)
return xs, ys
Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
End of explanation
xs, ys = eval_pdf(male_height)
pyplot.plot(xs, ys, label='male', linewidth=4, color=COLOR2)
xs, ys = eval_pdf(female_height)
pyplot.plot(xs, ys, label='female', linewidth=4, color=COLOR3)
pyplot.xlabel('height (cm)')
None
Explanation: Here's what the two distributions look like.
End of explanation
male_sample = male_height.rvs(1000)
female_sample = female_height.rvs(1000)
Explanation: Let's assume for now that those are the true distributions for the population. Of course, in real life we never observe the true population distribution. We generally have to work with a random sample.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
End of explanation
mean1, std1 = male_sample.mean(), male_sample.std()
mean1, std1
Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
End of explanation
mean2, std2 = female_sample.mean(), female_sample.std()
mean2, std2
Explanation: The sample mean is close to the population mean, but not exact, as expected.
End of explanation
difference_in_means = male_sample.mean() - female_sample.mean()
difference_in_means # in cm
Explanation: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means:
End of explanation
relative_difference = difference_in_means / male_sample.mean()
relative_difference * 100 # percent
Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems:
Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not.
The magnitude of the difference depends on the units of measure, making it hard to compare across different studies.
There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean.
End of explanation
relative_difference = difference_in_means / female_sample.mean()
relative_difference * 100 # percent
Explanation: But a problem with relative differences is that you have to choose which mean to express them relative to.
End of explanation
simple_thresh = (mean1 + mean2) / 2
simple_thresh
Explanation: An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means:
End of explanation
thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2)
thresh
Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross.
End of explanation
male_below_thresh = sum(male_sample < thresh)
male_below_thresh
Explanation: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold:
End of explanation
female_above_thresh = sum(female_sample > thresh)
female_above_thresh
Explanation: And how many women are above it:
End of explanation
overlap = male_below_thresh / len(male_sample) + female_above_thresh / len(female_sample)
overlap
Explanation: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
End of explanation
misclassification_rate = overlap / 2
misclassification_rate
Explanation: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex:
End of explanation
sum(x > y for x, y in zip(male_sample, female_sample)) / len(male_sample)
Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
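Because both populations are normal here, this sample estimate can be checked against a closed form: the difference of two independent normals is normal, so P(male > female) = Phi((mu1 - mu2) / sqrt(sig1**2 + sig2**2)). A small check, added as a supplement to the original tutorial:
```
# analytic probability of superiority for two independent normals
d_mean = mu1 - mu2
d_std = numpy.sqrt(sig1**2 + sig2**2)
print(scipy.stats.norm.cdf(d_mean / d_std))  # about 0.92, close to the sample estimate
```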
End of explanation
def CohenEffectSize(group1, group2):
    """Compute Cohen's d.
    group1: Series or NumPy array
    group2: Series or NumPy array
    returns: float
    """
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties:
As probabilities, they don't depend on units of measure, so they are comparable between studies.
They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes.
There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's a function that computes it:
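As a quick sanity check (an addition, not from the original notebook), two unit-variance samples whose means differ by exactly 0.5 should give a Cohen's d close to 0.5:
```
# synthetic data with a known effect size
a = numpy.random.normal(0.0, 1.0, size=10000)
b = numpy.random.normal(0.5, 1.0, size=10000)
print(CohenEffectSize(b, a))  # should be close to 0.5
```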
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
End of explanation
def overlap_superiority(control, treatment, n=1000):
    """Estimates overlap and superiority based on a sample.
    control: scipy.stats rv object
    treatment: scipy.stats rv object
    n: sample size
    """
control_sample = control.rvs(n)
treatment_sample = treatment.rvs(n)
thresh = (control.mean() + treatment.mean()) / 2
control_above = sum(control_sample > thresh)
treatment_below = sum(treatment_sample < thresh)
overlap = (control_above + treatment_below) / n
superiority = sum(x > y for x, y in zip(treatment_sample, control_sample)) / n
return overlap, superiority
Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
End of explanation
def plot_pdfs(cohen_d=2):
    """Plot PDFs for distributions that differ by some number of stds.
    cohen_d: number of standard deviations between the means
    """
control = scipy.stats.norm(0, 1)
treatment = scipy.stats.norm(cohen_d, 1)
xs, ys = eval_pdf(control)
pyplot.fill_between(xs, ys, label='control', color=COLOR3, alpha=0.7)
xs, ys = eval_pdf(treatment)
pyplot.fill_between(xs, ys, label='treatment', color=COLOR2, alpha=0.7)
o, s = overlap_superiority(control, treatment)
print('overlap', o)
print('superiority', s)
Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
End of explanation
plot_pdfs(2)
Explanation: Here's an example that demonstrates the function:
End of explanation
slider = widgets.FloatSliderWidget(min=0, max=4, value=2)
interact(plot_pdfs, cohen_d=slider)
None
Explanation: And an interactive widget you can use to visualize what different values of $d$ mean:
End of explanation |
10,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Simple ConvNet
Step2: Reasons to prefer new implementation
Step3: Super-resolution
Step4: Here is the difference
Step5: Restyling Gram matrix for style transfer
Step6: It would be great to use just 'b c1 h w,b c2 h w->b c1 c2', but einsum supports only one-letter axes
Step9: Recurrent model
All we did here is just made information about shapes explicit to skip deciphering
Step10: Channel shuffle (from shufflenet)
Step11: While progress is obvious, this is not the limit. As you'll see below, we don't even need to write these couple of lines.
Step14: Shufflenet
Step15: Rewriting the code helped to identify
Step16: Simplifying ResNet
Step17: Changes
Step18: Improving RNN language modelling
Step19: original code misbehaves for non-bidirectional models
... and fails when bidirectional = False, and there is only one layer
modification of the code shows both how hidden is structured and how it is modified
Writing FastText faster
Step20: Some comments on new code
Step21: Original code misuses Conv2d, while Conv1d is the right choice
Fixed code can work with any number of filter_sizes (and won't fail)
First line in new code does nothing, but was added for simplicity
Step22: Highway convolutions
Highway convolutions are common in TTS systems. Code below makes splitting a bit more explicit.
Splitting policy may eventually turn out to be important if input had previously groups over channel axes (group convolutions or bidirectional LSTMs/GRUs)
Same applies to GLU and gated units in general
Step24: Tacotron's CBHG module
Step25: There is still a large room for improvements, but in this example only forward function was changed
Simple attention
Good news
Step26: Transformer's attention needs more attention
Step27: Benefits of new implementation
we have one module, not two
now code does not fail for None mask
the amount of caveats in the original code that we removed is huge.
Try erasing comments and deciphering what happens there
Step31: Self-attention GANs
SAGANs are currently SotA for image generation, and can be simplified using same tricks.
<!-- If torch.einsum supported non-one letter axes, we could improve this solution further. -->
Step32: Improving time sequence prediction
Step33: Transforming spatial transformer network (STN)
Step34: new code will give reasonable errors when passed image size is different from expected
if batch size is divisible by 18, whatever you input in the old code, it'll fail no sooner than affine_grid.
Improving GLOW
That's a good old depth-to-space written manually!
Since GLOW is revertible, it will frequently rely on rearrange-like operations.
Step35: term squeeze isn't very helpful
Step36: We changed and fixed a lot | Python Code:
#right
# start from importing some stuff
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import math
from einops import rearrange, reduce, asnumpy, parse_shape
from einops.layers.torch import Rearrange, Reduce
def initialize(model):
for p in model.parameters():
p.data[:] = torch.from_numpy(np.random.RandomState(sum(p.shape)).randn(*p.shape))
return model
Explanation: <div align="center">
<a href="https://github.com/arogozhnikov/einops">
<img src="http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png" alt="einops package logo" width="150" height="150" style='padding: 50px 50px 25px;' />
</a>
<div>
<a href="https://github.com/arogozhnikov/einops">[github]</a>,
tutorials
<a href="https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb">[1]</a> and
<a href="https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb">[2]</a>
<br />
<br />
</div>
</div>
Writing better code with pytorch and einops
<br /><br />
Rewriting building blocks of deep learning
Now let's get to examples from the real world.
These code fragments are taken from official tutorials and popular repositories.
Learn how to improve code and how einops can help you.
Left: as it was, Right: improved version
End of explanation
#left
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
conv_net_old = Net()
#right
conv_net_new = nn.Sequential(
nn.Conv2d(1, 10, kernel_size=5),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(),
nn.Conv2d(10, 20, kernel_size=5),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(),
nn.Dropout2d(),
Rearrange('b c h w -> b (c h w)'),
nn.Linear(320, 50),
nn.ReLU(),
nn.Dropout(),
nn.Linear(50, 10),
nn.LogSoftmax(dim=1)
)
Explanation: Simple ConvNet
End of explanation
conv_net_old(torch.zeros([16, 1, 20, 20])).shape
# conv_net_new(torch.zeros([16, 1, 20, 20])).shape
Explanation: Reasons to prefer new implementation:
in the original code (to the left), if the input size is changed and the batch size is divisible by 16 (which is usually so), we'll silently get something senseless after reshaping
new code will explicitly raise an error in this case
with the new version we can't forget the self.training flag for dropout: the nn.Dropout module handles it for us
code is straightforward to read and analyze
sequential makes printing / saving / passing trivial. And no custom class definition is needed in your code to load the model (which also has a number of benefits)
don't need logsoftmax? Now you can use conv_net_new[:-1] (see the sketch below). One more reason to prefer nn.Sequential
... and we could also add inplace for ReLU
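A minimal sketch of the slicing point above; it assumes a 28x28 input so the shapes work out, and relies on nn.Sequential supporting slicing in recent PyTorch versions:
```
# drop the final LogSoftmax by slicing the nn.Sequential
logits_model = conv_net_new[:-1]
print(logits_model(torch.zeros([16, 1, 28, 28])).shape)  # torch.Size([16, 10])
```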
End of explanation
#left
class SuperResolutionNetOld(nn.Module):
def __init__(self, upscale_factor):
super(SuperResolutionNetOld, self).__init__()
self.relu = nn.ReLU()
self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))
self.pixel_shuffle = nn.PixelShuffle(upscale_factor)
def forward(self, x):
x = self.relu(self.conv1(x))
x = self.relu(self.conv2(x))
x = self.relu(self.conv3(x))
x = self.pixel_shuffle(self.conv4(x))
return x
#right
def SuperResolutionNetNew(upscale_factor):
return nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(64, 32, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(32, upscale_factor ** 2, kernel_size=3, padding=1),
Rearrange('b (h2 w2) h w -> b (h h2) (w w2)', h2=upscale_factor, w2=upscale_factor),
)
Explanation: Super-resolution
<!-- minified https://github.com/pytorch/examples/tree/master/super_resolution, without initialization -->
End of explanation
model1 = initialize(SuperResolutionNetOld(upscale_factor=3))
model2 = initialize(SuperResolutionNetNew(upscale_factor=3))
assert torch.allclose(model1(torch.zeros(1, 1, 30, 30)), model2(torch.zeros(1, 1, 30, 30))[None])
## that's how this code was meant to be used
# from PIL import Image
# img = Image.open(opt.input_image).convert('YCbCr')
# y, cb, cr = img.split()
# model = torch.load(opt.model)
# img_to_tensor = ToTensor()
# input = img_to_tensor(y).view(1, -1, y.size[1], y.size[0])
# if opt.cuda:
# model = model.cuda()
# input = input.cuda()
# out = model(input)
# out = out.cpu()
# out_img_y = out[0].detach().numpy()
# out_img_y *= 255.0
# out_img_y = out_img_y.clip(0, 255)
# out_img_y = Image.fromarray(np.uint8(out_img_y[0]), mode='L')
# out_img_cb = cb.resize(out_img_y.size, Image.BICUBIC)
# out_img_cr = cr.resize(out_img_y.size, Image.BICUBIC)
# out_img = Image.merge('YCbCr', [out_img_y, out_img_cb, out_img_cr]).convert('RGB')
## Benefits
# - no need to remember the order of components in PIL.Image.size (as you see, it is actually different)
# - code explicitly shows shapes passed in and out
# - normalization to [0, 1] range and back is also explicit (in the original code one has to remember that division by 255 is done by ToTensor)
input_image = '../../logo/einops_logo_350x350.png'
from PIL import Image
import numpy as np
from torchvision.transforms import ToTensor
model = SuperResolutionNetOld(upscale_factor=2)
img = Image.open(input_image).convert('YCbCr')
y, cb, cr = img.split()
img_to_tensor = ToTensor()
input = img_to_tensor(y).view(1, -1, y.size[1], y.size[0])
out = model(input)
out_img_y = out[0].detach().numpy()
out_img_y = np.clip(out_img_y[0] * 255, 0, 255)
model = SuperResolutionNetNew(upscale_factor=2)
img = Image.open(input_image).convert('YCbCr')
y, cb, cr = img.split()
# TODO numpy.asarray
y = torch.from_numpy(np.array(y, dtype='float32') / 255)
out = model(rearrange(y, 'h w -> () () h w'))
out_img_y = asnumpy(rearrange(out, '() h w -> h w'))
out_img_y = np.clip(out_img_y * 255, 0, 255)
Explanation: Here is the difference:
no need for the special pixel_shuffle instruction (and the result is transferable between frameworks)
output doesn't contain a fake axis (and we could do the same for the input)
inplace ReLU is used now; for high-resolution pictures that becomes critical and saves a lot of memory
and all the benefits of nn.Sequential again
End of explanation
#left
def gram_matrix_old(y):
(b, ch, h, w) = y.size()
features = y.view(b, ch, w * h)
features_t = features.transpose(1, 2)
gram = features.bmm(features_t) / (ch * h * w)
return gram
#right
def gram_matrix_new(y):
b, ch, h, w = y.shape
return torch.einsum('bchw,bdhw->bcd', [y, y]) / (h * w)
Explanation: Restyling Gram matrix for style transfer
<!-- from https://github.com/pytorch/examples/blob/29c2ed8ca6dc36fc78a3e74a5908615619987863/fast_neural_style/neural_style/utils.py#L21-L26 -->
Original code is already good - first line shows what kind of input is expected
einsum operation should be read like:
for each batch and for each pair of channels, we sum over h and w.
I've also changed the normalization, because that's how the Gram matrix is defined; otherwise we should call it a normalized Gram matrix or the like
End of explanation
x = torch.randn([32, 128, 40, 40])
%timeit gram_matrix_old(x).sum()
%timeit gram_matrix_new(x).sum()
assert torch.allclose(gram_matrix_old(x), gram_matrix_new(x) / 128)
# x = x.to('cuda')
# %timeit -n100 gram_matrix_old(x).sum(); torch.cuda.synchronize()
# %timeit -n100 gram_matrix_new(x).sum(); torch.cuda.synchronize()
Explanation: It would be great to use just 'b c1 h w,b c2 h w->b c1 c2', but einsum supports only one-letter axes
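Newer einops versions (0.5 and later) ship their own einsum that accepts multi-letter axis names, so the pattern from the sentence above can be written literally. A sketch, assuming such a version is installed:
```
# requires einops >= 0.5
from einops import einsum

def gram_matrix_einops(y):
    b, ch, h, w = y.shape
    return einsum(y, y, 'b c1 h w, b c2 h w -> b c1 c2') / (h * w)
```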
End of explanation
#left
class RNNModelOld(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""
def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5):
        super().__init__()
self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(ntoken, ninp)
self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)
self.decoder = nn.Linear(nhid, ntoken)
def forward(self, input, hidden):
emb = self.drop(self.encoder(input))
output, hidden = self.rnn(emb, hidden)
output = self.drop(output)
decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden
#right
class RNNModelNew(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""
def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5):
        super().__init__()
self.drop = nn.Dropout(p=dropout)
self.encoder = nn.Embedding(ntoken, ninp)
self.rnn = nn.LSTM(ninp, nhid, nlayers, dropout=dropout)
self.decoder = nn.Linear(nhid, ntoken)
def forward(self, input, hidden):
t, b = input.shape
emb = self.drop(self.encoder(input))
output, hidden = self.rnn(emb, hidden)
output = rearrange(self.drop(output), 't b nhid -> (t b) nhid')
decoded = rearrange(self.decoder(output), '(t b) token -> t b token', t=t, b=b)
return decoded, hidden
Explanation: Recurrent model
All we did here is just made information about shapes explicit to skip deciphering
<!-- simplified version of https://github.com/pytorch/examples/blob/master/word_language_model/model.py -->
End of explanation
#left
def channel_shuffle_old(x, groups):
batchsize, num_channels, height, width = x.data.size()
channels_per_group = num_channels // groups
# reshape
x = x.view(batchsize, groups,
channels_per_group, height, width)
# transpose
# - contiguous() required if transpose() is used before view().
# See https://github.com/pytorch/pytorch/issues/764
x = torch.transpose(x, 1, 2).contiguous()
# flatten
x = x.view(batchsize, -1, height, width)
return x
#right
def channel_shuffle_new(x, groups):
return rearrange(x, 'b (c1 c2) h w -> b (c2 c1) h w', c1=groups)
Explanation: Channel shuffle (from shufflenet)
<!-- from https://github.com/jaxony/ShuffleNet/blob/master/model.py -->
End of explanation
x = torch.zeros([32, 64, 100, 100])
%timeit -n100 channel_shuffle_old(x, 8); torch.cuda.synchronize()
%timeit -n100 channel_shuffle_new(x, 8); torch.cuda.synchronize()
Explanation: While progress is obvious, this is not the limit. As you'll see below, we don't even need to write these couple of lines.
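That is because the shuffle can be dropped straight into a model as a layer; the Rearrange module imported at the top does the job without any standalone function:
```
# channel shuffle as a layer
shuffle_layer = Rearrange('b (c1 c2) h w -> b (c2 c1) h w', c1=8)
assert torch.allclose(shuffle_layer(x), channel_shuffle_new(x, 8))
```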
End of explanation
def conv3x3(in_channels, out_channels, stride=1,
padding=1, bias=True, groups=1):
    """3x3 convolution with padding"""
return nn.Conv2d(
in_channels,
out_channels,
kernel_size=3,
stride=stride,
padding=padding,
bias=bias,
groups=groups)
def conv1x1(in_channels, out_channels, groups=1):
    """1x1 convolution with padding
    - Normal pointwise convolution when groups == 1
    - Grouped pointwise convolution when groups > 1
    """
return nn.Conv2d(
in_channels,
out_channels,
kernel_size=1,
groups=groups,
stride=1)
#left
from collections import OrderedDict
def channel_shuffle(x, groups):
batchsize, num_channels, height, width = x.data.size()
channels_per_group = num_channels // groups
# reshape
x = x.view(batchsize, groups,
channels_per_group, height, width)
# transpose
# - contiguous() required if transpose() is used before view().
# See https://github.com/pytorch/pytorch/issues/764
x = torch.transpose(x, 1, 2).contiguous()
# flatten
x = x.view(batchsize, -1, height, width)
return x
class ShuffleUnitOld(nn.Module):
def __init__(self, in_channels, out_channels, groups=3,
grouped_conv=True, combine='add'):
super(ShuffleUnitOld, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.grouped_conv = grouped_conv
self.combine = combine
self.groups = groups
self.bottleneck_channels = self.out_channels // 4
# define the type of ShuffleUnit
if self.combine == 'add':
# ShuffleUnit Figure 2b
self.depthwise_stride = 1
self._combine_func = self._add
elif self.combine == 'concat':
# ShuffleUnit Figure 2c
self.depthwise_stride = 2
self._combine_func = self._concat
# ensure output of concat has the same channels as
# original output channels.
self.out_channels -= self.in_channels
else:
raise ValueError("Cannot combine tensors with \"{}\"" \
"Only \"add\" and \"concat\" are" \
"supported".format(self.combine))
# Use a 1x1 grouped or non-grouped convolution to reduce input channels
# to bottleneck channels, as in a ResNet bottleneck module.
# NOTE: Do not use group convolution for the first conv1x1 in Stage 2.
self.first_1x1_groups = self.groups if grouped_conv else 1
self.g_conv_1x1_compress = self._make_grouped_conv1x1(
self.in_channels,
self.bottleneck_channels,
self.first_1x1_groups,
batch_norm=True,
relu=True
)
# 3x3 depthwise convolution followed by batch normalization
self.depthwise_conv3x3 = conv3x3(
self.bottleneck_channels, self.bottleneck_channels,
stride=self.depthwise_stride, groups=self.bottleneck_channels)
self.bn_after_depthwise = nn.BatchNorm2d(self.bottleneck_channels)
# Use 1x1 grouped convolution to expand from
# bottleneck_channels to out_channels
self.g_conv_1x1_expand = self._make_grouped_conv1x1(
self.bottleneck_channels,
self.out_channels,
self.groups,
batch_norm=True,
relu=False
)
@staticmethod
def _add(x, out):
# residual connection
return x + out
@staticmethod
def _concat(x, out):
# concatenate along channel axis
return torch.cat((x, out), 1)
def _make_grouped_conv1x1(self, in_channels, out_channels, groups,
batch_norm=True, relu=False):
modules = OrderedDict()
conv = conv1x1(in_channels, out_channels, groups=groups)
modules['conv1x1'] = conv
if batch_norm:
modules['batch_norm'] = nn.BatchNorm2d(out_channels)
if relu:
modules['relu'] = nn.ReLU()
if len(modules) > 1:
return nn.Sequential(modules)
else:
return conv
def forward(self, x):
# save for combining later with output
residual = x
if self.combine == 'concat':
residual = F.avg_pool2d(residual, kernel_size=3,
stride=2, padding=1)
out = self.g_conv_1x1_compress(x)
out = channel_shuffle(out, self.groups)
out = self.depthwise_conv3x3(out)
out = self.bn_after_depthwise(out)
out = self.g_conv_1x1_expand(out)
out = self._combine_func(residual, out)
return F.relu(out)
#right
class ShuffleUnitNew(nn.Module):
def __init__(self, in_channels, out_channels, groups=3,
grouped_conv=True, combine='add'):
super().__init__()
first_1x1_groups = groups if grouped_conv else 1
bottleneck_channels = out_channels // 4
self.combine = combine
if combine == 'add':
# ShuffleUnit Figure 2b
self.left = Rearrange('...->...') # identity
depthwise_stride = 1
else:
# ShuffleUnit Figure 2c
self.left = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)
depthwise_stride = 2
# ensure output of concat has the same channels as original output channels.
out_channels -= in_channels
assert out_channels > 0
self.right = nn.Sequential(
# Use a 1x1 grouped or non-grouped convolution to reduce input channels
# to bottleneck channels, as in a ResNet bottleneck module.
conv1x1(in_channels, bottleneck_channels, groups=first_1x1_groups),
nn.BatchNorm2d(bottleneck_channels),
nn.ReLU(inplace=True),
# channel shuffle
Rearrange('b (c1 c2) h w -> b (c2 c1) h w', c1=groups),
# 3x3 depthwise convolution followed by batch
conv3x3(bottleneck_channels, bottleneck_channels,
stride=depthwise_stride, groups=bottleneck_channels),
nn.BatchNorm2d(bottleneck_channels),
# Use 1x1 grouped convolution to expand from
# bottleneck_channels to out_channels
conv1x1(bottleneck_channels, out_channels, groups=groups),
nn.BatchNorm2d(out_channels),
)
def forward(self, x):
if self.combine == 'add':
combined = self.left(x) + self.right(x)
else:
combined = torch.cat([self.left(x), self.right(x)], dim=1)
return F.relu(combined, inplace=True)
Explanation: Shufflenet
End of explanation
model1 = ShuffleUnitOld(32, 32, groups=4, grouped_conv=True, combine='add')
model2 = ShuffleUnitNew(32, 32, groups=4, grouped_conv=True, combine='add')
x = torch.randn(1, 32, 14, 14)
initialize(model1)
initialize(model2)
torch.allclose(model1(x), model2(x))
import pickle
dump1 = pickle.dumps(model1._combine_func)
dump2 = pickle.dumps(model2)
Explanation: Rewriting the code helped to identify:
There is no sense in doing reshuffling and not using groups in the first convolution
(indeed, in the paper it is not so). However, the result is an equivalent model.
It is also strange that the first convolution may be non-grouped, while the last convolution is always grouped
(and that is different from the paper)
Other comments:
An identity layer for pytorch is introduced here (see the nn.Identity sketch below)
The last thing left is to get rid of the conv1x1 and conv3x3 helpers in the code; they are no better than the standard nn.Conv2d
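On the identity-layer remark: PyTorch 1.1 and later provide nn.Identity, which can replace the Rearrange('...->...') trick in the 'add' branch. A sketch, added here as a supplement:
```
# built-in identity module instead of Rearrange('...->...')
left = nn.Identity()
assert torch.allclose(left(x), x)
```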
End of explanation
#left
class ResNetOld(nn.Module):
def __init__(self, block, layers, num_classes=1000):
self.inplanes = 64
super(ResNetOld, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AvgPool2d(7, stride=1)
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
#right
def make_layer(inplanes, planes, block, n_blocks, stride=1):
downsample = None
if stride != 1 or inplanes != planes * block.expansion:
# output size won't match input, so adjust residual
downsample = nn.Sequential(
nn.Conv2d(inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
return nn.Sequential(
block(inplanes, planes, stride, downsample),
*[block(planes * block.expansion, planes) for _ in range(1, n_blocks)]
)
def ResNetNew(block, layers, num_classes=1000):
e = block.expansion
resnet = nn.Sequential(
Rearrange('b c h w -> b c h w', c=3, h=224, w=224),
nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
make_layer(64, 64, block, layers[0], stride=1),
make_layer(64 * e, 128, block, layers[1], stride=2),
make_layer(128 * e, 256, block, layers[2], stride=2),
make_layer(256 * e, 512, block, layers[3], stride=2),
# combined AvgPool and view in one averaging operation
Reduce('b c h w -> b c', 'mean'),
nn.Linear(512 * e, num_classes),
)
# initialization
for m in resnet.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
return resnet
Explanation: Simplifying ResNet
End of explanation
from torchvision.models.resnet import BasicBlock, Bottleneck, ResNet
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
model_old = ResNetOld(BasicBlock, layers=[2, 2, 2, 3])
model_new = ResNetNew(BasicBlock, layers=[2, 2, 2, 3])
initialize(model_old)
initialize(model_new)
assert torch.allclose(model_old(x), model_new(x), atol=1e-3)
# with torch.no_grad():
# x = torch.randn([2, 512, 7, 7])
# torch.allclose(nn.AvgPool2d(7)(x), reduce(x, 'b c h w -> b c', 'mean'), atol=1e-8)
Explanation: Changes:
explicit check for the input shape (demonstrated in the sketch below)
no views and a simple sequential structure; the output is just nn.Sequential, so it can always be saved/passed/etc.
no need for AvgPool and additional views; this place is much clearer now
make_layer doesn't use internal state (that was quite a faulty place)
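A quick demonstration of the shape check (a supplementary sketch): feeding an image of the wrong size now fails immediately at the leading Rearrange with a readable message, instead of deep inside the network.
```
try:
    model_new(torch.randn(2, 3, 160, 160))  # expected 224x224
except Exception as e:
    print(type(e).__name__, e)
```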
End of explanation
#left
class RNNOld(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers,
bidirectional=bidirectional, dropout=dropout)
self.fc = nn.Linear(hidden_dim*2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
#x = [sent len, batch size]
embedded = self.dropout(self.embedding(x))
#embedded = [sent len, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded)
#output = [sent len, batch size, hid dim * num directions]
#hidden = [num layers * num directions, batch size, hid dim]
#cell = [num layers * num directions, batch size, hid dim]
#concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
#and apply dropout
hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1))
#hidden = [batch size, hid dim * num directions]
return self.fc(hidden.squeeze(0))
#right
class RNNNew(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers,
bidirectional=bidirectional, dropout=dropout)
self.dropout = nn.Dropout(dropout)
self.directions = 2 if bidirectional else 1
self.fc = nn.Linear(hidden_dim * self.directions, output_dim)
def forward(self, x):
#x = [sent len, batch size]
embedded = self.dropout(self.embedding(x))
#embedded = [sent len, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded)
hidden = rearrange(hidden, '(layer dir) b c -> layer b (dir c)',
dir=self.directions)
# take the final layer's hidden
return self.fc(self.dropout(hidden[-1]))
model_old = initialize(RNNOld(10, 10, 10, output_dim=15, n_layers=2, bidirectional=True, dropout=0.1)).eval()
model_new = initialize(RNNNew(10, 10, 10, output_dim=15, n_layers=2, bidirectional=True, dropout=0.1)).eval()
x = torch.randint(0, 10, size=[23, 10]).long()
assert torch.allclose(model_old(x), model_new(x))
# this code fails
# model_old = initialize(RNNOld(10, 10, 10, output_dim=15, n_layers=1, bidirectional=False, dropout=0.1)).eval()
# model_old(x).shape
Explanation: Improving RNN language modelling
End of explanation
#left
class FastTextOld(nn.Module):
def __init__(self, vocab_size, embedding_dim, output_dim):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.fc = nn.Linear(embedding_dim, output_dim)
def forward(self, x):
#x = [sent len, batch size]
embedded = self.embedding(x)
#embedded = [sent len, batch size, emb dim]
embedded = embedded.permute(1, 0, 2)
#embedded = [batch size, sent len, emb dim]
pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)
#pooled = [batch size, embedding_dim]
return self.fc(pooled)
#right
def FastTextNew(vocab_size, embedding_dim, output_dim):
return nn.Sequential(
Rearrange('t b -> t b'),
nn.Embedding(vocab_size, embedding_dim),
Reduce('t b c -> b c', 'mean'),
nn.Linear(embedding_dim, output_dim),
Rearrange('b c -> b c'),
)
Explanation: original code misbehaves for non-bidirectional models
... and fails when bidirectional = False, and there is only one layer
modification of the code shows both how hidden is structured and how it is modified
Writing FastText faster
<!-- from # https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/3%20-%20Faster%20Sentiment%20Analysis.ipynb -->
End of explanation
#left
class CNNOld(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.conv_0 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[0],embedding_dim))
self.conv_1 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[1],embedding_dim))
self.conv_2 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[2],embedding_dim))
self.fc = nn.Linear(len(filter_sizes)*n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
#x = [sent len, batch size]
x = x.permute(1, 0)
#x = [batch size, sent len]
embedded = self.embedding(x)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved_0 = F.relu(self.conv_0(embedded).squeeze(3))
conved_1 = F.relu(self.conv_1(embedded).squeeze(3))
conved_2 = F.relu(self.conv_2(embedded).squeeze(3))
#conv_n = [batch size, n_filters, sent len - filter_sizes[n]]
pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2)
pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2)
pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2)
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim=1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
#right
class CNNNew(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.convs = nn.ModuleList([
nn.Conv1d(embedding_dim, n_filters, kernel_size=size) for size in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
x = rearrange(x, 't b -> t b')
emb = rearrange(self.embedding(x), 't b c -> b c t')
pooled = [reduce(conv(emb), 'b c t -> b c', 'max') for conv in self.convs]
concatenated = rearrange(pooled, 'filter b c -> b (filter c)')
return self.fc(self.dropout(F.relu(concatenated)))
Explanation: Some comments on new code:
first and last operations do nothing and can be removed
but were added to explicitly show expected input and output
this also gives you the flexibility of changing the interface by editing a single line. Should you need to accept inputs as (batch, time),
you just change the first line to Rearrange('b t -> t b'),
CNNs for text classification
End of explanation
# old_model = initialize(CNNOld(32, 32, 32, [1, 2, 4], 32, dropout=0.1)).eval()
# new_model = initialize(CNNNew(32, 32, 32, [1, 2, 4], 32, dropout=0.1)).eval()
# x = torch.zeros([10, 20]).long()
# assert torch.allclose(old_model(x), new_model(x), atol=1e-3)
Explanation: Original code misuses Conv2d, while Conv1d is the right choice
Fixed code can work with any number of filter_sizes (and won't fail)
First line in new code does nothing, but was added for simplicity
End of explanation
#left
class HighwayConv1dOld(nn.Conv1d):
def forward(self, inputs):
L = super(HighwayConv1dOld, self).forward(inputs)
H1, H2 = torch.chunk(L, 2, 1) # chunk at the feature dim
torch.sigmoid_(H1)
return H1 * H2 + (1.0 - H1) * inputs
#right
class HighwayConv1dNew(nn.Conv1d):
def forward(self, inputs):
L = super().forward(inputs)
H1, H2 = rearrange(L, 'b (split c) t -> split b c t', split=2)
torch.sigmoid_(H1)
return H1 * H2 + (1.0 - H1) * inputs
hc1 = HighwayConv1dOld(10, 20, kernel_size=3, padding=1)
hc2 = HighwayConv1dNew(10, 20, kernel_size=3, padding=1)
initialize(hc1)
initialize(hc2)
fw1 = hc1(torch.zeros(1, 10, 100))
fw2 = hc2(torch.zeros(1, 10, 100))
assert torch.allclose(fw1, fw2)
Explanation: Highway convolutions
Highway convolutions are common in TTS systems. Code below makes splitting a bit more explicit.
Splitting policy may eventually turn out to be important if input had previously groups over channel axes (group convolutions or bidirectional LSTMs/GRUs)
Same applies to GLU and gated units in general
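Following the GLU remark, the same explicit split covers a gated linear unit as well; a minimal sketch, not taken from the original text:
```
# GLU with a named split over the channel axis
def glu(x):
    a, b = rearrange(x, 'b (split c) t -> split b c t', split=2)
    return a * torch.sigmoid(b)

assert glu(torch.randn(4, 20, 7)).shape == (4, 10, 7)
```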
End of explanation
#right
class CBHG_Old(nn.Module):
    """CBHG module: a recurrent neural network composed of:
    - 1-d convolution banks
    - Highway networks + residual connections
    - Bidirectional gated recurrent units
    """
def __init__(self, in_dim, K=16, projections=[128, 128]):
        super().__init__()
self.in_dim = in_dim
self.relu = nn.ReLU()
self.conv1d_banks = nn.ModuleList(
[BatchNormConv1d(in_dim, in_dim, kernel_size=k, stride=1,
padding=k // 2, activation=self.relu)
for k in range(1, K + 1)])
self.max_pool1d = nn.MaxPool1d(kernel_size=2, stride=1, padding=1)
in_sizes = [K * in_dim] + projections[:-1]
activations = [self.relu] * (len(projections) - 1) + [None]
self.conv1d_projections = nn.ModuleList(
[BatchNormConv1d(in_size, out_size, kernel_size=3, stride=1,
padding=1, activation=ac)
for (in_size, out_size, ac) in zip(
in_sizes, projections, activations)])
self.pre_highway = nn.Linear(projections[-1], in_dim, bias=False)
self.highways = nn.ModuleList(
[Highway(in_dim, in_dim) for _ in range(4)])
self.gru = nn.GRU(
in_dim, in_dim, 1, batch_first=True, bidirectional=True)
#left
def forward_old(self, inputs):
# (B, T_in, in_dim)
x = inputs
# Needed to perform conv1d on time-axis
# (B, in_dim, T_in)
if x.size(-1) == self.in_dim:
x = x.transpose(1, 2)
T = x.size(-1)
# (B, in_dim*K, T_in)
# Concat conv1d bank outputs
x = torch.cat([conv1d(x)[:, :, :T] for conv1d in self.conv1d_banks], dim=1)
assert x.size(1) == self.in_dim * len(self.conv1d_banks)
x = self.max_pool1d(x)[:, :, :T]
for conv1d in self.conv1d_projections:
x = conv1d(x)
# (B, T_in, in_dim)
# Back to the original shape
x = x.transpose(1, 2)
if x.size(-1) != self.in_dim:
x = self.pre_highway(x)
# Residual connection
x += inputs
for highway in self.highways:
x = highway(x)
# (B, T_in, in_dim*2)
outputs, _ = self.gru(x)
return outputs
#right
def forward_new(self, inputs, input_lengths=None):
x = rearrange(inputs, 'b t c -> b c t')
_, _, T = x.shape
# Concat conv1d bank outputs
x = rearrange([conv1d(x)[:, :, :T] for conv1d in self.conv1d_banks],
'bank b c t -> b (bank c) t', c=self.in_dim)
x = self.max_pool1d(x)[:, :, :T]
for conv1d in self.conv1d_projections:
x = conv1d(x)
x = rearrange(x, 'b c t -> b t c')
if x.size(-1) != self.in_dim:
x = self.pre_highway(x)
# Residual connection
x += inputs
for highway in self.highways:
x = highway(x)
# (B, T_in, in_dim*2)
outputs, _ = self.gru(self.highways(x))
return outputs
Explanation: Tacotron's CBHG module
<!-- https://github.com/r9y9/tacotron_pytorch/blob/master/tacotron_pytorch/tacotron.py -->
End of explanation
#left
class Attention(nn.Module):
def __init__(self):
super(Attention, self).__init__()
def forward(self, K, V, Q):
A = torch.bmm(K.transpose(1,2), Q) / np.sqrt(Q.shape[1])
A = F.softmax(A, 1)
R = torch.bmm(V, A)
return torch.cat((R, Q), dim=1)
#right
def attention(K, V, Q):
_, n_channels, _ = K.shape
A = torch.einsum('bct,bcl->btl', [K, Q])
A = F.softmax(A * n_channels ** (-0.5), 1)
R = torch.einsum('bct,btl->bcl', [V, A])
return torch.cat((R, Q), dim=1)
args = dict(
K=torch.zeros(32, 128, 40).cuda(),
V=torch.zeros(32, 128, 40).cuda(),
Q=torch.zeros(32, 128, 30).cuda(),
)
%timeit -n100 result_old = Attention()(**args); torch.cuda.synchronize()
%timeit -n100 result_new = attention(**args); torch.cuda.synchronize()
result_old = Attention()(**args); torch.cuda.synchronize()
result_new = attention(**args); torch.cuda.synchronize()
assert torch.allclose(result_old, result_new)
Explanation: There is still plenty of room for improvement, but in this example only the forward function was changed
Simple attention
Good news: there is no longer any need to guess the order of dimensions, neither for inputs nor for outputs
End of explanation
#left
class ScaledDotProductAttention(nn.Module):
''' Scaled Dot-Product Attention '''
def __init__(self, temperature, attn_dropout=0.1):
super().__init__()
self.temperature = temperature
self.dropout = nn.Dropout(attn_dropout)
self.softmax = nn.Softmax(dim=2)
def forward(self, q, k, v, mask=None):
attn = torch.bmm(q, k.transpose(1, 2))
attn = attn / self.temperature
if mask is not None:
attn = attn.masked_fill(mask, -np.inf)
attn = self.softmax(attn)
attn = self.dropout(attn)
output = torch.bmm(attn, v)
return output, attn
class MultiHeadAttentionOld(nn.Module):
''' Multi-Head Attention module '''
def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
super().__init__()
self.n_head = n_head
self.d_k = d_k
self.d_v = d_v
self.w_qs = nn.Linear(d_model, n_head * d_k)
self.w_ks = nn.Linear(d_model, n_head * d_k)
self.w_vs = nn.Linear(d_model, n_head * d_v)
nn.init.normal_(self.w_qs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))
nn.init.normal_(self.w_ks.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))
nn.init.normal_(self.w_vs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_v)))
self.attention = ScaledDotProductAttention(temperature=np.power(d_k, 0.5))
self.layer_norm = nn.LayerNorm(d_model)
self.fc = nn.Linear(n_head * d_v, d_model)
nn.init.xavier_normal_(self.fc.weight)
self.dropout = nn.Dropout(dropout)
def forward(self, q, k, v, mask=None):
d_k, d_v, n_head = self.d_k, self.d_v, self.n_head
sz_b, len_q, _ = q.size()
sz_b, len_k, _ = k.size()
sz_b, len_v, _ = v.size()
residual = q
q = self.w_qs(q).view(sz_b, len_q, n_head, d_k)
k = self.w_ks(k).view(sz_b, len_k, n_head, d_k)
v = self.w_vs(v).view(sz_b, len_v, n_head, d_v)
q = q.permute(2, 0, 1, 3).contiguous().view(-1, len_q, d_k) # (n*b) x lq x dk
k = k.permute(2, 0, 1, 3).contiguous().view(-1, len_k, d_k) # (n*b) x lk x dk
v = v.permute(2, 0, 1, 3).contiguous().view(-1, len_v, d_v) # (n*b) x lv x dv
mask = mask.repeat(n_head, 1, 1) # (n*b) x .. x ..
output, attn = self.attention(q, k, v, mask=mask)
output = output.view(n_head, sz_b, len_q, d_v)
output = output.permute(1, 2, 0, 3).contiguous().view(sz_b, len_q, -1) # b x lq x (n*dv)
output = self.dropout(self.fc(output))
output = self.layer_norm(output + residual)
return output, attn
#right
class MultiHeadAttentionNew(nn.Module):
def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
super().__init__()
self.n_head = n_head
self.w_qs = nn.Linear(d_model, n_head * d_k)
self.w_ks = nn.Linear(d_model, n_head * d_k)
self.w_vs = nn.Linear(d_model, n_head * d_v)
nn.init.normal_(self.w_qs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))
nn.init.normal_(self.w_ks.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k)))
nn.init.normal_(self.w_vs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_v)))
self.fc = nn.Linear(n_head * d_v, d_model)
nn.init.xavier_normal_(self.fc.weight)
self.dropout = nn.Dropout(p=dropout)
self.layer_norm = nn.LayerNorm(d_model)
def forward(self, q, k, v, mask=None):
residual = q
q = rearrange(self.w_qs(q), 'b l (head k) -> head b l k', head=self.n_head)
k = rearrange(self.w_ks(k), 'b t (head k) -> head b t k', head=self.n_head)
v = rearrange(self.w_vs(v), 'b t (head v) -> head b t v', head=self.n_head)
attn = torch.einsum('hblk,hbtk->hblt', [q, k]) / np.sqrt(q.shape[-1])
if mask is not None:
attn = attn.masked_fill(mask[None], -np.inf)
attn = torch.softmax(attn, dim=3)
output = torch.einsum('hblt,hbtv->hblv', [attn, v])
output = rearrange(output, 'head b l v -> b l (head v)')
output = self.dropout(self.fc(output))
output = self.layer_norm(output + residual)
return output, attn
Explanation: Transformer's attention needs more attention
End of explanation
# Poor implementation of torch.einsum, so code below doesn't work
class MultiHeadAttentionHard(nn.Module):
def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
super().__init__()
self.w_qs = nn.Parameter(torch.randn(d_model, n_head, d_k) * np.sqrt(2.0 / (d_model + d_k)))
self.w_ks = nn.Parameter(torch.randn(d_model, n_head, d_k) * np.sqrt(2.0 / (d_model + d_k)))
self.w_vs = nn.Parameter(torch.randn(d_model, n_head, d_v) * np.sqrt(2.0 / (d_model + d_v)))
self.w_fc = nn.Parameter(torch.randn(d_model, n_head, d_v) * np.sqrt(2.0 / (d_model + n_head * d_v)))
self.dropout = nn.Dropout(p=dropout)
self.layer_norm = nn.LayerNorm(d_model)
def forward(self, q, k, v, mask=None):
attn = torch.einsum('bld,dhc,bte,ehc->hblt', [q, self.w_qs, k, self.w_ks])
if mask is not None:
attn = attn.masked_fill(mask[None], -np.inf)
attn = torch.softmax(attn, dim=3)
output = torch.einsum('hblt,bte,ehv,dhv->hbd', [attn, v, self.w_vs, self.w_fc])
output = self.dropout(output)
output = self.layer_norm(output + q)
return output, attn
n_heads = 8
d_k = 32
d_v = 64
d_model = 100
t = 51
l = 53
batch = 30
layer1 = initialize(MultiHeadAttentionOld(n_heads, d_k=d_k, d_v=d_v, d_model=d_model)).eval().cuda()
layer2 = initialize(MultiHeadAttentionNew(n_heads, d_k=d_k, d_v=d_v, d_model=d_model)).eval().cuda()
args = dict(
q=torch.randn(batch, l, d_model),
k=torch.randn(batch, t, d_model) * 0.1,
v=torch.randn(batch, t, d_model),
mask=torch.randn(batch, l, t) > 0,
)
args = {k:v.cuda() for k, v in args.items()}
o1, a1 = layer1(**args)
o2, a2 = layer2(**args)
a1.shape, a2.shape
assert torch.allclose(o1, o2)
%timeit -n 200 layer1(**args); torch.cuda.synchronize()
%timeit -n 200 layer2(**args); torch.cuda.synchronize()
Explanation: Benefits of new implementation
we have one module, not two
now code does not fail for None mask
the amount of caveats in the original code that we removed is huge.
Try erasing comments and deciphering what happens there
End of explanation
#left
class Self_Attn_Old(nn.Module):
    """Self attention Layer"""
def __init__(self,in_dim,activation):
super(Self_Attn_Old,self).__init__()
self.chanel_in = in_dim
self.activation = activation
self.query_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)
self.key_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim//8 , kernel_size= 1)
self.value_conv = nn.Conv2d(in_channels = in_dim , out_channels = in_dim , kernel_size= 1)
self.gamma = nn.Parameter(torch.zeros(1))
self.softmax = nn.Softmax(dim=-1) #
def forward(self, x):
inputs :
x : input feature maps( B X C X W X H)
returns :
out : self attention value + input feature
attention: B X N X N (N is Width*Height)
m_batchsize,C,width ,height = x.size()
proj_query = self.query_conv(x).view(m_batchsize,-1,width*height).permute(0,2,1) # B X CX(N)
proj_key = self.key_conv(x).view(m_batchsize,-1,width*height) # B X C x (*W*H)
energy = torch.bmm(proj_query,proj_key) # transpose check
attention = self.softmax(energy) # BX (N) X (N)
proj_value = self.value_conv(x).view(m_batchsize,-1,width*height) # B X C X N
out = torch.bmm(proj_value,attention.permute(0,2,1) )
out = out.view(m_batchsize,C,width,height)
out = self.gamma*out + x
return out,attention
#right
class Self_Attn_New(nn.Module):
Self attention Layer
def __init__(self, in_dim):
super().__init__()
self.query_conv = nn.Conv2d(in_dim, out_channels=in_dim//8, kernel_size=1)
self.key_conv = nn.Conv2d(in_dim, out_channels=in_dim//8, kernel_size=1)
self.value_conv = nn.Conv2d(in_dim, out_channels=in_dim, kernel_size=1)
self.gamma = nn.Parameter(torch.zeros([1]))
def forward(self, x):
proj_query = rearrange(self.query_conv(x), 'b c h w -> b (h w) c')
proj_key = rearrange(self.key_conv(x), 'b c h w -> b c (h w)')
proj_value = rearrange(self.value_conv(x), 'b c h w -> b (h w) c')
energy = torch.bmm(proj_query, proj_key)
attention = F.softmax(energy, dim=2)
out = torch.bmm(attention, proj_value)
out = x + self.gamma * rearrange(out, 'b (h w) c -> b c h w',
**parse_shape(x, 'b c h w'))
return out, attention
model_old = initialize(Self_Attn_Old(128, None))
model_new = initialize(Self_Attn_New(128))
x = torch.randn(2, 128, 30, 30)
assert torch.allclose(model_old(x)[0], model_new(x)[0], atol=1e-4)
# returned attention is transposed
assert torch.allclose(model_old(x)[1], model_new(x)[1], atol=1e-4)
%timeit model_old(x)[0].sum().item()
%timeit model_new(x)[0].sum().item()
# surprise - I had a slowdown here due to the order of softmax, not einops
Explanation: Self-attention GANs
SAGANs are currently SotA for image generation, and can be simplified using the same tricks.
<!-- If torch.einsum supported non-one letter axes, we could improve this solution further. -->
<!-- from https://github.com/heykeetae/Self-Attention-GAN/blob/master/sagan_models.py -->
End of explanation
#left
class SequencePredictionOld(nn.Module):
def __init__(self):
super(SequencePredictionOld, self).__init__()
self.lstm1 = nn.LSTMCell(1, 51)
self.lstm2 = nn.LSTMCell(51, 51)
self.linear = nn.Linear(51, 1)
def forward(self, input, future = 0):
outputs = []
h_t = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t = torch.zeros(input.size(0), 51, dtype=torch.double)
h_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
h_t, c_t = self.lstm1(input_t, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
for i in range(future):# if we should predict the future
h_t, c_t = self.lstm1(output, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
outputs = torch.stack(outputs, 1).squeeze(2)
return outputs
#right
class SequencePredictionNew(nn.Module):
def __init__(self):
super(SequencePredictionNew, self).__init__()
self.lstm1 = nn.LSTMCell(1, 51)
self.lstm2 = nn.LSTMCell(51, 51)
self.linear = nn.Linear(51, 1)
def forward(self, input, future=0):
b, t = input.shape
h_t, c_t, h_t2, c_t2 = torch.zeros(4, b, 51, dtype=self.linear.weight.dtype,
device=self.linear.weight.device)
outputs = []
for input_t in rearrange(input, 'b t -> t b ()'):
h_t, c_t = self.lstm1(input_t, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
for i in range(future): # if we should predict the future
h_t, c_t = self.lstm1(output, (h_t, c_t))
h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
return rearrange(outputs, 't b () -> b t')
seq_old = SequencePredictionOld().double()
seq_new = SequencePredictionNew().double()
initialize(seq_old)
initialize(seq_new)
x = torch.randn([10, 10], dtype=torch.double)
result_old = seq_old(x)
result_new = seq_new(x)
assert torch.allclose(result_old, result_new)
Explanation: Improving time sequence prediction
<!-- https://github.com/pytorch/examples/blob/master/time_sequence_prediction/train.py -->
While this example was considered simplistic, I had to analyze the surrounding code to understand what kind of input was expected.
You can try it yourself.
Additionally, the new code works with any dtype, not only double, and it supports running on a GPU.
End of explanation
#left
class SpacialTransformOld(nn.Module):
def __init__(self):
        super(SpacialTransformOld, self).__init__()
# Spatial transformer localization-network
self.localization = nn.Sequential(
nn.Conv2d(1, 8, kernel_size=7),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True),
nn.Conv2d(8, 10, kernel_size=5),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True)
)
# Regressor for the 3 * 2 affine matrix
self.fc_loc = nn.Sequential(
nn.Linear(10 * 3 * 3, 32),
nn.ReLU(True),
nn.Linear(32, 3 * 2)
)
# Initialize the weights/bias with identity transformation
self.fc_loc[2].weight.data.zero_()
self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
# Spatial transformer network forward function
def stn(self, x):
xs = self.localization(x)
xs = xs.view(-1, 10 * 3 * 3)
theta = self.fc_loc(xs)
theta = theta.view(-1, 2, 3)
grid = F.affine_grid(theta, x.size())
x = F.grid_sample(x, grid)
return x
#right
class SpacialTransformNew(nn.Module):
def __init__(self):
        super(SpacialTransformNew, self).__init__()
# Spatial transformer localization-network
linear = nn.Linear(32, 3 * 2)
# Initialize the weights/bias with identity transformation
linear.weight.data.zero_()
linear.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
self.compute_theta = nn.Sequential(
nn.Conv2d(1, 8, kernel_size=7),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True),
nn.Conv2d(8, 10, kernel_size=5),
nn.MaxPool2d(2, stride=2),
nn.ReLU(True),
Rearrange('b c h w -> b (c h w)', h=3, w=3),
nn.Linear(10 * 3 * 3, 32),
nn.ReLU(True),
linear,
Rearrange('b (row col) -> b row col', row=2, col=3),
)
# Spatial transformer network forward function
def stn(self, x):
grid = F.affine_grid(self.compute_theta(x), x.size())
return F.grid_sample(x, grid)
Explanation: Transforming a spatial transformer network (STN)
<!-- modified version of https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html -->
End of explanation
#left
def unsqueeze2d_old(input, factor=2):
assert factor >= 1 and isinstance(factor, int)
factor2 = factor ** 2
if factor == 1:
return input
size = input.size()
B = size[0]
C = size[1]
H = size[2]
W = size[3]
assert C % (factor2) == 0, "{}".format(C)
x = input.view(B, C // factor2, factor, factor, H, W)
x = x.permute(0, 1, 4, 2, 5, 3).contiguous()
x = x.view(B, C // (factor2), H * factor, W * factor)
return x
def squeeze2d_old(input, factor=2):
assert factor >= 1 and isinstance(factor, int)
if factor == 1:
return input
size = input.size()
B = size[0]
C = size[1]
H = size[2]
W = size[3]
assert H % factor == 0 and W % factor == 0, "{}".format((H, W))
x = input.view(B, C, H // factor, factor, W // factor, factor)
x = x.permute(0, 1, 3, 5, 2, 4).contiguous()
x = x.view(B, C * factor * factor, H // factor, W // factor)
return x
#right
def unsqueeze2d_new(input, factor=2):
return rearrange(input, 'b (c h2 w2) h w -> b c (h h2) (w w2)', h2=factor, w2=factor)
def squeeze2d_new(input, factor=2):
return rearrange(input, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=factor, w2=factor)
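# quick sanity check (added, not from the original notebook): the two einops versions
# are exact inverses of each other, since they are pure permutations of elements
x_check = torch.randn(2, 4, 8, 8)
assert squeeze2d_new(x_check).shape == (2, 16, 4, 4)
assert torch.equal(unsqueeze2d_new(squeeze2d_new(x_check)), x_check)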
Explanation: new code will give reasonable errors when passed image size is different from expected
if batch size is divisible by 18, whatever you input in the old code, it'll fail no sooner than affine_grid.
Improving GLOW
That's a good old depth-to-space written manually!
Since GLOW is revertible, it will frequently rely on rearrange-like operations.
<!-- from https://github.com/chaiyujin/glow-pytorch/blob/master/glow/modules.py -->
End of explanation
#left
def YOLO_prediction_old(input, num_classes, num_anchors, anchors, stride_h, stride_w):
bs = input.size(0)
in_h = input.size(2)
in_w = input.size(3)
scaled_anchors = [(a_w / stride_w, a_h / stride_h) for a_w, a_h in anchors]
prediction = input.view(bs, num_anchors,
5 + num_classes, in_h, in_w).permute(0, 1, 3, 4, 2).contiguous()
# Get outputs
x = torch.sigmoid(prediction[..., 0]) # Center x
y = torch.sigmoid(prediction[..., 1]) # Center y
w = prediction[..., 2] # Width
h = prediction[..., 3] # Height
conf = torch.sigmoid(prediction[..., 4]) # Conf
pred_cls = torch.sigmoid(prediction[..., 5:]) # Cls pred.
FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor
# Calculate offsets for each grid
grid_x = torch.linspace(0, in_w - 1, in_w).repeat(in_w, 1).repeat(
bs * num_anchors, 1, 1).view(x.shape).type(FloatTensor)
grid_y = torch.linspace(0, in_h - 1, in_h).repeat(in_h, 1).t().repeat(
bs * num_anchors, 1, 1).view(y.shape).type(FloatTensor)
# Calculate anchor w, h
anchor_w = FloatTensor(scaled_anchors).index_select(1, LongTensor([0]))
anchor_h = FloatTensor(scaled_anchors).index_select(1, LongTensor([1]))
anchor_w = anchor_w.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(w.shape)
anchor_h = anchor_h.repeat(bs, 1).repeat(1, 1, in_h * in_w).view(h.shape)
# Add offset and scale with anchors
pred_boxes = FloatTensor(prediction[..., :4].shape)
pred_boxes[..., 0] = x.data + grid_x
pred_boxes[..., 1] = y.data + grid_y
pred_boxes[..., 2] = torch.exp(w.data) * anchor_w
pred_boxes[..., 3] = torch.exp(h.data) * anchor_h
# Results
_scale = torch.Tensor([stride_w, stride_h] * 2).type(FloatTensor)
output = torch.cat((pred_boxes.view(bs, -1, 4) * _scale,
conf.view(bs, -1, 1), pred_cls.view(bs, -1, num_classes)), -1)
return output
#right
def YOLO_prediction_new(input, num_classes, num_anchors, anchors, stride_h, stride_w):
raw_predictions = rearrange(input, 'b (anchor prediction) h w -> prediction b anchor h w',
anchor=num_anchors, prediction=5 + num_classes)
anchors = torch.FloatTensor(anchors).to(input.device)
anchor_sizes = rearrange(anchors, 'anchor dim -> dim () anchor () ()')
_, _, _, in_h, in_w = raw_predictions.shape
grid_h = rearrange(torch.arange(in_h).float(), 'h -> () () h ()').to(input.device)
grid_w = rearrange(torch.arange(in_w).float(), 'w -> () () () w').to(input.device)
predicted_bboxes = torch.zeros_like(raw_predictions)
predicted_bboxes[0] = (raw_predictions[0].sigmoid() + grid_w) * stride_w # center x
predicted_bboxes[1] = (raw_predictions[1].sigmoid() + grid_h) * stride_h # center y
predicted_bboxes[2:4] = (raw_predictions[2:4].exp()) * anchor_sizes # bbox width and height
predicted_bboxes[4] = raw_predictions[4].sigmoid() # confidence
predicted_bboxes[5:] = raw_predictions[5:].sigmoid() # class predictions
# merging all predicted bboxes for each image
return rearrange(predicted_bboxes, 'prediction b anchor h w -> b (anchor h w) prediction')
Explanation: term squeeze isn't very helpful: which dimension is squeezed? There is torch.squeeze, but it's very different.
in fact, we could skip creating functions completely - it is a single call to einops anyway
Detecting problems in YOLO detection
<!-- mixture of
# https://github.com/BobLiu20/YOLOv3_PyTorch/blob/c6b483743598b5f64d520d81e7e5f47ba936d4c9/nets/yolo_loss.py#L28-L44
# https://github.com/BobLiu20/YOLOv3_PyTorch/blob/c6b483743598b5f64d520d81e7e5f47ba936d4c9/nets/yolo_loss.py#L70-L92
-->
End of explanation
fake_batch = torch.rand([100, 3, 1, 1]) + torch.zeros([100, 3, 32, 32])
from matplotlib import pyplot as plt
import torchvision.utils as vutils
#right
device = 'cpu'
plt.imshow(np.transpose(vutils.make_grid(fake_batch.to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
#right
padded = F.pad(fake_batch[:64], [1, 1, 1, 1])
plt.imshow(rearrange(padded, '(b1 b2) c h w -> (b1 h) (b2 w) c', b1=8).cpu())
# TODO: Hierarchical softmax
# TODO: some reinforcement stuff would also be needed
Explanation: We changed and fixed a lot:
new code won't fail if input is not on the first GPU
old code has wrong grid_x and grid_y for non-square images
new code doesn't use replication when broadcasting is sufficient
old code strangely sometimes takes .data, but this has no real effect, as some branches preserve gradient till the end
if gradients not needed, torch.no_grad should be used, so it's redundant
Simpler output for a bunch of pictures
Next time you need to output drawings of your generative models, you can use this trick
<!-- # from https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html -->
End of explanation |
10,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scalable Kalman filtering for Temporal-Spatial Analysis
Seismic monitoring of CO$_2$
State estimation
Time-series data
Methodology
Kalman filter
Scalability
Result
Visit my GitHub page!
Track a CO$_2$ Plume from Sensor Measurements
Objective
Step1: Kalman filtering
Linear dynamic system, estimation, inference
Two phases
Step2: Results
TRUE HIKF
<img src="./co2movie.gif" alt="Drawing" style="width
Step3: Summary
Efficient tools to interpret the time-series data recorded by the seismic sensors into spatial maps of a moving CO$_2$ plume, a problem very similar to CT scanning widely used in medical imaging. | Python Code:
from IPython.html.widgets import interact, interactive, fixed
from IPython.html.widgets import FloatSlider
from CO2simulation import CO2simulation
def plot_CO2plume(time):
import param as param
CO2 = CO2simulation(param)
x = CO2.extract_state(int(time/3))
data = CO2.extract_data(int(time/3))
fig_setting = vco2.getImgParam(param)
vco2.plotCO2_data_map(x, data, 0, 20, fig_setting)
plt.show()
interact(plot_CO2plume,
time = FloatSlider(value=0, min=0, max=120));
Explanation: Scalable Kalman filtering for Temporal-Spatial Analysis
Seismic monitoring of CO$_2$
State estimation
Time-series data
Methodology
Kalman filter
Scalability
Result
Visit my GitHub page!
Track a CO$_2$ Plume from Sensor Measurements
Objective: monitor a CO$_2$ plume for $5$ days resulting from injecting $300$ tons of CO$_2$ at a depth of $1657m$.
<img src="./field_experiment.png" alt="Drawing" style="width: 600px;"/>
* The sensor measures the travel time of a seismic signal from a source to a receiver.
The presence of CO$_2$ slows down the seismic signal and causes a travel-time delay along the ray path.
Goal: interpret the changes in the seismic signals into maps of the moving CO$_2$ plume.
End of explanation
vCO2.scale_barplot()
Explanation: Kalman filtering
Linear dynamic system, estimation, inference
Two phases:
Prediction
Update
<img src="./KF.png" alt="Drawing" style="width: 1000px;"/>
Scalability Issues
Computationally prohibitive for large-scale problems
Storing and updating the covariance matrix ($N^2$ entries) at each Kalman step
Computational cost grows quadratically with $N$
$80$ days for a typical problem size of ~1 million
| Resolution | Low | Medium | High |
| ---------------- |:-------------:|:-----------:|:----------------:|
| State dimension |N = 3245 | N = 3245 x 4| N = 3245 x 16 |
| Run time | 1.2 min | 19 minutes | 4.4 hours |
| Storage cost | 100 MB | 1331 MB | 20 G |
Scalable Kalman filtering
$\mathcal{O}(N)$: a linear computational complexity algorithm
Conventional dimensionality-reduction algorithms (e.g., PCA) truncate information
Lossless compression of a kernel covariance matrix
The hierarchical-matrices approach efficiently exploits the structure of a kernel matrix
End of explanation
# For low resolution case show KF is equivalent to HiKF
# Running the high resolution case using HiKF
def plot_CO2maps(theta):
import param
CO2 = CO2simulation(param)
param.theta = (theta,1e-5)
hikf, x_kf, cov_kf = simCO2.CO2_filter(CO2, param)
fig_setting = vco2.getImgParam(param)
vco2.plotCO2map(x_kf,cov_kf,fig_setting)
plt.show()
print "Theta Variance"
print " %d %d" % (theta,np.sum(cov_kf[-1]))
interact(plot_CO2maps,
theta = FloatSlider(value=1.14, min=1.14e-3, max=1.14e1));
from IPython.display import Image
Image(filename='Q_R_ratio.png')
Explanation: Results
TRUE HIKF
<img src="./co2movie.gif" alt="Drawing" style="width: 600px;"/>
Filter Design
The filter performance is optimized by choosing an appropriate $Q/R$ ratio
End of explanation
from moviepy.editor import *
olaf = (VideoFileClip("co2estimatefinal.avi")
.subclip((0,1),(0,10))
.resize(0.999))
olaf.write_gif("co2movie.gif")
Explanation: Summary
Efficient tools to interpret the time-series data recorded by the seismic sensors into spatial maps of a moving CO$_2$ plume, a problem very similar to CT scanning widely used in medical imaging.
End of explanation |
10,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Normalization using Masha's method
Step1: Data for a mixture of 4 E. coli strains in their real proportions. Aligned against a reference that is not from the data.
Step2: Low-coverage samples
What if the samples with low coverage are getting in our way? Let's try removing them.
Step3: As we can see, not much has changed, so let's leave them alone.
Filtering by coverage
Step4: Let's sweep over the percentiles [25, 20, 15, 10] and a number of bad samples from 0 to 3. | Python Code:
def normalize(M):
M_norm = np.full_like(M, 0)
for i in range(np.shape(M)[0]):
rev = 1 - M[i, :]
if np.dot(M[i, :], M[i, :]) > np.dot(rev, rev):
M_norm[i, :] = rev
else:
M_norm[i, :] = M[i, :]
return M_norm
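# toy illustration (added; hypothetical numbers): each row keeps whichever of M[i] and
# 1 - M[i] has the smaller norm, so rows that are mirror images map to the same profile
toy = np.array([[0.9, 0.8, 0.95],
                [0.1, 0.2, 0.05]])
normalize(toy)  # both rows become [0.1, 0.2, 0.05]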
Explanation: Normalization using Masha's method:
End of explanation
r = np.genfromtxt("LICHeE_4ecoli_without_ref/matrices/R_all", dtype=int, delimiter=' ')
x = np.genfromtxt("LICHeE_4ecoli_without_ref/matrices/X_all", dtype=int, delimiter=' ')
print("%s sites" % len(r))
Ncut = 5
print("Delete zero and almost zero profiles:")
good_ind = [i for i in range(np.shape(x)[0])
if not ((np.abs(r[i, :] - x[i, :]) <= Ncut).all() or (x[i, :] <= Ncut).all())]
print(len(good_ind), "remained")
x = x[good_ind, :]
r = r[good_ind, :]
f = normalize(np.divide(x, r))
draw_PCA(f)
Explanation: Data for a mixture of 4 E. coli strains in their real proportions. Aligned against a reference that is not from the data.
End of explanation
print(np.median(r, axis = 0))
r_2 = np.delete(r, [2, 6], axis=1)
x_2 = np.delete(x, [2, 6], axis=1)
f_2 = normalize(np.divide(x_2, r_2))
draw_PCA(f_2)
Explanation: Low-coverage samples
What if the samples with low coverage are getting in our way? Let's try removing them.
End of explanation
def filter_by_coverage(cur_r, bad_percent, bad_samples):
def filter_row(row):
num_of_samples = len(row)
valid = np.sum(np.array(([(min_coverage < row) & (row < max_coverage)])))
return num_of_samples - valid <= bad_samples
min_coverage = np.percentile(cur_r, bad_percent, axis=0)
max_coverage = np.percentile(cur_r, 100-bad_percent, axis=0)
good_coverage = np.array([filter_row(row) for row in cur_r])
return good_coverage
Explanation: As we can see, not much has changed, so let's leave them alone.
Filtering by coverage
End of explanation
f_pca = PCA(n_components=2).fit(f).transform(f)
percentiles = [25, 20, 15, 10]
plt.figure(figsize=(15, 15))
for i in range(4):
for j in range(4):
print(i, j, end="-")
plt.subplot(4, 4, i * 4 + j + 1)
draw_PCA(f, filter_by_coverage(r, percentiles[i], j), f_pca)
plt.tight_layout();
Explanation: Let's sweep over the percentiles [25, 20, 15, 10] and a number of bad samples from 0 to 3.
End of explanation |
10,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 2)
Step1: Eulerian Cycles
Before we look at different network types, let us reflect on the distant roots of network analysis. Graph theory is one of its ancestors and has been around for several centuries now. As an example, think of Euler's problem from 1736 of finding a 'Eulerian tour/cycle' over the Pregel bridges in Königsberg.
Q&A session #2
Do you remember how the problem of deciding whether a graph has an Eulerian cycle can be solved for arbitrary undirected graphs? If not, use the web to find helpful characterizations of graphs with Eulerian cycles. Write it down here.
Answer
Step2: Differences between network types
As indicated by the previous task, graph theory and graph algorithms have been research foci for quite some time. In comparison, complex networks and their analysis have become a focus of investigation only recently. The reason for our interest in complex networks is their similarity to many real-world phenomena such as social interactions, web graphs, food webs, protein interactions and so forth.
But what makes a network complex or not complex?
Well, complex networks have 'non-trivial' topological features. Let us explore this statement in more detail with the help of data. To this end, we look at a social network, a technical mesh and an Erdös-Renyi random graph.
Step3: Some context on these networks is given below, first for MIT8. It stems from a larger collection of Facebook networks from the early days of the online social network. MIT8 models
"Facebook friendships at 100 US universities at some time in 2005, as well as a number of node attributes such as dorm, gender, graduation year, and academic major. The data was apparently provided directly by Facebook. (...) It does not include the names of individual or even of any of the node attributes (they have been given integer ids)." http://sociograph.blogspot.de/2011/03/facebook100-data-and-parser-for-it.html
Step4: The third network is a random graph generated according to the Erdös-Renyi $G(n, p)$ model. This model has been analyzed theoretically over the last 50 years or so. As we will see, however, it deviates dramatically from real networks in important aspects.
Giant Connected Component
Some types of realistic networks such as social ones usually have more than one connected component. However, even if there is more than one connected component, there is usually only one big one. Let us take a closer look at these giant components in particular, the differences of the three networks and other interesting properties in general.
Q&A Session #3
Print the NetworKit overview for each of the three graphs!
Answer | Python Code:
from networkit import *
%matplotlib inline
cd ~/Documents/workspace/NetworKit
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 2)
End of explanation
# 2-2) and 2-3) Decide whether graph is Eulerian or not
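# A possible sketch (not the tutorial's official solution): an undirected graph has an
# Eulerian cycle iff every vertex has even degree and all edges lie in one connected
# component. Assumes the NetworKit API used in this notebook (G.nodes(), G.degree()).
def has_eulerian_cycle(graph):
    return all(graph.degree(v) % 2 == 0 for v in graph.nodes())
# The PGP giant component is connected by construction, so the degree test alone decides
# the question here; print(has_eulerian_cycle(G)) to check it.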
Explanation: Eulerian Cycles
Before we look at different network types, let us reflect on the distant roots of network analysis. Graph theory is one of its ancestors and has been around for several centuries now. As an example, think of Euler's problem from 1736 of finding a 'Eulerian tour/cycle' over the Pregel bridges in Königsberg.
Q&A session #2
Do you remember how the problem of deciding whether a graph has an Eulerian cycle can be solved for arbitrary undirected graphs? If not, use the web to find helpful characterizations of graphs with Eulerian cycles. Write it down here.
Answer:
Then enter code below to decide whether a graph has such a cycle or not.
Insert code in next cell
Test it on the PGP graph! What is the result? Does the result meet your expectations?
Answer:
End of explanation
# Load/generate 3 graphs of different types
mit8 = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
airf1 = readGraph("input/airfoil1.graph", Format.METIS)
gen = generators.ErdosRenyiGenerator(1000, 0.01)
er1000 = gen.generate()
Explanation: Differences between network types
As indicated by the previous task, graph theory and graph algorithms have been research foci for quite some time. In comparison, complex networks and their analysis have become a focus of investigation only recently. The reason for our interest in complex networks is their similarity to many real-world phenomena such as social interactions, web graphs, food webs, protein interactions and so forth.
But what makes a network complex or not complex?
Well, complex networks have 'non-trivial' topological features. Let us explore this statement in more detail with the help of data. To this end, we look at a social network, a technical mesh and an Erdös-Renyi random graph.
End of explanation
from IPython.core.display import Image
Image('input/airfoil1-10p.png')
Explanation: Some context on these networks is given below, first for MIT8. It stems from a larger collection of Facebook networks from the early days of the online social network. MIT8 models
"Facebook friendships at 100 US universities at some time in 2005, as well as a number of node attributes such as dorm, gender, graduation year, and academic major. The data was apparently provided directly by Facebook. (...) It does not include the names of individual or even of any of the node attributes (they have been given integer ids)." http://sociograph.blogspot.de/2011/03/facebook100-data-and-parser-for-it.html
The airfoil1 graph is a mesh that stems from a two-dimensional numerical simulation where one is interested in the airflow around an airplane wing. Meshes are usually easy to visualize, two-dimensional meshes are even planar in most (if not all reasonable) cases. The picture below illustrates this. The colors signify different vertex blocks, which we can ignore here.
End of explanation
# Code for Q&A Session #3
# 3-2) extract largest connected component
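# A hedged sketch (assumes NetworKit exposes components.ConnectedComponents the same way
# the notebook already uses generators.*); it only collects the nodes of the giant component:
cc = components.ConnectedComponents(mit8)
cc.run()
sizes = cc.getComponentSizes()                      # component id -> number of nodes
largest = max(sizes, key=sizes.get)
giant_nodes = [v for v in mit8.nodes() if cc.componentOfNode(v) == largest]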
Explanation: The third network is a random graph generated according to the Erdös-Renyi $G(n, p)$ model. This model has been analyzed theoretically over the last 50 years or so. As we will see, however, it deviates dramatically from real networks in important aspects.
Giant Connected Component
Some types of realistic networks such as social ones usually have more than one connected component. However, even if there is more than one connected component, there is usually only one big one. Let us take a closer look at these giant components in particular, the differences of the three networks and other interesting properties in general.
Q&A Session #3
Print the NetworKit overview for each of the three graphs!
Answer:
For those graphs with more than one connected component, extract the largest connected component $C$ and continue working with $C$. Enter the code for this in the cell below this one.
What are the most striking topological differences between the three graphs in terms of the analytics kernels presented in the lecture?
Answer:
Pick an arbitrary pair of vertices $(u, v)$ in each network. Since we work with connected graphs, the two nodes are connected. What is the smallest number of hops a virus needs to make in the network to reach $v$ if it starts at $u$? What if $u$ and $v$ are chosen as worst case pair?
Answer:
What do you think: Why are complex networks more difficult to work with, what makes them 'complex'?
Answer:
Which parts of your answer to (5) concern theoretical analysis, which concern computational aspects?
Answer:
End of explanation |
10,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading utility
Add
Step1: Generation/rendering timing
~$0.189$ seconds per example/.aiff.
18.9s for 100
Step2: Feature extraction timing
~$0.88$ seconds per example/.aiff.
1m 28s for 100
Step3: Thought
Step4: Preprocessing | Python Code:
dir_list = os.listdir(path=this_dir)
_pickle_path = os.path.join(this_dir, "df.p")
if "df.p" in dir_list:
#_pickle_path = os.path.join(this_dir, "df.p")
_old_df = pd.read_pickle(_pickle_path)
_pickle_dir = make_out_dir(this_dir, "pickle_files")
dt_identifier = datetime.now().strftime("df-%Y_%m_%d_%H%M%S.p")
_old_pickle_path = os.path.join(_pickle_dir, dt_identifier)
    _old_df.to_pickle(_old_pickle_path)
pmtx = gk.generator.gendy1.gen_params(dists=(0., 0.), rows=100)
df = gk.generator.gendy1.format_params(pmtx)
df
pmtx
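# hedged sketch for the "join old df to new df" TODO noted below (assumes the old and
# new frames share the same columns):
# if "df.p" in dir_list:
#     df = pd.concat([_old_df, df], ignore_index=True)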
Explanation: Loading utility
Add: join old df to new df
End of explanation
%%time
for i, row in df.iterrows():
session = nonrealtimetools.Session()
builder = gk.generator.gendy1.make_builder(row)
out = gk.generator.gendy1.build_out(builder)
synthdef = builder.build()
with session.at(0):
synth_a = session.add_synth(duration=10, synthdef=synthdef)
gk.util.render_session(session, this_dir, row["hash"])
Explanation: Generation/rendering timing
~$0.189$ seconds per example/.aiff.
18.9s for 100
End of explanation
%%timeit
for i, row in df.iterrows():
y, sr = librosa.load(os.path.join(this_dir, "aif_files", row["hash"] + ".aiff"))
_y_normed = librosa.util.normalize(y)
_mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)
_cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))
_mfcc_mean = gk.feature_extraction.get_stats(_mfcc)["mean"]
X_row = np.append(_mfcc_mean, _cent)
if i==0:
X_mtx = X_row
else:
X_mtx = np.vstack((X_mtx, X_row))
Explanation: Feature extraction timing
~$0.88$ seconds per example/.aiff.
1m 28s for 100
End of explanation
for i, row in df.iterrows():
session = nonrealtimetools.Session()
builder = gk.generator.gendy1.make_builder(row)
out = gk.generator.gendy1.build_out(builder)
synthdef = builder.build()
with session.at(0):
synth_a = session.add_synth(duration=10, synthdef=synthdef)
gk.util.render_session(session, this_dir, row["hash"])
y, sr = librosa.load(os.path.join(this_dir, "aif_files", row["hash"] + ".aiff"))
_y_normed = librosa.util.normalize(y)
_mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13)
_cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr))
_mfcc_mean = gk.feature_extraction.get_stats(_mfcc)["mean"]
X_row = np.append(_mfcc_mean, _cent)
if i==0:
X_mtx = X_row
else:
X_mtx = np.vstack((X_mtx, X_row))
X_mtx.shape
def col_rename_4_mfcc(c):
if (c < 13):
return "mfcc_mean_{}".format(c)
else:
return "spectral_centroid"
pd.DataFrame(X_mtx).rename_axis(lambda c: col_rename_4_mfcc(c), axis=1)
pmtx.shape
X_mtx.shape
X_mtx[0]
X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(
X_mtx, pmtx, test_size=0.4, random_state=1)
# Create linear regression objectc
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
Explanation: Thought: For feature extraction, it would probably be faster to extract all time domain vectors $y$ into a NumPy array and perform the necessary LibROSA operations across the rows of the vector, possibly leveraging under-the-hood efficiencies.
"1min 43s per loop" below
End of explanation
# Scale data
standard_scaler = sk.preprocessing.StandardScaler()
X_scaled = standard_scaler.fit_transform(X_mtx)
#Xte_s = standard_scaler.transform(X_test)
robust_scaler = sk.preprocessing.RobustScaler()
X_rscaled = robust_scaler.fit_transform(X_mtx)
#Xte_r = robust_scaler.transform(X_test)
X_scaled.mean(axis=0)
X_scaled.mean(axis=0).mean()
X_scaled.std(axis=0)
X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(
X_scaled, pmtx, test_size=0.4, random_state=1)
# Create linear regression objectc
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
y_test[0]
X_test[0]
regr.predict(X_test[0])
y_test[0]
Explanation: Preprocessing
End of explanation |
10,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).
Step1: AOS frames
Telemetry is in Virtual Channel 1. Virtual channel 63 contains Only Idle Data.
Step2: Virtual Channel 63 (Only Idle Data)
Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.
Step3: Virtual channel 0
Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.
Step4: APID 5
As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag. | Python Code:
def timestamps(packets):
epoch = np.datetime64('2000-01-01T12:00:00')
t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0]
for p in packets], 'uint32')
return epoch + t * np.timedelta64(1, 's')
def load_frames(path):
frame_size = 223 * 5 - 2
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
return frames
frames = np.concatenate((
load_frames('lucy_frames_eb3frn_20211020_233618.u8'),
load_frames('lucy_frames_eb3frn_20211020_235911.u8')))
frames.shape[0]
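# quick sanity check of the J2000 convention used in timestamps() above: a raw value of
# 86400 seconds should land exactly one day after the epoch
np.datetime64('2000-01-01T12:00:00') + np.uint32(86400) * np.timedelta64(1, 's')  # -> 2000-01-02T12:00:00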
Explanation: Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).
Looking at the idle APID packets, the next byte might indicate fractional seconds (since it is still part of the secondary header rather than idle data), but it is difficult to be sure.
End of explanation
aos = [AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos])
collections.Counter([a.primary_header.virtual_channel_id for a in aos])
Explanation: AOS frames
Telemetry is in Virtual Channel 1. Virtual channel 63 contains Only Idle Data.
End of explanation
vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63]
[a.primary_header for a in vc63[:10]]
vc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63])
np.unique(vc63_frames[:, 6:8], axis = 0)
bytes(vc63_frames[0, 6:8]).hex()
np.unique(vc63_frames[:, 8:])
hex(170)
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 63 (OID) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
last_part = fc > 391000
fc[last_part].size/(fc[-1]-fc[last_part][0]+1)
Explanation: Virtual Channel 63 (Only Idle Data)
Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.
End of explanation
vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0]
[a.primary_header for a in vc0[:10]]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0])
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(fc[1:], np.diff(fc)-1, '.')
plt.title("Lucy virtual channel 0 (telemetry) frame loss")
plt.xlabel('Virtual channel frame counter')
plt.ylabel('Lost frames');
last_part = fc > 391000
fc[last_part].size/(fc[-1]-fc[last_part][0]+1)
vc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0))
vc0_t = timestamps(vc0_packets)
vc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets]
vc0_apids = collections.Counter([p.APID for p in vc0_sp_headers])
vc0_apids
apid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))}
plt.figure(figsize = (10, 5), facecolor = 'w')
plt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.')
plt.yticks(ticks=range(len(apid_axis)), labels=apid_axis)
plt.xlabel('Space Packet timestamp')
plt.ylabel('APID')
plt.title('Lucy Virtual Channel 0 APID distribution');
vc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets)
if h.APID == apid] for apid in vc0_apids}
plot_apids(vc0_by_apid)
Explanation: Virtual channel 0
Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.
End of explanation
tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b,
1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b,
1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b,
1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b,
17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb,
46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub,
47091: Int16ub, 47092: Int16ub,
}
values = list()
for packet in vc0_by_apid[5]:
t = timestamps([packet])[0]
packet = packet[6+5:] # skip primary and secondary headers
while True:
tag = Int16ub.parse(packet)
packet = packet[2:]
value = tags[tag].parse(packet)
packet = packet[tags[tag].sizeof():]
values.append((tag, value, t))
if len(packet) == 0:
break
values_keys = {v[0] for v in values}
values = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys}
for k in sorted(values_keys):
vals = values[k]
plt.figure()
plt.title(f'Key {k}')
plt.plot([v[0] for v in vals], [v[1] for v in vals], '.')
Explanation: APID 5
As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag.
End of explanation |
10,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
import numpy
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
x = numpy.array(x)
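    # min-max scale each pixel position across the batch (min/ptp reduce over axis 0);
    # a simpler common alternative for 8-bit image data would be: return x / 255.0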
x_normed = (x - x.min(0)) / x.ptp(0)
return x_normed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
nb_classes = 10
targets = numpy.array(x).reshape(-1)
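    # indexing the 10x10 identity matrix with the label array selects one one-hot row per label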
one_hot_targets = numpy.eye(nb_classes)[targets]
return one_hot_targets
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=(None, image_shape[0],
image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
batch_size, in_width, in_height, in_depth = x_tensor.get_shape().as_list()
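    # tf.nn.conv2d expects the filter shape [filter_height, filter_width, in_channels, out_channels]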
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1],
in_depth, conv_num_outputs]))
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1],
padding='SAME')
conv = tf.nn.bias_add(conv, biases)
conv = tf.nn.relu(conv)
filter_shape = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
return tf.nn.max_pool(conv, filter_shape, strides, 'SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = [2,2]
conv_strides = [1,1]
pool_ksize = [2,2]
pool_strides = [1,1]
conv_num_outputs = 16
x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
conv_ksize = [3,3]
conv_strides = [2,2]
pool_ksize = [2,2]
pool_strides = [2,2]
conv_num_outputs = 40
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
conv_num_outputs = 10
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
num_outputs = 60
x_tensor = fully_conn(x_tensor, num_outputs)
num_outputs = 40
x_tensor = fully_conn(x_tensor, num_outputs)
num_outputs = 20
x_tensor = fully_conn(x_tensor, num_outputs)
num_classes = 10
return output(x_tensor, num_classes)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
cost_value = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
acc = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('cost: {}'.format(cost_value))
print('accuracy: {}'.format(acc))
print('validation accuracy: {}'.format(validation_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 70
batch_size = 256
keep_probability = 0.9
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
10,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
I'm Wil Langford. I like math, Python, and games.
me@github
This talk is located in my DecoratorsTalk2015 repository at
Step2: function
Step3: function object
Step4: Uh-oh...
Step6: wrapper
Step8: decorator
Step9: Dust off your hands and kick back. We're completely, totally...
Step11: ... not done yet.
Step12: Usage of @property
Step13: Create your own!
Step14: and then...
Avoid exceptions when running the three cells below. If you run into problems, try adjusting your decorators above. | Python Code:
global PASSWORD
PASSWORD = "Guild o' Code"
Explanation: Intro
I'm Wil Langford. I like math, Python, and games.
me@github
This talk is located in my DecoratorsTalk2015 repository at:
https://github.com/wil-langford/DecoratorsTalk2015
(or http://goo.gl/AAJ7U0 for short)
To prepare for the talk, please clone the repository and load the iPython notebook. Let me know if you need any help with this step.
Concept roll call
function
docstring
function object
wrapper
decorator
@property
(optional) protocol
(optional) descriptor
Before we go any further, we need to choose a super-secure password.
End of explanation
def halver(num):
Returns half of the 'num' argument. # docstring
return num / 2
Explanation: function
End of explanation
print "halver's name:", halver.__name__
print "halver's docstring:", halver.__doc__
halver?
print halver(20)
print halver(10)
Explanation: function object
End of explanation
print halver(5)
print [i/2 for i in range(10)]
print [i/2.0 for i in range(10)]
print [float(i)/2 for i in range(10)]
Explanation: Uh-oh...
End of explanation
def float_wrapper(func):
def wrapper(float_me):
return func(float(float_me))
return wrapper
print float_wrapper(halver)(10)
print float_wrapper(halver)(5)
halve = float_wrapper(halver)
print halver(10)
print halver(5)
def halver(num):
Returns half of the 'num' argument.
This is a reimplementation of halver().
return num / 2
halver = float_wrapper(halver)
print halve(5)
Explanation: wrapper
End of explanation
@float_wrapper
def halver2(num):
Returns half of the 'num' argument.
This is a re-reimplementation of halver().
return num / 2
print halver2(5)
Explanation: decorator
End of explanation
print "halver's name:", halver.__name__
print "halver's docstring:", halver.__doc__
print "halver2's name:", halver2.__name__
print "halver2's docstring:", halver2.__doc__
halver?
Explanation: Dust off your hands and kick back. We're completely, totally...
End of explanation
import functools
def better_float_wrapper(func):
@functools.wraps(func) # this line is the only difference
def wrapper(num):
return func(float(num))
return wrapper
@better_float_wrapper
def halver3(num):
Returns half of the 'num' argument.
This is a re-reimplementation of halve().
return num / 2
print halver3(5)
print "halver3's name:", halver3.__name__
print "halver3's docstring:", halver3.__doc__
halver3?
Explanation: ... not done yet.
End of explanation
class StrictAttributeHolder(object):
def __init__(self):
self._int_val = None
@property
def int_val(self):
if self._int_val is not None:
return self._int_val
else:
raise Exception("Can't read what isn't written!")
@int_val.setter
def int_val(self, value):
if isinstance(value, int):
self._int_val = value
else:
raise TypeError("Can't set int_val to a non-int value!")
sah = StrictAttributeHolder()
print sah.int_val
sah.int_val = 5
print sah.int_val
sah.int_val = 5.0
sah.int_val = [5]
Explanation: Usage of @property
End of explanation
# Create a @timed_function decorator that computes and prints the execution time of
# any function that it wraps. Use *args and **kwargs to capture all function
# arguments.
def timed_function(func):
# Your implementation here
pass
# Create a @case_mod decorator that gives any function that it wraps an
# all-lowercase version of an input string and then returns an all-uppercase
# version of the wrapped function's output
def case_mod(func):
# Your implementation here
pass
# Create a @secured_function decorator that looks for a global password before
# running the wrapped function and will raise an exception instead of running
# the wrapped function if the wrong password is provided. Use *args and **kwargs
# to capture all function arguments.
def secured_function(func):
global PASSWORD
# Your implementation here
pass
Explanation: Create your own!
End of explanation
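If you get stuck, one possible set of implementations is sketched below. It is not the only valid reading of the requirements: the timing output format and exception messages are arbitrary, and the "right" password is assumed to be the one chosen at the top of the notebook.
import time
import functools

def timed_function(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print "execution time: {:.4f} s".format(time.time() - start)
        return result
    return wrapper

def case_mod(func):
    @functools.wraps(func)
    def wrapper(input_string):
        # lowercase on the way in, uppercase on the way out
        return func(input_string.lower()).upper()
    return wrapper

def secured_function(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global PASSWORD
        if PASSWORD != "Guild o' Code":
            raise Exception("Wrong password -- not running {}!".format(func.__name__))
        return func(*args, **kwargs)
    return wrapper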
# Execute this cell without modifying it.
import time
picky_eater_food = "You can now write your own decorators!".split(' ')
@secured_function
@timed_function
@case_mod
def picky_eater(food):
if food.islower():
time.sleep(0.1 * len(food))
return food
else:
raise Exception("I don't wanna eat this!")
# Change ONLY the assigned value of PASSWORD in this cell, then
# execute it.
global PASSWORD
PASSWORD = ''
# Run this cell as-is without any exceptions cropping up and with
# an execution time printed out for each morsel in picky_eater_food.
for morsel in picky_eater_food:
print picky_eater(morsel)
Explanation: and then...
Avoid exceptions when running the three cells below. If you run into problems, try adjusting your decorators above.
End of explanation |
10,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1>Introduction to Data Analysis with Python</h1>
<br>
<h3>Dr. Thomas Wiecki</h3>
<br>
<h3>Lead Data Scientist</h3>
<img width=40% src="http
Step1: Lists
Step2: Dictionaries
Step3: Comprehensions
Step4: Level 1
Step5: Mixed types
Step6: Grouping
Step7: Seaborn
Step8: 2D distributions
Step9: All pairwise combinations
Step10: Seaborn
Step11: Level 2
Step12: Advanced example
Step13: Level 3
Step14: Cython
Step15: Numba
Step17: Level 4
Step18: Interactive data visualization with Bokeh | Python Code:
3 * 4
Explanation: <center>
<h1>Introduction to Data Analysis with Python</h1>
<br>
<h3>Dr. Thomas Wiecki</h3>
<br>
<h3>Lead Data Scientist</h3>
<img width=40% src="http://i2.wp.com/stuffled.com/wp-content/uploads/2014/09/Quantopian-Logo-EPS-vector-image.png?resize=1020%2C680">
</center>
<img src="http://cdn.nutanix.com/wp-content/uploads/2013/09/5530553658_cf0a5dd64d_z.jpg">
Source: http://www.nutanix.com/2013/09/16/the-cup-has-been-flipped/
<center>
<h1><strike>Introduction to Data Analysis with Python</strike></h1>
<h1>The Path of the PyData Ninja</h1>
<br>
<h3>Dr. Thomas Wiecki</h3>
<br>
<h3>Lead Data Scientist</h3>
<img width=40% src="http://i2.wp.com/stuffled.com/wp-content/uploads/2014/09/Quantopian-Logo-EPS-vector-image.png?resize=1020%2C680">
</center>
About me
Lead Data Scientist at Quantopian Inc: Building a crowd sourced hedge fund.
PhD from Brown University -- research on computational neuroscience and machine learning using Bayesian modeling.
Twitter: @twiecki
GitHub: @twiecki
Blog: http://twiecki.github.io
Developer of PyMC3.
<a href="https://quantopian.com"><img width=40% src="http://i2.wp.com/stuffled.com/wp-content/uploads/2014/09/Quantopian-Logo-EPS-vector-image.png?resize=1020%2C680"></a>
We back the best investment algorithms with investor capital, trading operations, and technology.
Do your research in our hosted IPython environment using stock price history, corporate fundamental data, and other data sets.
Write your algorithm in your browser. Then backtest it, for free, over 13 years of minute-level data.
When you enter the contest, your algorithm will also be considered for our hedge fund.
We're hiring in Düsseldorf: Operations Engineer!
Why use Python for data analysis?
Python is a general purpose language -> No hodge-podge of perl, bash, matlab, fortran.
Very easy to learn.
Quality and quantity of data analysis libraries is very high and growing at a rapid pace.
What are the alternatives?
R: "The best thing about R is that it was written by statisticians. The worst thing about R is that it was written by statisticians." Bow Cogwill
Matlab: $$$, not open
Jobs!
<img src="http://www.indeed.com/trendgraph/jobgraph.png?q=R++and+%28%22big+data%22+or+%22statistical+analysis%22+or+%22data+mining%22+or+%22data+analytics%22+or+%22machine+learning%22+or+%22quantitative+analysis%22+or+%22business+analytics%22+or+%22statistical+software%22+or+%22predictive+modeling%22%29+%21%22R+D%22+%21%22A+R%22+%21%22H+R%22+%21%22R+N%22++%21toys+%21kids+%21%22+R+Walgreen%22+%21walmart+%21%22HVAC+R%22+%21%22R+Bard%22++%2C+python+and+%28%22big+data%22+or+%22statistical+analysis%22+or+%22data+mining%22+or+%22data+analytics%22+or+%22machine+learning%22+or+%22quantitative+analysis%22+or+%22business+analytics%22+or+%22statistical+software%22+or+%22predictive+modeling%22%29">
<center> <h2>The PyData Stack</h2>
Source: Jake VanderPlas: State of the Tools
<center><img src='pydata_stack-0.jpg' width=50%></center>
<center> <h2>The PyData Stack</h2>
<center><img src='pydata_stack-1.jpg' width=50%></center>
<center> <h2>The PyData Stack</h2>
<center><img src='pydata_stack-2.jpg' width=50%></center>
<center> <h2>The PyData Stack</h2>
<center><img src='pydata_stack-3.jpg' width=50%></center>
<center> <h2>The PyData Stack</h2>
<center><img src='pydata_stack-4.jpg' width=50%></center>
Level 0: n00b
<img src="beginner.png">
How to get started
Start by installing the Anaconda Python distribution (use Python 3.4)
Install the Jupyter notebook (formerly the IPython notebook)
Do a basic Python tutorial to get a handle on the syntax, e.g. Learn Python the Hard Way
Python basics
Interpreted and interactive
End of explanation
x = [1, 2, 3]
print(x)
x.append(4)
print(x)
Explanation: Lists
End of explanation
measurements = {'height': [1.70, 1.80, 1.50], 'weight': [60, 120, 50]}
measurements
measurements['height']
Explanation: Dictionaries
End of explanation
x = [1, 2, 3, 4]
[i**2 for i in x]
def calc_bmi(weight, height):
return weight / height**2
[calc_bmi(w, h) for w, h in zip(measurements['weight'], measurements['height'])]
Explanation: Comprehensions
End of explanation
import pandas as pd
import numpy as np
s = pd.Series([1,3,5,np.nan,6,8])
s
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
df[df.A > 0]
df.mean()
df.mean(axis='columns')
Explanation: Level 1: "The Pandas Wrangler"
<img src="amateur.png">
How to become a "Pandas Wrangler"
Learn Pandas (data wrangling): http://pandas.pydata.org/pandas-docs/stable/tutorials.html
Learn Seaborn (data visualization): http://stanford.edu/~mwaskom/software/seaborn/
Why not start with NumPy and Matplotlib?
NumPy and Matplotlib are the core, lower-level libraries that Pandas and Seaborn build on.
For most analyses you get better results faster by starting with Pandas and Seaborn.
For more motivation, see http://twiecki.github.io/blog/2014/11/18/python-for-data-science/
Pandas
End of explanation
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2
df2.dtypes
Explanation: Mixed types
End of explanation
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
df.groupby('A').sum()
df.groupby(['A', 'B']).sum()
Explanation: Grouping
End of explanation
%matplotlib inline
import seaborn as sns
x = np.random.normal(size=100)
sns.distplot(x);
Explanation: Seaborn: Generating statistical plots
End of explanation
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
df
sns.jointplot(x="x", y="y", data=df, kind="kde");
Explanation: 2D distributions
End of explanation
iris = sns.load_dataset("iris")
sns.pairplot(iris);
Explanation: All pairwise combinations
End of explanation
tips = sns.load_dataset("tips")
tips.head()
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips);
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
col_wrap=2, size=3);
sns.factorplot(x="time", y="total_bill", hue="smoker",
col="day", data=tips, kind="box", size=4, aspect=.5);
Explanation: Seaborn: Regressions
End of explanation
from sklearn import svm
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
clf.predict([[0, .5]])
Explanation: Level 2: "The Kaggle top scorer"
<img src="semi-pro.png">
Lots of machine learning and stats libraries
SciPy: comprehensive library of numerical routines like optimizers, integrators, FFT.
scikit-learn: the go-to machine learning library
statsmodels: Frequentist statistics
SymPy: Symbolic Math
PyMC3: Probabilistic programming in Python
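scikit-learn gets its own example below; as a quick taste of statsmodels, an ordinary least squares fit with its formula API might look like this (the toy data is made up):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
df_ols = pd.DataFrame({'y': np.random.randn(100), 'x': np.random.randn(100)})
print(smf.ols('y ~ x', data=df_ols).fit().summary())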
scikit-learn
Taken from http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
End of explanation
from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
digits = datasets.load_digits()
import matplotlib.pyplot as plt
# Display the last digit in the dataset
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.grid('off')
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
clf = GridSearchCV(SVC(C=1), tuned_parameters, cv=5)
clf.fit(X_train, y_train)
print(clf.best_params_)
y_true, y_pred = y_test, clf.predict(X_test)
ax = sns.heatmap(confusion_matrix(y_true, y_pred))
ax.set(xlabel='predicted label', ylabel='true label');
Explanation: Advanced example: Grid Search with Cross-Validation to find hyper parameters
Taken from http://scikit-learn.org/stable/auto_examples/grid_search_digits.html and http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html
End of explanation
import numpy as np
X = np.random.random((1000, 3))
def pairwise_python(X):
M = X.shape[0]
N = X.shape[1]
D = np.empty((M, M), dtype=np.float)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = np.sqrt(d)
return D
%timeit pairwise_python(X)
Explanation: Level 3: "Lord of Speed"
<img src="pro.png">
Python is slow!
The interpreter itself is indeed quite slow (just as Matlab and R are slow when you write explicit loops).
Vectorizing computations (i.e. the Matlab way) often leads to unreadable code.
Fortunately, there are great tools that generate C code for you:
Cython: Write Python-like syntax that can be translated to fast C-code and called from Python.
Numba: Directly write Python and auto-translate to LLVM.
Theano: Write numerical expressions in a NumPy-like syntax to build up a compute graph that can be compiled.
PyCUDA: GPU programming.
Comparing Python, Cython and Numba
Taken from https://jakevdp.github.io/blog/2013/06/15/numba-vs-cython-take-2/
End of explanation
%load_ext cython
%%cython
import numpy as np
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False)
@cython.wraparound(False)
def pairwise_cython(double[:, ::1] X):
cdef int M = X.shape[0]
cdef int N = X.shape[1]
cdef double tmp, d
cdef double[:, ::1] D = np.empty((M, M), dtype=np.float64)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = sqrt(d)
return np.asarray(D)
%timeit pairwise_cython(X)
Explanation: Cython
End of explanation
from numba.decorators import jit
pairwise_numba = jit(pairwise_python)
# Run once to compile before timing
pairwise_numba(X)
%timeit pairwise_numba(X)
Explanation: Numba
End of explanation
!ls -lahL POIWorld.csv
from dask import dataframe as dd
columns = ["name", "amenity", "Longitude", "Latitude"]
data = dd.read_csv('POIWorld.csv', usecols=columns)
data
with_name = data[data.name.notnull()]
is_starbucks = with_name.name.str.contains('[Ss]tarbucks')
is_dunkin = with_name.name.str.contains('[Dd]unkin')
starbucks = with_name[is_starbucks]
dunkin = with_name[is_dunkin]
from dask.diagnostics import ProgressBar
with ProgressBar():
starbucks_count, dunkin_count = dd.compute(starbucks.name.count(), dunkin.name.count())
starbucks_count, dunkin_count
locs = dd.compute(starbucks.Longitude,
starbucks.Latitude,
dunkin.Longitude,
dunkin.Latitude)
# extract arrays of values fro the series:
lon_s, lat_s, lon_d, lat_d = [loc.values for loc in locs]
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
def draw_USA():
initialize a basemap centered on the continental USA
plt.figure(figsize=(14, 10))
return Basemap(projection='lcc', resolution='l',
llcrnrlon=-119, urcrnrlon=-64,
llcrnrlat=22, urcrnrlat=49,
lat_1=33, lat_2=45, lon_0=-95,
area_thresh=10000)
m = draw_USA()
# Draw map background
m.fillcontinents(color='white', lake_color='#eeeeee')
m.drawstates(color='lightgray')
m.drawcoastlines(color='lightgray')
m.drawcountries(color='lightgray')
m.drawmapboundary(fill_color='#eeeeee')
# Plot the values in Starbucks Green and Dunkin Donuts Orange
style = dict(s=5, marker='o', alpha=0.5, zorder=2)
m.scatter(lon_s, lat_s, latlon=True,
label="Starbucks", color='#00592D', **style)
m.scatter(lon_d, lat_d, latlon=True,
label="Dunkin' Donuts", color='#FC772A', **style)
plt.legend(loc='lower left', frameon=False);
Explanation: Level 4: "High Priest of Big Data"
<img src="master.png">
Lots of things happening!
Big Data
Blaze + Dask
Ibis
PySpark
bcolz
Interactive data visualization
Bokeh
Plotly
pyxley
Work interactively on Big Data with Dask
Taken from https://jakevdp.github.io/blog/2015/08/14/out-of-core-dataframes-in-python/
End of explanation
from bokeh.io import output_notebook
from bokeh.resources import CDN
from bokeh.plotting import figure, show
output_notebook(resources=CDN)
from __future__ import print_function
from math import pi
from bokeh.browserlib import view
from bokeh.document import Document
from bokeh.embed import file_html
from bokeh.models.glyphs import Circle, Text
from bokeh.models import (
BasicTicker, ColumnDataSource, Grid, GridPlot, LinearAxis,
DataRange1d, PanTool, Plot, WheelZoomTool
)
from bokeh.resources import INLINE
from bokeh.sampledata.iris import flowers
from bokeh.plotting import show
colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
flowers['color'] = flowers['species'].map(lambda x: colormap[x])
source = ColumnDataSource(
data=dict(
petal_length=flowers['petal_length'],
petal_width=flowers['petal_width'],
sepal_length=flowers['sepal_length'],
sepal_width=flowers['sepal_width'],
color=flowers['color']
)
)
text_source = ColumnDataSource(
data=dict(xcenter=[125], ycenter=[135])
)
xdr = DataRange1d()
ydr = DataRange1d()
def make_plot(xname, yname, xax=False, yax=False, text=None):
plot = Plot(
x_range=xdr, y_range=ydr, background_fill="#efe8e2",
border_fill='white', title="", min_border=2, h_symmetry=False, v_symmetry=False,
plot_width=150, plot_height=150)
circle = Circle(x=xname, y=yname, fill_color="color", fill_alpha=0.2, size=4, line_color="color")
r = plot.add_glyph(source, circle)
xdr.renderers.append(r)
ydr.renderers.append(r)
xticker = BasicTicker()
if xax:
xaxis = LinearAxis()
plot.add_layout(xaxis, 'below')
xticker = xaxis.ticker
plot.add_layout(Grid(dimension=0, ticker=xticker))
yticker = BasicTicker()
if yax:
yaxis = LinearAxis()
plot.add_layout(yaxis, 'left')
yticker = yaxis.ticker
plot.add_layout(Grid(dimension=1, ticker=yticker))
plot.add_tools(PanTool(), WheelZoomTool())
if text:
text = " ".join(text.split('_'))
text = Text(
x={'field':'xcenter', 'units':'screen'},
y={'field':'ycenter', 'units':'screen'},
text=[text], angle=pi/4, text_font_style="bold", text_baseline="top",
text_color="#ffaaaa", text_alpha=0.7, text_align="center", text_font_size="28pt"
)
plot.add_glyph(text_source, text)
return plot
xattrs = ["petal_length", "petal_width", "sepal_width", "sepal_length"]
yattrs = list(reversed(xattrs))
plots = []
for y in yattrs:
row = []
for x in xattrs:
xax = (y == yattrs[-1])
yax = (x == xattrs[0])
text = x if (x==y) else None
plot = make_plot(x, y, xax, yax, text)
row.append(plot)
plots.append(row)
grid = GridPlot(children=plots, title="iris_splom")
show(grid)
Explanation: Interactive data visualization with Bokeh
End of explanation |