Unnamed: 0 (int64) | text_prompt (string) | code_prompt (string)
---|---|---|
13,300 | Given the following text description, write Python code to implement the functionality described.
Description:
Create a function that takes integers, floats, or strings representing
real numbers, and returns the larger variable in its given variable type.
Return None if the values are equal.
Note: If a real number is represented as a string, the floating point might be . or ,
compare_one(1, 2.5) ➞ 2.5
compare_one(1, "2,3") ➞ "2,3"
compare_one("5,1", "6") ➞ "6"
compare_one("1", 1) ➞ None
| Python Code:
def compare_one(a, b):
temp_a, temp_b = a, b
if isinstance(temp_a, str): temp_a = temp_a.replace(',','.')
if isinstance(temp_b, str): temp_b = temp_b.replace(',','.')
if float(temp_a) == float(temp_b): return None
return a if float(temp_a) > float(temp_b) else b |
13,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Given a list of values, there are several ways to count the frequency of each value.
Step2: You can use a Python dictionary
Step3: You can use a Counter (which is a dictionary with additional methods)
Step4: Or you can use the Hist object provided by thinkstats2
Step5: Hist provides Freq, which looks up the frequency of a value.
Step6: You can also use the bracket operator, which does the same thing.
Step7: If the value does not appear, it has frequency 0.
Step8: The Values method returns the values
Step9: So you can iterate the values and their frequencies like this
Step10: Or you can use the Items method
Step11: thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.
For example Hist plots the values and their frequencies as a bar graph.
Config takes parameters that label the x and y axes, among other things.
Step12: As an example, I'll replicate some of the figures from the book.
First, I'll load the data from the pregnancy file and select the records for live births.
Step13: Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. The label attribute appears in the legend when you plot the Hist.
Step14: Before plotting the ages, I'll apply floor to round down
Step15: As an exercise, plot the histogram of pregnancy lengths (column prglngth).
Step16: Hist provides smallest, which selects the lowest values and their frequencies.
Step17: Use Largest to display the longest pregnancy lengths.
Step18: From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.
Step19: We can use width and align to plot two histograms side-by-side.
Step20: Series provides methods to compute summary statistics
Step21: Here are the mean and standard deviation
Step22: As an exercise, confirm that std is the square root of var
Step23: Here are the mean pregnancy lengths for first babies and others
Step24: And here's the difference (in weeks)
Step26: This function computes the Cohen effect size, which is the difference in means expressed in number of standard deviations
Step27: Compute the Cohen effect size for the difference in pregnancy length for first babies and others.
Step28: Exercises
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others.
Compute Cohen’s effect size to quantify the difference between the groups. How does it compare to the difference in pregnancy length?
Step29: For the next few exercises, we'll load the respondent file
Step30: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
Step31: Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.
Step32: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
Step33: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
Step34: Use Hist.Largest to find the largest values of <tt>parity</tt>.
Step35: Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.
Use <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.
Step36: Find the largest parities for high income respondents.
Step37: Compare the mean <tt>parity</tt> for high income respondents and others.
Step38: Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others? | Python Code:
import numpy as np
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
Explanation: Chapter 2
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
t = [1, 2, 2, 3, 5]
Explanation: Given a list of values, there are several ways to count the frequency of each value.
End of explanation
hist = {}
for x in t:
hist[x] = hist.get(x, 0) + 1
hist
Explanation: You can use a Python dictionary:
End of explanation
from collections import Counter
counter = Counter(t)
counter
Explanation: You can use a Counter (which is a dictionary with additional methods):
End of explanation
import thinkstats2
hist = thinkstats2.Hist([1, 2, 2, 3, 5])
hist
Explanation: Or you can use the Hist object provided by thinkstats2:
End of explanation
hist.Freq(2)
Explanation: Hist provides Freq, which looks up the frequency of a value.
End of explanation
hist[2]
Explanation: You can also use the bracket operator, which does the same thing.
End of explanation
hist[4]
Explanation: If the value does not appear, it has frequency 0.
End of explanation
hist.Values()
Explanation: The Values method returns the values:
End of explanation
for val in sorted(hist.Values()):
print(val, hist[val])
Explanation: So you can iterate the values and their frequencies like this:
End of explanation
for val, freq in hist.Items():
print(val, freq)
Explanation: Or you can use the Items method:
End of explanation
import thinkplot
thinkplot.Hist(hist)
thinkplot.Config(xlabel='value', ylabel='frequency')
Explanation: thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.
For example Hist plots the values and their frequencies as a bar graph.
Config takes parameters that label the x and y axes, among other things.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import nsfg
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
Explanation: As an example, I'll replicate some of the figures from the book.
First, I'll load the data from the pregnancy file and select the records for live births.
End of explanation
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
Explanation: Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. The label attribute appears in the legend when you plot the Hist.
End of explanation
ages = np.floor(live.agepreg)
hist = thinkstats2.Hist(ages, label='agepreg')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='years', ylabel='Count')
Explanation: Before plotting the ages, I'll apply floor to round down:
End of explanation
# Solution
hist = thinkstats2.Hist(live.prglngth, label='prglngth')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='weeks', ylabel='Count')
Explanation: As an exercise, plot the histogram of pregnancy lengths (column prglngth).
End of explanation
for weeks, freq in hist.Smallest(10):
print(weeks, freq)
Explanation: Hist provides smallest, which selects the lowest values and their frequencies.
End of explanation
# Solution
for weeks, freq in hist.Largest(10):
print(weeks, freq)
Explanation: Use Largest to display the longest pregnancy lengths.
End of explanation
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
first_hist = thinkstats2.Hist(firsts.prglngth, label='first')
other_hist = thinkstats2.Hist(others.prglngth, label='other')
Explanation: From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.
End of explanation
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(first_hist, align='right', width=width)
thinkplot.Hist(other_hist, align='left', width=width)
thinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])
Explanation: We can use width and align to plot two histograms side-by-side.
End of explanation
mean = live.prglngth.mean()
var = live.prglngth.var()
std = live.prglngth.std()
Explanation: Series provides methods to compute summary statistics:
End of explanation
mean, std
Explanation: Here are the mean and standard deviation:
End of explanation
# Solution
np.sqrt(var) == std
Explanation: As an exercise, confirm that std is the square root of var:
End of explanation
firsts.prglngth.mean(), others.prglngth.mean()
Explanation: Here are the mean pregnancy lengths for first babies and others:
End of explanation
firsts.prglngth.mean() - others.prglngth.mean()
Explanation: And here's the difference (in weeks):
End of explanation
def CohenEffectSize(group1, group2):
Computes Cohen's effect size for two groups.
group1: Series or DataFrame
group2: Series or DataFrame
returns: float if the arguments are Series;
Series if the arguments are DataFrames
diff = group1.mean() - group2.mean()
var1 = group1.var()
var2 = group2.var()
n1, n2 = len(group1), len(group2)
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / np.sqrt(pooled_var)
return d
Explanation: This function computes the Cohen effect size, which is the difference in means expressed in number of standard deviations:
End of explanation
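# Toy illustration (not part of the original notebook; values are made up):
# two samples whose means differ by 0.75 while each has a standard deviation of
# about 1.41 give a Cohen effect size of roughly -0.53.
toy_group1 = np.array([10., 11., 12., 13., 14.])
toy_group2 = toy_group1 + 0.75
CohenEffectSize(toy_group1, toy_group2)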
# Solution
CohenEffectSize(firsts.prglngth, others.prglngth)
Explanation: Compute the Cohen effect size for the difference in pregnancy length for first babies and others.
End of explanation
# Solution
firsts.totalwgt_lb.mean(), others.totalwgt_lb.mean()
# Solution
CohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)
Explanation: Exercises
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others.
Compute Cohen’s effect size to quantify the difference between the groups. How does it compare to the difference in pregnancy length?
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz")
resp = nsfg.ReadFemResp()
Explanation: For the next few exercises, we'll load the respondent file:
End of explanation
# Solution
hist = thinkstats2.Hist(resp.totincr)
thinkplot.Hist(hist, label='totincr')
thinkplot.Config(xlabel='income (category)', ylabel='Count')
Explanation: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
End of explanation
# Solution
hist = thinkstats2.Hist(resp.ager)
thinkplot.Hist(hist, label='ager')
thinkplot.Config(xlabel='age (years)', ylabel='Count')
Explanation: Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.
End of explanation
# Solution
hist = thinkstats2.Hist(resp.numfmhh)
thinkplot.Hist(hist, label='numfmhh')
thinkplot.Config(xlabel='number of people', ylabel='Count')
Explanation: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
End of explanation
# Solution
# This distribution is positive-valued and skewed to the right.
hist = thinkstats2.Hist(resp.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Config(xlabel='parity', ylabel='Count')
Explanation: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
End of explanation
# Solution
hist.Largest(10)
Explanation: Use Hist.Largest to find the largest values of <tt>parity</tt>.
End of explanation
# Solution
rich = resp[resp.totincr == 14]
hist = thinkstats2.Hist(rich.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Config(xlabel='parity', ylabel='Count')
Explanation: Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.
Use <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.
End of explanation
# Solution
hist.Largest(10)
Explanation: Find the largest parities for high income respondents.
End of explanation
# Solution
not_rich = resp[resp.totincr < 14]
rich.parity.mean(), not_rich.parity.mean()
Explanation: Compare the mean <tt>parity</tt> for high income respondents and others.
End of explanation
# Solution
# This effect is about 10 times stronger than the difference in pregnancy length.
# But remembering the design of the study, we should not make too much of this
# apparent effect.
CohenEffectSize(rich.parity, not_rich.parity)
Explanation: Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?
End of explanation |
13,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Hierarchical model for Rugby prediction
@Author
Step2: This is a Rugby prediction exercise. So we'll input some data
Step3: What do we want to infer?
We want to infer the latent parameters (every team's strength) that are generating the data we observe (the scorelines).
Moreover, we know that the scorelines are a noisy measurement of team strength, so ideally, we want a model that makes it easy to quantify our uncertainty about the underlying strengths.
Often we don't know what the Bayesian Model is explicitly, so we have to 'estimate' the Bayesian Model
If we can't solve something, approximate it.
Markov-Chain Monte Carlo (MCMC) instead draws samples from the posterior.
Fortunately, this algorithm can be applied to almost any model.
What do we want?
We want to quantify our uncertainty
We want to also use this to generate a model
We want the answers as distributions not point estimates
What assumptions do we know for our 'generative story'?
We know that the Six Nations in Rugby only has 6 teams - they each play each other once
We have data from last year!
We also know that in sports scoring is modelled as a Poisson distribution
We consider home advantage to be a strong effect in sports
The model.
The league is made up of a total of T = 6 teams, playing each other once
in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as $y_{g1}$ and $y_{g2}$ respectively.
The vector of observed counts $\mathbb{y} = (y_{g1}, y_{g2})$ is modelled as independent Poisson
Step4: We did some munging above and adjustments of the data to make it tidier for our model.
Applying the log function to away scores and home scores is a standard trick in the sports analytics literature
Building of the model
We now build the model in PyMC3, specifying the global parameters, and the team-specific parameters and the likelihood function
Step5: We specified the model and the likelihood function
All this runs on a Theano graph under the hood
Now we need to fit our model using the Maximum A Posteriori algorithm to decide where to start our No-U-Turn Sampler
Step6: Results
From the above we can start to understand the different distributions of attacking strength and defensive strength.
These are probabilistic estimates and help us better understand the uncertainty in sports analytics
Step7: Covariates
We should do some exploration of the variables. | Python Code:
!date
import numpy as np
import pandas as pd
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
%matplotlib inline
import pymc3 as pm, theano.tensor as tt
Explanation: A Hierarchical model for Rugby prediction
@Author: Peadar Coyle
@email: [email protected]
@date: 31/12/15
I came across the following blog post on http://danielweitzenfeld.github.io/passtheroc/blog/2014/10/28/bayes-premier-league/
Based on the work of Baio and Blangiardo
In this example, we're going to reproduce the first model described in the paper using PyMC3.
Since I am a rugby fan I decided to apply the results of the paper Bayesian Football to the Six Nations.
Rugby is a physical sport popular worldwide.
Six Nations consists of Italy, Ireland, Scotland, England, France and Wales
Game consists of scoring tries (similar to touch downs) or kicking the goal.
Average player is something like 100kg and 1.82m tall.
Paul O'Connell the Irish captain is Height: 6' 6" (1.98 m) Weight: 243 lbs (110 kg)
We will use a data set only consisting of the Six Nations 2014 data, and use this to build a generative and explainable model about the Six Nations 2015.
Motivation
Your estimate of the strength of a team depends on your estimates of the other strengths
Ireland are a stronger team than Italy for example - but by how much?
Source for Results 2014 are Wikipedia.
We want to infer a latent parameter - that is the 'strength' of a team based only on their scoring intensity, and all we have are their scores and results, we can't accurately measure the 'strength' of a team.
Probabilistic Programming is a brilliant paradigm for modeling these latent parameters
End of explanation
data_csv = StringIO("""home_team,away_team,home_score,away_score
Wales,Italy,23,15
France,England,26,24
Ireland,Scotland,28,6
Ireland,Wales,26,3
Scotland,England,0,20
France,Italy,30,10
Wales,France,27,6
Italy,Scotland,20,21
England,Ireland,13,10
Ireland,Italy,46,7
Scotland,France,17,19
England,Wales,29,18
Italy,England,11,52
Wales,Scotland,51,3
France,Ireland,20,22""")
Explanation: This is a Rugby prediction exercise. So we'll input some data
End of explanation
df = pd.read_csv(data_csv)
teams = df.home_team.unique()
teams = pd.DataFrame(teams, columns=['team'])
teams['i'] = teams.index
df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_home'}).drop('team', 1)
df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_away'}).drop('team', 1)
observed_home_goals = df.home_score.values
observed_away_goals = df.away_score.values
home_team = df.i_home.values
away_team = df.i_away.values
num_teams = len(df.i_home.drop_duplicates())
num_games = len(home_team)
g = df.groupby('i_away')
att_starting_points = np.log(g.away_score.mean())
g = df.groupby('i_home')
def_starting_points = -np.log(g.away_score.mean())
Explanation: What do we want to infer?
We want to infer the latent parameters (every team's strength) that are generating the data we observe (the scorelines).
Moreover, we know that the scorelines are a noisy measurement of team strength, so ideally, we want a model that makes it easy to quantify our uncertainty about the underlying strengths.
Often we don't know what the Bayesian Model is explicitly, so we have to 'estimate' the Bayesian Model
If we can't solve something, approximate it.
Markov-Chain Monte Carlo (MCMC) instead draws samples from the posterior.
Fortunately, this algorithm can be applied to almost any model.
What do we want?
We want to quantify our uncertainty
We want to also use this to generate a model
We want the answers as distributions not point estimates
What assumptions do we know for our 'generative story'?
We know that the Six Nations in Rugby only has 6 teams - they each play each other once
We have data from last year!
We also know that in sports scoring is modelled as a Poisson distribution
We consider home advantage to be a strong effect in sports
The model.
The league is made up of a total of T = 6 teams, playing each other once
in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as $y_{g1}$ and $y_{g2}$ respectively.
The vector of observed counts $\mathbb{y} = (y_{g1}, y_{g2})$ is modelled as independent Poisson:
$y_{gj} | \theta_{gj} \sim Poisson(\theta_{gj})$
where the theta parameters represent the scoring intensity in the g-th game for the team playing at home (j=1) and away (j=2), respectively.
We model these parameters according to a formulation that has been used widely in the statistical literature, assuming a log-linear random effect model:
$$log \theta_{g1} = home + att_{h(g)} + def_{a(g)} $$
$$log \theta_{g2} = att_{a(g)} + def_{h(g)}$$
The parameter home represents the advantage for the team hosting the game and we assume that this effect is constant for all the teams and throughout the season
The scoring intensity is determined jointly by the attack and defense ability of the two teams involved, represented by the parameters att and def, respectively
Conversely, for each t = 1, ..., T, the team-specific effects are modelled as exchangeable from a common distribution:
$att_{t} \sim Normal(\mu_{att},\tau_{att})$ and $def_{t} \sim Normal(\mu_{def},\tau_{def})$
End of explanation
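# Quick illustrative sketch (not from the original notebook; all parameter values below are made up):
# the log-linear model above turns attack/defence strengths into Poisson scoring rates.
example_home, example_att_home, example_def_away = 0.3, 0.4, -0.2
example_att_away, example_def_home = 0.1, -0.3
example_theta_home = np.exp(example_home + example_att_home + example_def_away)  # ≈ 1.65
example_theta_away = np.exp(example_att_away + example_def_home)  # ≈ 0.82
example_theta_home, example_theta_away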
model = pm.Model()
with pm.Model() as model:
# global model parameters
home = pm.Normal('home', 0, tau=.0001)
tau_att = pm.Gamma('tau_att', .1, .1)
tau_def = pm.Gamma('tau_def', .1, .1)
intercept = pm.Normal('intercept', 0, tau=.0001)
# team-specific model parameters
atts_star = pm.Normal("atts_star",
mu =0,
tau =tau_att,
shape=num_teams)
defs_star = pm.Normal("defs_star",
mu =0,
tau =tau_def,
shape=num_teams)
atts = pm.Deterministic('atts', atts_star - tt.mean(atts_star))
defs = pm.Deterministic('defs', defs_star - tt.mean(defs_star))
home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team])
away_theta = tt.exp(intercept + atts[away_team] + defs[home_team])
# likelihood of observed data
home_points = pm.Poisson('home_points', mu=home_theta, observed=observed_home_goals)
away_points = pm.Poisson('away_points', mu=away_theta, observed=observed_away_goals)
Explanation: We did some munging above and adjustments of the data to make it tidier for our model.
Applying the log function to away scores and home scores is a standard trick in the sports analytics literature
Building of the model
We now build the model in PyMC3, specifying the global parameters, and the team-specific parameters and the likelihood function
End of explanation
pm.sample?
with model:
start = pm.find_MAP()
step = pm.NUTS(state=start)
trace = pm.sample(2000, step, init=start)
pm.traceplot(trace)
Explanation: We specified the model and the likelihood function
All this runs on a Theano graph under the hood
Now we need to fit our model using the Maximum A Posteriori algorithm to decide where to start our No-U-Turn Sampler
End of explanation
pm.forestplot(trace, varnames=['atts'], ylabels=['France', 'Ireland', 'Scotland', 'Italy', 'England', 'Wales'], main="Team Offense")
pm.forestplot(trace, varnames=['defs'], ylabels=['France', 'Ireland', 'Scotland', 'Italy', 'England', 'Wales'], main="Team Defense")
pm.plot_posterior?
pm.plot_posterior(trace[100:],
varnames=['defs'],
color='#87ceeb');
Explanation: Results
From the above we can start to understand the different distributions of attacking strength and defensive strength.
These are probabilistic estimates and help us better understand the uncertainty in sports analytics
End of explanation
df_trace = pm.trace_to_dataframe(trace[:1000])
import seaborn as sns
df_trace_att = df_trace[['atts_star__0','atts_star__1',
'atts_star__2',
'atts_star__3',
'atts_star__4',
'atts_star__5']]
df_trace_att.rename(columns={'atts_star__0':'atts_star_france','atts_star__1':'atts_star_ireland',
'atts_star__2':'atts_star_scotland',
'atts_star__3':'atts_star_italy',
'atts_star__4':'atts_star_england',
'atts_star__5':'atts_star_wales'}, inplace=True)
_ = sns.pairplot(df_trace_att)
Explanation: Covariates
We should do some exploration of the variables.
End of explanation |
13,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
https
Step7: Overview
Please see Chapter 3 for more details on logistic regression.
Implementing logistic regression in Python
The following implementation is similar to the Adaline implementation in Chapter 2 except that we replace the sum of squared errors cost function with the logistic cost function
$$J(\mathbf{w}) = \sum_{i=1}^{m} - y^{(i)} log \bigg( \phi\big(z^{(i)}\big) \bigg) - \big(1 - y^{(i)}\big) log\bigg(1-\phi\big(z^{(i)}\big)\bigg).$$
Step8: Reading-in the Iris data
Step9: A function for plotting decision regions | Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
https://github.com/1iyiwei/pyml
Python Machine Learning Essentials - Code Examples
Bonus Material - A Simple Logistic Regression Implementation
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
class LogisticRegression(object):
LogisticRegression classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
cost_ : list
Cost in every epoch.
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
y_val = self.activation(X)
errors = (y - y_val)
neg_grad = X.T.dot(errors)
self.w_[1:] += self.eta * neg_grad
self.w_[0] += self.eta * errors.sum()
self.cost_.append(self._logit_cost(y, self.activation(X)))
return self
def _logit_cost(self, y, y_val):
logit = -y.dot(np.log(y_val)) - ((1 - y).dot(np.log(1 - y_val)))
return logit
def _sigmoid(self, z):
return 1.0 / (1.0 + np.exp(-z))
def net_input(self, X):
Calculate net input
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
Activate the logistic neuron
z = self.net_input(X)
return self._sigmoid(z)
def predict_proba(self, X):
Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
Class 1 probability : float
        return self.activation(X)
def predict(self, X):
Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
class : int
Predicted class label.
# equivalent to np.where(self.activation(X) >= 0.5, 1, 0)
return np.where(self.net_input(X) >= 0.0, 1, 0)
Explanation: Overview
Please see Chapter 3 for more details on logistic regression.
Implementing logistic regression in Python
The following implementation is similar to the Adaline implementation in Chapter 2 except that we replace the sum of squared errors cost function with the logistic cost function
$$J(\mathbf{w}) = \sum_{i=1}^{m} - y^{(i)} log \bigg( \phi\big(z^{(i)}\big) \bigg) - \big(1 - y^{(i)}\big) log\bigg(1-\phi\big(z^{(i)}\big)\bigg).$$
End of explanation
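# Quick numeric check of the logistic cost above (illustrative only, not part of the original notebook):
# with made-up labels and predicted probabilities, this mirrors the _logit_cost method of the class.
import numpy as np
y_toy = np.array([1, 0, 1])
phi_toy = np.array([0.9, 0.2, 0.6])  # hypothetical predicted probabilities phi(z)
toy_cost = -y_toy.dot(np.log(phi_toy)) - (1 - y_toy).dot(np.log(1 - phi_toy))
toy_cost  # ≈ 0.105 + 0.223 + 0.511 ≈ 0.84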
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/iris/iris.data', header=None)
df.tail()
import numpy as np
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', 1, 0)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# standardize features
X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
Explanation: Reading-in the Iris data
End of explanation
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
%matplotlib inline
import matplotlib.pyplot as plt
lr = LogisticRegression(n_iter=500, eta=0.2).fit(X_std, y)
plt.plot(range(1, len(lr.cost_) + 1), np.log10(lr.cost_))
plt.xlabel('Epochs')
plt.ylabel('Cost')
plt.title('Logistic Regression - Learning rate 0.01')
plt.tight_layout()
plt.show()
plot_decision_regions(X_std, y, classifier=lr)
plt.title('Logistic Regression - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
Explanation: A function for plotting decision regions
End of explanation |
13,304 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Problem: | Problem:
import numpy as np
a = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
shift = 3
def solution(xs, n):
e = np.empty_like(xs)
if n >= 0:
e[:n] = np.nan
e[n:] = xs[:-n]
else:
e[n:] = np.nan
e[:n] = xs[-n:]
return e
result = solution(a, shift) |
13,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally used your count_letters function to solve the original question. | Python Code:
def round_down(n):
s = str(n)
if n <= 20:
return n
elif n < 100:
return int(s[0] + '0'), int(s[1])
elif n<1000:
return int(s[0] + '00'),int(s[1]),int(s[2])
assert round_down(5) == 5
assert round_down(55) == (50,5)
assert round_down(222) == (200,2,2)
def number_to_words(n):
Given a number n between 1-1000 inclusive return a list of words for the number.
lst = []
dic = {
0: 'zero',
1: 'one',
2: 'two',
3: 'three',
4: 'four',
5: 'five',
6: 'six',
7: 'seven',
8: 'eight',
9: 'nine',
10: 'ten',
11: 'eleven',
12: 'twelve',
13: 'thirteen',
14: 'fourteen',
15: 'fifteen',
16: 'sixteen',
17: 'seventeen',
18: 'eighteen',
19: 'nineteen',
20: 'twenty',
30: 'thirty',
40: 'forty',
50: 'fifty',
60: 'sixty',
70: 'seventy',
80: 'eighty',
90: 'ninety',
100: 'one hundred',
200: 'two hundred',
300: 'three hundred',
400: 'four hundred',
500: 'five hundred',
600: 'six hundred',
700: 'seven hundred',
800: 'eight hundred',
900: 'nine hundred'}
for i in range(1,n+1):
if i <= 20:
for entry in dic:
if i == entry:
lst.append(dic[i])
elif i < 100:
first,second = round_down(i)
for entry in dic:
if first == entry:
if second == 0:
lst.append(dic[first])
else:
lst.append(dic[first] + '-' + dic[second])
elif i <1000:
first,second,third = round_down(i)
for entry in dic:
if first == entry:
if second == 0 and third == 0:
lst.append(dic[first])
elif second == 0:
lst.append(dic[first] + ' and ' + dic[third])
elif second == 1:
#For handling the teen case
lst.append(dic[first] + ' and ' + dic[int(str(second)+str(third))])
elif third == 0:
#Here I multiply by 10 because round_down removes the 0 for my second digit
lst.append(dic[first] + ' and ' + dic[second*10])
else:
lst.append(dic[first] + ' and ' + dic[second*10] + '-' + dic[third])
elif i == 1000:
lst.append('one thousand')
return lst
number_to_words(5)
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
assert len(number_to_words(5))==5
assert len(number_to_words(900))==900
assert number_to_words(50)[-1]=='fifty'
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
Count the number of letters used to write out the words for 1-n inclusive.
lst2 = []
for entry in number_to_words(n):
count = 0
for char in entry:
if char != ' ' and char != '-':
count = count + 1
lst2.append(count)
return lst2
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
assert count_letters(1) == [3]
assert len(count_letters(342)) == 342
assert count_letters(5) == [3,3,5,4,4]
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
print(sum(count_letters(1000)))
print(sum(count_letters(998)))
assert True # use this for grading the answer to the original question.
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation |
13,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Workshop 4 - Performance Metrics
In this workshop we study 2 performance metrics (Spread and Inter-Generational Distance) on a GA optimizing the POM3 model.
Step2: To compute most measures, data(i.e objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize?
TODO2
Step10: Data Format
For our experiments we store the data in the following format.
data = {
"expt1"
Step13: Reference Set
Almost all the traditional measures you consider need a reference set for its computation. A theoretical reference set would be the ideal Pareto frontier. This is fine for
a) Mathematical Models
Step17: Spread
Calculating spread
Step20: IGD = inter-generational distance; i.e. how good are you compared to the best known?
Find a reference set (the best possible solutions)
For each optimizer
For each item in its final Pareto frontier
Find the nearest item in the reference set and compute the distance to it.
Take the mean of all the distances. This is IGD for the optimizer
Note that the less the mean IGD, the better the optimizer since
this means its solutions are closest to the best of the best. | Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
import pom3_ga, sys
import pickle
# TODO 1: Enter your unity ID here
__author__ = "<sbiswas4>"
Explanation: Workshop 4 - Performance Metrics
In this workshop we study 2 performance metrics (Spread and Inter-Generational Distance) on a GA optimizing the POM3 model.
End of explanation
def normalize(problem, points):
Normalize all the objectives
in each point and return them
meta = problem.objectives
all_objs = []
for point in points:
objs = []
for i, o in enumerate(problem.evaluate(point)):
low, high = meta[i].low, meta[i].high
# TODO 3: Normalize 'o' between 'low' and 'high'; Then add the normalized value to 'objs'
if high==low:
objs.append(0)
continue
else:
objs.append((o - low)/(high-low))
all_objs.append(objs)
return all_objs
Explanation: To compute most measures, data (i.e. objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize?
TODO2: So that no single entity/objective has undue advantage over the others just based on its unit. If the different objectives are standardized, they're brought to a level playing field to be compared easily.
End of explanation
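# Small made-up illustration of why we normalize: a cost of 5,000 on a 0-20,000 scale and a
# score of 5 on a 0-20 scale both map to 0.25, so neither objective dominates just because of its units.
(5000 - 0) / (20000 - 0), (5 - 0) / (20 - 0)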
Performing experiments for [5, 10, 50] generations.
problem = pom3_ga.POM3()
pop_size = 10
repeats = 10
test_gens = [5, 10, 50]
def save_data(file_name, data):
Save 'data' to 'file_name.pkl'
with open(file_name + ".pkl", 'wb') as f:
pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
def load_data(file_name):
Retrieve data from 'file_name.pkl'
with open(file_name + ".pkl", 'rb') as f:
return pickle.load(f)
def build(problem, pop_size, repeats, test_gens):
Repeat the experiment for 'repeats' number of repeats for each value in 'test_gens'
tests = {t: [] for t in test_gens}
tests[0] = [] # For Initial Population
for _ in range(repeats):
init_population = pom3_ga.populate(problem, pop_size)
pom3_ga.say(".")
for gens in test_gens:
tests[gens].append(normalize(problem, pom3_ga.ga(problem, init_population, retain_size=pop_size, gens=gens)[1]))
tests[0].append(normalize(problem, init_population))
print("\nCompleted")
return tests
Repeat Experiments
# tests = build(problem, pop_size, repeats, test_gens)
Save Experiment Data into a file
# save_data("dump", tests)
Load the experimented data from dump.
tests = load_data("dump")
Explanation: Data Format
For our experiments we store the data in the following format.
data = {
"expt1":[repeat1, repeat2, ...],
"expt2":[repeat1, repeat2, ...],
.
.
.
}
repeatx = [objs1, objs2, ....] // All of the final population
objs1 = [norm_obj1, norm_obj2, ...] // Normalized objectives of each member of the final population.
End of explanation
def make_reference(problem, *fronts):
Make a reference set comparing all the fronts.
Here the comparison we use is bdom. It can
be altered to use cdom as well
retain_size = len(fronts[0])
reference = []
for front in fronts:
reference+=front
def bdom(one, two):
Return True if 'one' dominates 'two'
else return False
:param one - [pt1_obj1, pt1_obj2, pt1_obj3, pt1_obj4]
:param two - [pt2_obj1, pt2_obj2, pt2_obj3, pt2_obj4]
dominates = False
for i, obj in enumerate(problem.objectives):
gt, lt = pom3_ga.gt, pom3_ga.lt
better = lt if obj.do_minimize else gt
            # TODO 3: Use the variables declared above to check if one dominates two
            if better(two[i], one[i]):
                return False
            if better(one[i], two[i]):
                dominates = True
        return dominates
def fitness(one, dom):
return len([1 for another in reference if dom(one, another)])
fitnesses = []
for point in reference:
fitnesses.append((fitness(point, bdom), point))
reference = [tup[1] for tup in sorted(fitnesses, reverse=True)]
return reference[:retain_size]
make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
Explanation: Reference Set
Almost all the traditional measures you consider need a reference set for its computation. A theoretical reference set would be the ideal Pareto frontier. This is fine for
a) Mathematical Models: Where we can solve the problem to obtain the set.
b) Low Runtime Models: Where we can do a one-time exhaustive run to obtain the model.
But most real world problems are neither mathematical nor have a low runtime. So what do we do? We compute an approximate reference set.
One possible way of constructing it is:
1. Take the final generation of all the treatments.
2. Select the best set of solutions from all the final generations
End of explanation
def eucledian(one, two):
Compute Eucledian Distance between
2 vectors. We assume the input vectors
are normalized.
:param one: Vector 1
:param two: Vector 2
:return:
# TODO 4: Code up the eucledian distance. https://en.wikipedia.org/wiki/Euclidean_distance
#dist = 0
return (sum([(o-t)**2 for o,t in zip(one, two)]) / len(one))**0.5
#return dist
def sort_solutions(solutions):
Sort a list of list before computing spread
def sorter(lst):
m = len(lst)
weights = reversed([10 ** i for i in xrange(m)])
return sum([element * weight for element, weight in zip(lst, weights)])
return sorted(solutions, key=sorter)
def closest(one, many):
min_dist = sys.maxint
closest_point = None
for this in many:
dist = eucledian(this, one)
if dist < min_dist:
min_dist = dist
closest_point = this
return min_dist, closest_point
def spread(obtained, ideals):
Calculate the spread (a.k.a diversity)
for a set of solutions
s_obtained = sort_solutions(obtained)
s_ideals = sort_solutions(ideals)
d_f = closest(s_ideals[0], s_obtained)[0]
d_l = closest(s_ideals[-1], s_obtained)[0]
n = len(s_ideals)
distances = []
for i in range(len(s_obtained)-1):
distances.append(eucledian(s_obtained[i], s_obtained[i+1]))
d_bar = sum(distances)/len(distances)
# TODO 5: Compute the value of spread using the definition defined in the previous cell.
d_sum = sum([abs(d_i - d_bar) for d_i in distances])
delta = (d_f + d_l + d_sum) / (d_f + d_l + (n-1)*d_bar)
return delta
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(spread(tests[5][0], ref))
print(spread(tests[10][0], ref))
print(spread(tests[50][0], ref))
Explanation: Spread
Calculating spread:
<img width=300 src="http://mechanicaldesign.asmedigitalcollection.asme.org/data/Journals/JMDEDB/27927/022006jmd3.jpeg">
Consider the population of final gen(P) and the Pareto Frontier(R).
Find the distances between the first point of P and first point of R(d<sub>f</sub>) and last point of P and last point of R(d<sub>l</sub>)
Find the distance between all points and their nearest neighbor d<sub>i</sub>
Then:
<img width=300 src="https://raw.githubusercontent.com/txt/ase16/master/img/spreadcalc.png">
If all data is maximally spread, then all distances d<sub>i</sub> are near mean d
which would make Δ=0 ish.
Note that less the spread of each point to its neighbor, the better
since this means the optimiser is offering options across more of the frontier.
End of explanation
def igd(obtained, ideals):
Compute the IGD for a
set of solutions
:param obtained: Obtained pareto front
:param ideals: Ideal pareto front
:return:
# TODO 6: Compute the value of IGD using the definition defined in the previous cell.
igd_val = sum([closest(ideal, obtained)[0] for ideal in ideals]) / len(ideals)
return igd_val
# igd_val = 0
# return igd_val
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(igd(tests[5][0], ref))
print(igd(tests[10][0], ref))
print(igd(tests[50][0], ref))
import sk
sk = reload(sk)
def format_for_sk(problem, data, measure):
Convert the experiment data into the format
required for sk.py and computet the desired
'measure' for all the data.
gens = data.keys()
reps = len(data[gens[0]])
measured = {gen:["gens_%d"%gen] for gen in gens}
for i in range(reps):
ref_args = [data[gen][i] for gen in gens]
ref = make_reference(problem, *ref_args)
for gen in gens:
measured[gen].append(measure(data[gen][i], ref))
return measured
def report(problem, tests, measure):
measured = format_for_sk(problem, tests, measure).values()
sk.rdivDemo(measured)
print("*** IGD ***")
report(problem, tests, igd)
print("\n*** Spread ***")
report(problem, tests, spread)
Explanation: IGD = inter-generational distance; i.e. how good are you compared to the best known?
Find a reference set (the best possible solutions)
For each optimizer
For each item in its final Pareto frontier
Find the nearest item in the reference set and compute the distance to it.
Take the mean of all the distances. This is IGD for the optimizer
Note that the less the mean IGD, the better the optimizer since
this means its solutions are closest to the best of the best.
End of explanation |
13,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Prepare the data
Step2: Build the artificial neural-network
Step3: Single layer forward propagation step
$$\boldsymbol{Z}^{[l]} = \boldsymbol{W}^{[l]} \cdot \boldsymbol{A}^{[l-1]} + \boldsymbol{b}^{[l]}$$
$$\boldsymbol{A}^{[l]} = g^{[l]}(\boldsymbol{Z}^{[l]})$$
Step4: Figure
Step5: For each $l = L, L-1, \ldots, 2$
Step6: Train the artificial neural-network model | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: <a href="https://colab.research.google.com/github/marxav/hello-world-python/blob/master/ann_101_numpy_step_by_step.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Import Python Librairies
End of explanation
X_train = np.array([1.0,])
Y_train = np.array([-2.0,])
Explanation: Prepare the data
End of explanation
ANN_ARCHITECTURE = [
{"input_dim": 1, "output_dim": 2, "activation": "relu"},
{"input_dim": 2, "output_dim": 2, "activation": "relu"},
{"input_dim": 2, "output_dim": 1, "activation": "none"},
]
PSEUDO_RANDOM_PARAM_VALUES = {
'W1': np.array([[ 0.01],
[-0.03]]),
'b1': np.array([[ 0.02],
[-0.04]]),
'W2': np.array([[ 0.05, -0.06 ],
[-0.07, 0.08]]),
'b2': np.array([[ 0.09],
[-0.10]]),
'W3': np.array([[-0.11, -0.12]]),
'b3': np.array([[-0.13]])
}
def relu(Z):
return np.maximum(0,Z)
def relu_backward(dA, Z):
dZ = np.array(dA, copy = True)
dZ[Z <= 0] = 0;
return dZ;
Explanation: Build the artificial neural-network
End of explanation
def single_layer_forward_propagation(A_prev, W_curr, b_curr, activation):
# calculation of the input value for the activation function
Z_curr = np.dot(W_curr, A_prev) + b_curr
# selection of activation function
if activation == "none":
return Z_curr, Z_curr
elif activation == "relu":
activation_func = relu
else:
raise Exception('Non-supported activation function')
# return of calculated activation A and the intermediate Z matrix
return activation_func(Z_curr), Z_curr
def full_forward_propagation(X, params_values, ann_architecture):
# creating a temporary memory to store the information needed for a backward step
memory = {}
# X vector is the activation for layer 0
A_curr = X
# iteration over network layers
for idx, layer in enumerate(ann_architecture):
# we number network layers starting from 1
layer_idx = idx + 1
# transfer the activation from the previous iteration
A_prev = A_curr
# extraction of the activation function for the current layer
activ_function_curr = layer["activation"]
# extraction of W for the current layer
W_curr = params_values["W" + str(layer_idx)]
# extraction of b for the current layer
b_curr = params_values["b" + str(layer_idx)]
# calculation of activation for the current layer
A_curr, Z_curr = single_layer_forward_propagation(A_prev, W_curr, b_curr, activ_function_curr)
# saving calculated values in the memory
memory["A" + str(idx)] = A_prev
memory["Z" + str(layer_idx)] = Z_curr
# return of prediction vector and a dictionary containing intermediate values
return A_curr, memory
def get_cost_value(Ŷ, Y):
# this cost function works for 1-dimension only
# to do: use a quadratic function instead
cost = Ŷ - Y
return np.squeeze(cost)
Explanation: Single layer forward propagation step
$$\boldsymbol{Z}^{[l]} = \boldsymbol{W}^{[l]} \cdot \boldsymbol{A}^{[l-1]} + \boldsymbol{b}^{[l]}$$
$$\boldsymbol{A}^{[l]} = g^{[l]}(\boldsymbol{Z}^{[l]})$$
End of explanation
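# Illustrative toy call (not in the original notebook), reusing the W1/b1 values defined above:
# a single ReLU layer maps the 1-d input A_prev=[[1.0]] to A=[[0.03], [0.]] with Z=[[0.03], [-0.07]].
toy_A_prev = np.array([[1.0]])
toy_W1 = np.array([[0.01], [-0.03]])
toy_b1 = np.array([[0.02], [-0.04]])
single_layer_forward_propagation(toy_A_prev, toy_W1, toy_b1, "relu")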
def single_layer_backward_propagation(dA_curr, W_curr, b_curr, Z_curr, A_prev, activation, layer, debug=False):
# end of BP1 or BP2
if activation == "none": # i.e. no σ in the layer
dZ_curr = dA_curr
else: # i.e. σ in the layer
if activation == "relu":
backward_activation_func = relu_backward
else:
raise Exception('activation function not supported.')
# calculation of the activation function derivative
dZ_curr = backward_activation_func(dA_curr, Z_curr)
if debug:
print('Step_4: layer',layer,'dZ=', dZ_curr.tolist())
# BP3: derivative of the matrix W
dW_curr = np.dot(dZ_curr, A_prev.T) # BP3
if debug:
# tolist() allows printing a numpy array on a single debug line
print('Step_4: layer',layer,'dW=dZ.A_prev.T=', dZ_curr.tolist(), '.', A_prev.T.tolist())
print(' dW=', dW_curr.tolist())
# BP4: derivative of the vector b
db_curr = np.sum(dZ_curr, axis=1, keepdims=True) # BP4
if debug:
print('Step_4: layer',layer,'db=', db_curr.tolist())
    # beginning of BP2: error (a.k.a. delta) at the output of matrix A_prev
    # but without taking into account the derivative of the activation function
# which will be done after, in the other layer (cf. "end of BP2")
dA_prev = np.dot(W_curr.T, dZ_curr)
if debug:
print('Step_4: layer',layer,'dA_prev=W.T.dZ=', W_curr.T.tolist(), '.', dZ_curr.tolist())
print(' dA_prev=', dA_prev.tolist())
return dA_prev, dW_curr, db_curr
def full_backward_propagation(Ŷ, cost, memory, params_values, ann_architecture, debug=False):
grads_values = {}
# number of examples
m = Ŷ.shape[1]
# initiation of gradient descent algorithm
    # i.e. compute ∇C (beginning of BP1)
dA_prev = cost.reshape(Ŷ.shape)
for layer_idx_prev, layer in reversed(list(enumerate(ann_architecture))):
# we number network layers from 1
layer_idx_curr = layer_idx_prev + 1
# extraction of the activation function for the current layer
activ_function_curr = layer["activation"]
dA_curr = dA_prev
A_prev = memory["A" + str(layer_idx_prev)]
Z_curr = memory["Z" + str(layer_idx_curr)]
W_curr = params_values["W" + str(layer_idx_curr)]
b_curr = params_values["b" + str(layer_idx_curr)]
dA_prev, dW_curr, db_curr = single_layer_backward_propagation(
dA_curr, W_curr, b_curr, Z_curr, A_prev, activ_function_curr, layer_idx_curr, debug)
grads_values["dW" + str(layer_idx_curr)] = dW_curr
grads_values["db" + str(layer_idx_curr)] = db_curr
return grads_values
Explanation: Figure: The four main formula of backpropagation at each layer. For more detail refer to http://neuralnetworksanddeeplearning.com/chap2.html
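In this notebook's notation, the four formulas that the BP1-BP4 comments in the code above refer to are (approximately):
$$\delta^{[L]} = \nabla_{A} C \odot g'(\boldsymbol{Z}^{[L]}) \quad \text{(BP1)}$$
$$\delta^{[l]} = \left( (\boldsymbol{W}^{[l+1]})^{T} \delta^{[l+1]} \right) \odot g'(\boldsymbol{Z}^{[l]}) \quad \text{(BP2)}$$
$$\frac{\partial C}{\partial \boldsymbol{W}^{[l]}} = \delta^{[l]} \, (\boldsymbol{A}^{[l-1]})^{T} \quad \text{(BP3)}$$
$$\frac{\partial C}{\partial \boldsymbol{b}^{[l]}} = \delta^{[l]} \quad \text{(BP4)}$$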
End of explanation
def update(params_values, grads_values, ann_architecture, learning_rate, m):
# iteration over network layers
for layer_idx, layer in enumerate(ann_architecture, 1):
params_values["W" + str(layer_idx)] -= learning_rate * grads_values["dW" + str(layer_idx)] / m
params_values["b" + str(layer_idx)] -= learning_rate * grads_values["db" + str(layer_idx)] / m
return params_values;
def train(X, Y, ann_architecture, params_values, learning_rate, debug=False, callback=None):
# initiation of neural net parameters
# initiation of lists storing the history
# of metrics calculated during the learning process
cost_history = []
# performing calculations for subsequent iterations
Ŷ, memory = full_forward_propagation(X, params_values, ann_architecture)
if debug:
print('Step_2: memory=%s', memory)
print('Step_2: Ŷ=', Ŷ)
# calculating metrics and saving them in history (just for future information)
cost = get_cost_value(Ŷ, Y)
if debug:
print('Step_3: cost=%.5f' % cost)
cost_history.append(cost)
# step backward - calculating gradient
grads_values = full_backward_propagation(Ŷ, cost, memory, params_values, ann_architecture, debug)
#print('grads_values:',grads_values)
# updating model state
m = X.shape[0] # m is number of samples in the batch
params_values = update(params_values, grads_values, ann_architecture, learning_rate, m)
if debug:
print('Step_5: params_values=', params_values)
return params_values
X_train = X_train.reshape(X_train.shape[0], 1)
Y_train = Y_train.reshape(Y_train.shape[0], 1)
Explanation: For each $l = L, L-1, \ldots, 2$:
* update the weights according to the rule $w^l \rightarrow w^l-\frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$
* update the biases according to the rule $b^l \rightarrow b^l-\frac{\eta}{m} \sum_x \delta^{x,l}$
End of explanation
debug = True
# Training
ann_architecture = ANN_ARCHITECTURE
param_values = PSEUDO_RANDOM_PARAM_VALUES.copy()
if debug:
print('X_train:', X_train)
print('Y_train:', Y_train)
print('ann_architecture:', ANN_ARCHITECTURE)
# implementation of the stochastic gradient descent
EPOCHS = 2
for epoch in range(EPOCHS):
if debug:
print('##### EPOCH %d #####' % epoch)
print('Step_0: param_values:', param_values)
samples_per_batch = 1
for i in range(int(X_train.shape[0]/samples_per_batch)):
si = i * samples_per_batch
sj = (i + 1) * samples_per_batch
if debug:
print('Step_1: X_train[%d,%d]=%s' % (si, sj, X_train[si:sj]))
learning_rate = 0.01
param_values = train(
np.transpose(X_train[si:sj]),
np.transpose(Y_train[si:sj]),
ann_architecture,
param_values,
learning_rate,
debug)
Explanation: Train the artificial neural-network model
End of explanation |
13,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some Useful Functions
Import the LArray library
Step1: with total
Add totals to one or several axes
Step2: See with_total for more details and examples.
where
The where function can be used to apply some computation depending on a condition
Step3: See where for more details and examples.
clip
Set all data between a certain range
Step4: See clip for more details and examples.
divnot0
Replace division by 0 by 0
Step5: See divnot0 for more details and examples.
ratio
The ratio (rationot0) method returns an array with all values divided by the sum of values along given axes
Step6: See ratio and rationot0 for more details and examples.
percents
Step7: See percent for more details and examples.
diff
The diff method calculates the n-th order discrete difference along a given axis.
The first order difference is given by out[n+1] = in[n+1] - in[n] along the given axis.
Step8: See diff for more details and examples.
growth_rate
The growth_rate method calculates the growth along a given axis.
It is roughly equivalent to a.diff(axis, d, label) / a[axis.i[
Step9: See growth_rate for more details and examples.
shift
The shift method drops first label of an axis and shifts all subsequent labels | Python Code:
from larray import *
# load 'demography_eurostat' dataset
demography_eurostat = load_example_data('demography_eurostat')
# extract the 'population' array from the dataset
population = demography_eurostat.population
population
Explanation: Some Useful Functions
Import the LArray library:
End of explanation
population.with_total('gender', label='Total')
Explanation: with total
Add totals to one or several axes:
End of explanation
# where(condition, value if true, value if false)
where(population < population.mean('time'), -population, population)
Explanation: See with_total for more details and examples.
where
The where function can be used to apply some computation depending on a condition:
End of explanation
# values below 10 millions are set to 10 millions
population.clip(minval=10**7)
# values above 40 millions are set to 40 millions
population.clip(maxval=4*10**7)
# values below 10 millions are set to 10 millions and
# values above 40 millions are set to 40 millions
population.clip(10**7, 4*10**7)
# Using vectors to define the lower and upper bounds
lower_bound = sequence(population.time, initial=5_500_000, inc=50_000)
upper_bound = sequence(population.time, 41_000_000, inc=100_000)
print(lower_bound, '\n')
print(upper_bound, '\n')
population.clip(lower_bound, upper_bound)
Explanation: See where for more details and examples.
clip
Set all data between a certain range:
End of explanation
divisor = ones(population.axes, dtype=int)
divisor['Male'] = 0
divisor
population / divisor
# we use astype(int) since the divnot0 method
# returns a float array in this case while
# we want an integer array
population.divnot0(divisor).astype(int)
Explanation: See clip for more details and examples.
divnot0
Replace division by 0 by 0:
End of explanation
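For intuition, divnot0 produces the same result as masking the zero divisor with the where function shown earlier — a small sketch (it recomputes the division, so NumPy may emit a harmless divide-by-zero warning along the way):
# same values as population.divnot0(divisor).astype(int) above
where(divisor == 0, 0, population / divisor).astype(int)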
population.ratio('gender')
# which is equivalent to
population / population.sum('gender')
Explanation: See divnot0 for more details and examples.
ratio
The ratio (rationot0) method returns an array with all values divided by the sum of values along given axes:
End of explanation
# or, if you want the previous ratios in percents
population.percent('gender')
Explanation: See ratio and rationot0 for more details and examples.
percents
End of explanation
# calculates 'diff[year+1] = population[year+1] - population[year]'
population.diff('time')
# calculates 'diff[year+2] = population[year+2] - population[year]'
population.diff('time', d=2)
# calculates 'diff[year] = population[year+1] - population[year]'
population.diff('time', label='lower')
Explanation: See percent for more details and examples.
diff
The diff method calculates the n-th order discrete difference along a given axis.
The first order difference is given by out[n+1] = in[n+1] - in[n] along the given axis.
End of explanation
population.growth_rate('time')
Explanation: See diff for more details and examples.
growth_rate
The growth_rate method calculates the growth along a given axis.
It is roughly equivalent to a.diff(axis, d, label) / a[axis.i[:-d]]:
End of explanation
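As a quick sanity check of the formula quoted above, the same growth figures can be rebuilt from diff — a sketch that labels both operands by the starting year so the axes line up:
# (a[t+1] - a[t]) / a[t], indexed by the lower year
population.diff('time', label='lower') / population[population.time.i[:-1]]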
population.shift('time')
# when shift is applied on an (increasing) time axis,
# it effectively brings "past" data into the future
population_shifted = population.shift('time')
stack({'population_shifted_2014': population_shifted[2014], 'population_2013': population[2013]}, 'array')
Explanation: See growth_rate for more details and examples.
shift
The shift method drops first label of an axis and shifts all subsequent labels
End of explanation |
13,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook, we experiment with the optimal histogram algorithm. We will implement a simple version based on recursion and you will do the hard job of implementing a dynamic programming-based version.
References
Step1: Now, try to understand how the algorithm works -- feel free to modify the code to output more if you need. Specifically,
Observe and understand how the recursion works (set DEBUG = 2)
Observe and understand how many sub-problems are being solved again and again (set DEBUG = 1), especially when the input array is longer. | Python Code:
LARGE_NUM = 1000000000.0
EMPTY = -1
DEBUG = 2
#DEBUG = 1
import numpy as np
def sse(arr):
if len(arr) == 0: # deal with arr == []
return 0.0
avg = np.average(arr)
val = sum( [(x-avg)*(x-avg) for x in arr] )
return val
def calc_depth(b):
return 5 - b
def v_opt_rec(xx, b):
mincost = LARGE_NUM
n = len(xx)
# check boundary condition:
if n < b:
return LARGE_NUM + 1
elif b == 1:
return sse(xx)
else: # the general case
if DEBUG > 1:
#print('.. BEGIN: input = {!s:<30}, b = {}'.format(xx, b))
print('..{}BEGIN: input = {!s:<30}, b = {}'.format(' '*calc_depth(b), xx, b))
for t in range(n):
prefix = xx[0 : t+1]
suffix = xx[t+1 : ]
cost = sse(prefix) + v_opt_rec(suffix, b - 1)
mincost = min(mincost, cost)
if DEBUG > 0:
#print('.. END: input = {!s:<32}, b = {}, mincost = {}'.format(xx, b, mincost))
print('..{}END: input = {!s:<32}, b = {}, mincost = {}'.format(' '*calc_depth(b), xx, b, mincost))
return mincost
Explanation: Introduction
In this notebook, we experiment with the optimal histogram algorithm. We will implement a simple version based on recursion and you will do the hard job of implementing a dynamic programming-based version.
References:
* H. V. Jagadish, Nick Koudas, S. Muthukrishnan, Viswanath Poosala, Kenneth C. Sevcik, Torsten Suel: Optimal Histograms with Quality Guarantees. VLDB 1998: 275-286. (url: http://engineering.nyu.edu/~suel/papers/vopt.pdf)
* Dynamic Programming (wikipedia): https://en.wikipedia.org/wiki/Dynamic_programming
End of explanation
x = [7, 9, 13, 5]
b = 3
c = v_opt_rec(x, b)
print('optimal cost = {}'.format(c))
x = [1, 3, 9, 13, 17]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
x = [3, 1, 18, 9, 13, 17]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
x = [1, 2, 3, 4, 5, 6]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
Explanation: Now, try to understand how the algorithm works -- feel free to modify the code to output more if you need. Specifically,
Observe and understand how the recursion works (set DEBUG = 2)
Observe and understand how many sub-problems are being solved again and again (set DEBUG = 1), especially when the input array is longer.
End of explanation |
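Since the runs above show the same sub-problems being solved again and again, a natural next step is to cache results by (start position, number of buckets). Below is a minimal memoized sketch (top-down caching only; the bottom-up dynamic-programming table is still left for you as the exercise). It reuses the sse() helper and LARGE_NUM defined earlier.
from functools import lru_cache

def v_opt_memo(xx, b):
    xx = tuple(xx)  # make the input hashable / immutable

    @lru_cache(maxsize=None)
    def solve(start, buckets):
        n = len(xx) - start
        if n < buckets:
            return LARGE_NUM + 1
        if buckets == 1:
            return sse(xx[start:])
        best = LARGE_NUM
        for t in range(start, len(xx)):
            # prefix xx[start:t+1] goes into one bucket, recurse on the rest
            best = min(best, sse(xx[start:t + 1]) + solve(t + 1, buckets - 1))
        return best

    return solve(0, b)

print(v_opt_memo([3, 1, 18, 9, 13, 17], 4))  # should match v_opt_rec on the same input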
13,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate Region of Interests (ROI) labeled arrays for simple shapes
This example notebook explains the use of the analysis module "skbeam/core/roi": https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py
Step1: Easily switch between interactive and static matplotlib plots
Step2: Draw annual (ring-shaped) regions of interest
Step3: Test when there is same spacing between rings
Step4: Test when there is different spacing between rings
Step5: Test when there is no spacing between rings
Step6: Generate a ROI of Segmented Rings
Step7: find the inner and outer radius of each ring
Step8: Generate a ROI of Pies
Step9: Rectangle region of interests.
Step10: Generate Bar ROI's
Step11: Create Horizontal bars and Vertical bars
Step12: Create Box ROI's
Step13: Plot bar rois, box rois and rectangle rois | Python Code:
import skbeam.core.roi as roi
import skbeam.core.correlation as corr
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
import xray_vision.mpl_plotting as mpl_plot
Explanation: Generate Region of Interests (ROI) labeled arrays for simple shapes
This example notebook explain the use of analysis module "skbeam/core/roi" https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py
End of explanation
interactive_mode = False
import matplotlib as mpl
if interactive_mode:
%matplotlib notebook
else:
%matplotlib inline
backend = mpl.get_backend()
cmap='viridis'
Explanation: Easily switch between interactive and static matplotlib plots
End of explanation
center = (100., 100.) # center of the rings
# Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (200, 205)
first_q = 10.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 7 # number of Q rings
# step or spacing, spacing between rings
one_step_q = 5.0 # one spacing between rings
step_q = [2.5, 3.0, 5.8] # differnt spacing between rings
Explanation: Draw annual (ring-shaped) regions of interest
End of explanation
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=one_step_q,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Same spacing between rings")
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
Explanation: Test when there is same spacing between rings
End of explanation
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=step_q,
num_rings=4)
print("edges when there is different spacing between rings", edges)
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Different spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
Explanation: Test when there is different spacing between rings
End of explanation
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("There is no spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
Explanation: Test when there is no spacing between rings
End of explanation
center = (75, 75) # center of the rings
#Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (150, 140)
first_q = 5.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 4 # number of rings
slicing = 4 # number of pie slices or list of angles in radians
spacing = 4 # margin between rings, 0 by default
Explanation: Generate a ROI of Segmented Rings
End of explanation
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=spacing,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Segmented Rings")
axes.set_xlim(38, 120)
axes.set_ylim(38, 120)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
Explanation: find the inner and outer radius of each ring
End of explanation
first_q = 0
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=50, num_rings=1)
edges
slicing = 10 # number of pie slices or list of angles in radians
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Pies")
axes.set_xlim(20, 140)
axes.set_ylim(20, 140)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
Explanation: Generate a ROI of Pies
End of explanation
# Image shape which is used to determine the maximum extent of output pixel coordinates
shape = (15, 26)
# coordinates of the upper-left corner and width and height of each rectangle
roi_data = np.array(([2, 2, 6, 3], [6, 7, 8, 5], [8, 18, 5, 10]),
dtype=np.int64)
#Elements not inside any ROI are zero; elements inside each ROI are 1, 2, 3, corresponding
# to the order they are specified in coords.
label_array = roi.rectangles(roi_data, shape)
roi_inds, pixel_list = roi.extract_label_indices(label_array)
Explanation: Rectangle region of interests.
End of explanation
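The flattened labels extracted above can be aggregated directly; as a quick sketch, this counts how many pixels fall in each rectangle (assuming roi_inds holds the ROI label of every labeled pixel, as the call above suggests):
# pixel count per labeled rectangle (label 0 = background is skipped)
pixels_per_rect = np.bincount(roi_inds)[1:]
print(pixels_per_rect)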
shape = (20, 20)
edges = [[3, 4], [5, 7], [12, 15]]
edges
Explanation: Generate Bar ROI's
End of explanation
h_label_array = roi.bar(edges, (20, 25)) # Horizontal Bars
v_label_array = roi.bar(edges, (20, 20), horizontal=False) # Vertical Bars
Explanation: Create Horizontal bars and Vertical bars
End of explanation
b_label_array = roi.box((20, 20), edges)
Explanation: Create Box ROI's
End of explanation
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
axes[1, 0].set_title("Horizontal Bars")
im = mpl_plot.show_label_array(axes[1, 0], h_label_array, cmap)
axes[0, 1].set_title("Vertical Bars")
im = mpl_plot.show_label_array(axes[0, 1], v_label_array, cmap)
axes[1, 1].set_title("Box Rois")
im = mpl_plot.show_label_array(axes[1, 1], b_label_array, cmap)
axes[0, 0].set_title("Rectangle Rois")
im = mpl_plot.show_label_array(axes[0, 0], label_array, cmap)
plt.show()
import skbeam
print(skbeam.__version__)
Explanation: Plot bar rois, box rois and rectangle rois
End of explanation |
13,311 | Given the following text description, write Python code to implement the functionality described.
Description:
Count of m-digit integers that are divisible by an integer n.
Return the count of m-digit numbers having n as a divisor: build the largest m-digit number, build the largest (m - 1)-digit number, and take the difference of their integer quotients by n. Driver code follows.
| Python Code:
def findCount(m, n):
    # largest m-digit number (10**m - 1), built digit by digit
    num1 = 0
    for i in range(0, m):
        num1 = (num1 * 10) + 9
    # largest (m - 1)-digit number
    num2 = 0
    for i in range(0, m - 1):
        num2 = (num2 * 10) + 9
    # use integer division: int(num1 / n - num2 / n) can be off by one
    # because of floating point, e.g. for m = 2, n = 7
    return num1 // n - num2 // n
m = 2 ; n = 6
print(findCount(m , n ) )
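A tiny brute-force cross-check of the formula (sketch), counting the multiples of n directly over the m-digit range:
def brute_count(m, n):
    lo, hi = 10 ** (m - 1), 10 ** m - 1
    return sum(1 for x in range(lo, hi + 1) if x % n == 0)

print(brute_count(m, n))  # should match findCount(m, n)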
|
13,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview of Plotly for Python
Victoria Gregory
4/1/2016
What is Plotly?
plotly.js
Step1: Getting started
Easy to install
Step2: The following code will make a simple line and scatter plot
Step3: Figure objects store data like a Python dictionary.
Step4: Can save a static image as well
Step5: Histograms
Step6: Distplots
Similar to seaborn.distplot. Plot a histogram, kernel density or normal curve, and a rug plot all together.
Step7: 2D Contour Plot
Step8: 3D Surface Plot
Plot the function: $f(x,y) = A \cos(\pi x y) e^{-(x^2+y^2)/2}$
Step9: Matplotlib Conversion | Python Code:
import plotly.tools as tls
tls.embed('https://plot.ly/~AnnaG/1/nfl-defensive-player-size-2013-season/')
tls.embed('https://plot.ly/~chris/7378/relative-number-of-311-complaints-by-city/')
tls.embed('https://plot.ly/~empet/2922/a-scoreboard-for-republican-candidates-as-of-august-17-2015-annotated-heatmap/')
tls.embed('https://plot.ly/~vgregory757/2/_2014-us-city-populations-click-legend-to-toggle-traces/')
Explanation: Overview of Plotly for Python
Victoria Gregory
4/1/2016
What is Plotly?
plotly.js: online JavaScript graphing library
Today I'll talk about its Python client
Both plotly.js and the Python library are free and open-source
Similar libraries for Julia, R, and Matlab
What can I do with Plotly?
Useful for data visualization and fully interactive graphics
Standard graphics interface across languages
Easily shareable online
20 types of charts, including statistical plots, 3D charts, and maps
Complete list here
Just a few examples...
End of explanation
# (*) Tools to communicate with Plotly's server
import plotly.plotly as py
# (*) Useful Python/Plotly tools
import plotly.tools as tls
# (*) Graph objects to piece together your Plotly plots
import plotly.graph_objs as go
Explanation: Getting started
Easy to install: pip install plotly
How to save and view files?
Can work offline and save as .html files to open on web browser
Jupyter notebook
Upload to online account for easy sharing: import statement automatically signs you in
How It Works
Graph objects
Same structure as native Python dictionaries and lists
Defined as new classes
Every Plotly plot type has its own graph object, i.e., Scatter, Bar, Histogram
All information in a Plotly plot is contained in a Figure object, which contains
a Data object: stores data and style options, i.e., setting the line color
a Layout object: for aesthetic features outside the plotting area, i.e., setting the title
trace: refers to a set of data meant to be plotted as a whole (like an $x$ and $y$ pairing)
Interactivity is automatic!
Line/Scatter Plots
The following import statements load the three main modules:
End of explanation
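The examples below send figures to the Plotly cloud through py.iplot; if you prefer the fully local workflow mentioned above, the legacy offline mode is a drop-in alternative — a sketch, assuming a plotly release that still ships the plotly.offline helpers:
from plotly.offline import init_notebook_mode, iplot, plot
init_notebook_mode(connected=True)          # enable inline rendering in the notebook
# iplot(fig)                                # draw a figure inline instead of py.iplot(fig)
# plot(fig, filename='scatter-mode.html')   # or write a standalone .html file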
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N)+5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N)-5
# (1.1) Make a 1st Scatter object
trace0 = go.Scatter(
x = random_x,
y = random_y0,
mode = 'markers',
name = '$\mu = 5$',
hoverinfo='x+y' # choosing what to show on hover
)
# (1.2) Make a 2nd Scatter object
trace1 = go.Scatter(
x = random_x,
y = random_y1,
mode = 'lines+markers',
name = '$\mu = 0$',
hoverinfo='x+y'
)
# (1.3) Make a 3rd Scatter object
trace2 = go.Scatter(
x = random_x,
y = random_y2,
mode = 'lines',
name = '$\mu = -5$',
hoverinfo='x+y'
)
# (2) Make Data object
# Data is list-like, must use [ ]
data = go.Data([trace0, trace1, trace2])
# (3) Make Layout object (Layout is dict-like)
layout = go.Layout(title='$\\text{Some scatter objects distributed as } \
\mathcal{N}(\mu,1)$',
xaxis=dict(title='x-axis label'),
yaxis=dict(title='y-axis label'),
showlegend=True)
# (4) Make Figure object (Figure is dict-like)
fig = go.Figure(data=data, layout=layout)
print(fig) # print the figure object in notebook
Explanation: The following code will make a simple line and scatter plot:
End of explanation
# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='scatter-mode')
Explanation: Figure objects store data like a Python dictionary.
End of explanation
py.image.save_as(fig, filename='scatter-mode.png')
Explanation: Can save a static image as well:
End of explanation
# (1) Generate some random numbers
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
# (2.1) Create the first Histogram object
trace1 = go.Histogram(
x=x0,
histnorm='count',
name='control',
autobinx=False,
xbins=dict(
start=-3.2,
end=2.8,
size=0.2
),
marker=dict(
color='fuchsia',
line=dict(
color='grey',
width=0
)
),
opacity=0.75
)
# (2.2) Create the second Histogram object
trace2 = go.Histogram(
x=x1,
name='experimental',
autobinx=False,
xbins=dict(
start=-1.8,
end=4.2,
size=0.2
),
marker=dict(
color='rgb(255, 217, 102)'
),
opacity=0.75
)
# (3) Create Data object
data = [trace1, trace2]
# (4) Create Layout object
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
barmode='overlay',
bargap=0.25,
bargroupgap=0.3,
showlegend=True
)
fig = go.Figure(data=data, layout=layout)
# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='histogram_example')
Explanation: Histograms
End of explanation
from plotly.tools import FigureFactory as FF
# Add histogram data
x1 = np.random.randn(200)-2
x2 = np.random.randn(200)
x3 = np.random.randn(200)+2
x4 = np.random.randn(200)+4
# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']
# Create distplot with custom bin_size
fig = FF.create_distplot(hist_data, group_labels, bin_size=.2)
# Plot!
py.iplot(fig, filename='Distplot with Multiple Datasets', \
validate=False)
Explanation: Distplots
Similar to seaborn.distplot. Plot a histogram, kernel density or normal curve, and a rug plot all together.
End of explanation
x = np.random.randn(1000)
y = np.random.randn(1000)
py.iplot([go.Histogram2dContour(x=x, y=y, \
contours=go.Contours(coloring='fill')), \
go.Scatter(x=x, y=y, mode='markers', \
marker=go.Marker(color='white', size=3, opacity=0.3))])
Explanation: 2D Contour Plot
End of explanation
# Define the function to be plotted
def fxy(x, y):
A = 1 # choose a maximum amplitude
return A*(np.cos(np.pi*x*y))**2 * np.exp(-(x**2+y**2)/2.)
# Choose length of square domain, make row and column vectors
L = 4
x = y = np.arange(-L/2., L/2., 0.1) # use a mesh spacing of 0.1
yt = y[:, np.newaxis] # (!) make column vector
# Get surface coordinates!
z = fxy(x, yt)
trace1 = go.Surface(
z=z, # link the fxy 2d numpy array
x=x, # link 1d numpy array of x coords
y=y # link 1d numpy array of y coords
)
# Package the trace dictionary into a data object
data = go.Data([trace1])
# Dictionary of style options for all axes
axis = dict(
showbackground=True, # (!) show axis background
backgroundcolor="rgb(204, 204, 204)", # set background color to grey
gridcolor="rgb(255, 255, 255)", # set grid line color
zerolinecolor="rgb(255, 255, 255)", # set zero grid line color
)
# Make a layout object
layout = go.Layout(
title='$f(x,y) = A \cos(\pi x y) e^{-(x^2+y^2)/2}$', # set plot title
scene=go.Scene( # (!) axes are part of a 'scene' in 3d plots
xaxis=go.XAxis(axis), # set x-axis style
yaxis=go.YAxis(axis), # set y-axis style
zaxis=go.ZAxis(axis) # set z-axis style
)
)
# Make a figure object
fig = go.Figure(data=data, layout=layout)
# (@) Send to Plotly and show in notebook
py.iplot(fig, filename='surface')
Explanation: 3D Surface Plot
Plot the function: $f(x,y) = A \cos(\pi x y) e^{-(x^2+y^2)/2}$
End of explanation
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
n = 50
x, y, z, s, ew = np.random.rand(5, n)
c, ec = np.random.rand(2, n, 4)
area_scale, width_scale = 500, 5
fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=c,
s=np.square(s)*area_scale,
edgecolor=ec,
linewidth=ew*width_scale)
ax.grid()
py.iplot_mpl(fig)
Explanation: Matplotlib Conversion
End of explanation |
13,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Idea
Using the vmstat command line utility to quickly determine the root cause of performance problems.
Step1: Data Input
In this version, we use a helper library that I've built to read in data sources into pandas' DataFrame.
Step2: Data Selection
Step3: Visualization | Python Code:
%less ../dataset/vmstat_loadtest.log
Explanation: Idea
Using the vmstat command line utility to quickly determine the root cause of performance problems.
End of explanation
from ozapfdis.linux import vmstat
stats = vmstat.read_logfile("../dataset/vmstat_loadtest.log")
stats.head()
Explanation: Data Input
In this version, we use a helper library that I've built to read in data sources into pandas' DataFrame.
End of explanation
cpu_data = stats.iloc[:, -5:]
cpu_data.head()
Explanation: Data Selection
End of explanation
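If you prefer selecting by name rather than by position, the same five CPU columns can usually be picked explicitly — a sketch assuming the parser keeps vmstat's standard column names (us, sy, id, wa, st):
cpu_data = stats[["us", "sy", "id", "wa", "st"]]
cpu_data.head()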
%matplotlib inline
cpu_data.plot.area();
Explanation: Visualization
End of explanation |
13,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Pandas library
Instead of the plain list or numpy.array data structures that hold only numbers, it is often more convenient to store our measurement or simulation data in the DataFrame class of the pandas library, where besides the numbers we can also label the rows and columns, and different columns may hold different data types.
You can think of a DataFrame as an ordinary Excel sheet, since part of the class's functions closely resembles what is known from spreadsheet programs. Its use may look complicated at first sight, but it is definitely worth the effort.
The usual import
Step1: Reading from a file
Most of the time we read comma- or tab-separated values (which can be set with the sep keyword if necessary) from plain text files. Here we have to specify whether our table contains a header and whether the rows have names (an index).
The default setting tries to guess whether there is a header, and numbers the rows automatically.
Step2: We would rather read the very first column as the row names, as in the Excel-like case.
Step3: Now we switch the header parsing off. This way the first row is placed in front of the values with an unknown (NaN) row label, and the columns are numbered from 0.
Step4: If we originally forgot to set the column for the row names, we can still do it afterwards with the .set_index(column_name) function. By default the return value of set_index() is the DataFrame indexed by the new column.
Step5: We could also have done this by immediately overwriting the DataFrame called nevsor in place (inplace)
Step6: Accessing data
pandas makes the values stored in a DataFrame accessible primarily through the header and the row names.
If we write a column name as a string in square brackets after the name of the DataFrame, we get the column back.
Step7: If we want to get several columns back, we put their names into a list of strings inside the square brackets after the DataFrame.
Step8: If we want to access one element of a column
Step9: Careful, we always wrote the name of the column first! If we want to get a row back, we have to use the ix object.
Step10: If you would like to index a DataFrame with numbers just like an array, iloc offers that possibility. Let's repeat the previous accesses with iloc!
Step11: Moreover, we can convert the inside of the DataFrame into a numpy array and apply the methods learned earlier to it
Step12: Accessing the column names and the row names
Let's print the names of the columns of our table!
Step13: Let's print the names of the rows of our table!
Step14: We may need to get the above lists back as real Python lists.
Step15: Simple row and column operations
It is also easy to compute various aggregated values on a DataFrame with a few built-in functions.
For example, here is the sum of the numbers per column
Step16: What should we do if we want this per row? Let's change the "axis" of the summation! The previous case was the default axis=0, which performs the operation column by column. We sum only the columns that contain grades.
Step17: Let's count how many elements there are in the columns and in the rows!
Step18: Of course, we could have done this the same way as with an array
Step19: Further ideas for built-in functions
Step20: We saw that we got a True/False value for every row. Now we put the above expression inside the []
Step21: But we can give other conditions as well, for example to whom Eszter gave a grade better than 3.
Step22: We can chain two conditions together; in that case we use the & and | operators instead of and and or, because the latter cannot compare two sequences element by element. The conditions have to be put in parentheses, otherwise we get an error.
Based on this, those to whom Eszter gave better than a three and who are older than 20 years
Step23: Sorting
We may need to sort our table by one of its columns. In this case we use the sort_values(by="column_name") function, where we can specify whether we want the sorting in increasing (ascending=True) or decreasing (ascending=False) order.
The return value of the function is the sorted table.
Step24: If we want the rows to be stored sorted in the original DataFrame, we have to switch on the inplace=True parameter, which overwrites the DataFrame after sorting.
Step25: Of course, we could have achieved this with an ordinary assignment as well.
Step26: If we want to sort by the index of the DataFrame, the sort_index() function helps (here, too, we can choose to sort in place with inplace=True)
Step27: Adding a new row/column, deletion
If we want to add a new row to the table, we have to hand the .loc["new_row_index"] variable a list whose length equals the number of columns.
Step28: For a new column we proceed similarly, but loc is not needed, because that indexes the rows.
Step29: If we want to delete a row, we can do it with the drop function.
Step30: Rarely, we may also want to delete columns from our table
Step31: Grouping
We can group the DataFrame by the values of a column and then perform operations on the groups.
For example, the next line shows the maximum grades of those who took (1) and did not take (0) the advanced-level final exam.
Note that the maximum of the Nem (gender) column is "lány" (girl) in both groups, since it comes later in the alphabet than "fiú" (boy).
Step32: We can also group by two columns at once; in that case we have to pass a list to groupby. Here not only the Emelt column but also the Nem column is part of the table's index; this is called multi-level indexing. We will not need it later in the course, it is only shown here as an example.
Step33: Processing final-exam data
Below we will examine the final-exam statistics of the past few years. In many respects this example nicely illustrates problems that can come up in real database analyses.
Such problems include, for example, handling missing data, or treating not entirely compatible databases in a uniform way.
The website above provides the final-exam data in the delimiter-separated (comma separated value, csv for short) format introduced earlier; here the delimiter is the semicolon.
Since the file also contains accented characters, we have to adjust the character encoding when reading it. The values in the rows are separated by the ";" character, and the row names are in column 0.
Step34: Let's list what column names there are in the file!
Step35: We can see that the year (év) and level (szint) specify which year and which level the exam refers to. We can also tell whether the student took the exam in autumn or spring (időszak), and the school and the type of training are recorded in the statistics as well. In addition, detailed written and oral scores, the total score and the overall percentage are also included in the data.
It is worth printing the first few rows as an example to see what we are dealing with. We print it transposed now so that it fits on the screen.
Step36: Here the grouping and aggregation operations learned above really pay off; in what follows we pose a few example questions and answer them.
What was the average grade of the advanced-level exams in each year?
For this we first select the rows containing advanced-level exams, then group them by year. We select the "érdemjegy" (grade) column, which we average at the end. Because of the grouping, the average is computed per year.
Step37: Did the boys or the girls write better-scoring intermediate-level exams in 2015?
The "\" character is only needed so that we do not write overly long lines for Python, because those would be hard to read. If you put such a character at the end of a line, the interpreter reads it as if the next line were spliced in at the place of the "\" character.
First we select the rows containing the 2015 intermediate-level exams with boolean indexing. Several conditions on the rows at once can be given with the & operator instead of the and operator, and the conditions have to be parenthesized so that the interpreter reads them correctly.
After that we group by the gender of the examinee and then take the average of the total scores.
Step38: Let's count, per school type, how many exam takers showed up and how many did not!
Now we grouped by two columns at once; the names of the columns forming the basis of the grouping have to be given to groupby as a list. Then, selecting an arbitrary column (e.g. év), we can have the rows counted per group with count.
Step39: Plotting
A great strength of pandas is that quite acceptable figures can be produced from DataFrames with very short syntax. For this, pandas uses the matplotlib library, which we therefore also have to import.
Step40: The parameters of the figure (title, ylabel, etc.) can be set in the way familiar from matplotlib.
First let's increase the default size of the axis labels
Step41: Let's also look at the yearly average of the advanced-level exams in a figure! For this we only have to append the word "plot" to the end of the command above. To draw a bar chart we can tell the plot function with the kind keyword that kind="bar". The labels of the x axis will be the indices of the DataFrame, but we have to name the y axis ourselves.
Step42: We can look at a pie chart of how many people took the exam at intermediate and advanced level from each school type between 2011 and 2015. For this we make two subplots, similarly to what we learned last time.
What did the autopct keyword do?
Step43: ☠ For advanced readers
Concatenation, join
We can concatenate two DataFrames below each other if we give the pd.concat() function a list of DataFrames with the same number of columns.
In our example we write the same DataFrame below itself twice.
Step44: We create another DataFrame and put a column named Énekkar (choir) into it, with the indices of the previous DataFrame.
Step45: How could we attach this column to the previous table? We can do it with concat, but we have to change the direction in which we join the two tables (axis=1 means we want to write next to the columns).
Step46: We could also build the Cartesian product of the two tables, i.e. pair every row of one with every row of the other, and then keep only those rows of this set in which the indices are equal.
(For those who know the join command of the SQL language, this is the so-called inner join.)
Step47: Contains
If we have to check through a column of strings to see whether some sequence of characters is present
Step48: Apply
If we want to apply an arbitrary function to every single element of a column, we can do it with apply. Inside apply we write the function we want to apply.
First, for example, we create a function that adds one to a number
Step49: After that we age everyone by one year.
Step50: apply can also be run row by row with the axis=1 keyword. For example, let's write by hand the function that computes the average of the two grades received.
Step51: Pivot
We can also create a new table from the original based on aggregated results. To have something visible as a result, let's add Károly to our table.
Step52: Now we look at how many people there are in each category by gender and by advanced-level exam. Since the grouping creates a multi-level index for us, we remove it with the reset_index(inplace=True) command.
Step53: Let's create a table in which the rows are the genders, the columns are whether someone took an advanced-level exam, and the values are the counts of the categories | Python Code:
import pandas as pd
Explanation: A Pandas könyvtár
Mérési vagy szimulációs adatainkat gyakran célszerűbb a puszta számokat tartalmazó list vagy numpy.array adatszerkezet helyett a pandas könyvtár DataFrame osztályában tárolunk, ahol a számok mellett feliratozni is tudjuk a sorokat, illetve oszlopokat, illetve a különböző oszlopokban különböző adattípusok lehetnek.
A DataFrame-re gondolhatunk úgy, mint egy szokásos Excel-táblázatra, az osztályhoz tartozó függvények egy része ugyanis nagyon hasonlít a táblázatkezelőkből ismertekére. Használata első ránézésre talán bonyolultnak tűnhet, de mindenképpen megéri a befektetett energiát.
A szokásos import:
End of explanation
pd.read_csv("data/kisnevsor.csv")
Explanation: Beolvasás fájlból
Legtöbbször vesszővel vagy tabulátorral elválasztott értékeket (amit a sep kulcsszóval állíthatunk be, ha szükséges) olvasunk be egyszerű szövegfájlokból. Itt be kell állítanunk, hogy a táblázatunk tartalmaz-e fejlécet (header), illetve hogy a soroknak van-e nevük (index).
Az alapértelmezett beállítás megpróbálja kitalálni, hogy van-e fejléc, és a sorokat magától megszámozza.
End of explanation
pd.read_csv("data/kisnevsor.csv",index_col=0)
Explanation: Mi viszont azt szeretnénk, ha a legelső oszlopot a sorok neveiként olvasnánk be, mint az exceles esetben.
End of explanation
pd.read_csv("data/kisnevsor.csv",header=None,index_col=0)
Explanation: Most a fejlécbeolvasót kikapcsoljuk. Így az első sor ismeretlen (NaN) sorfelirattal bekerül az értékek elé, az oszlopok pedig 0-tól számozódnak.
End of explanation
nevsor=pd.read_csv("data/kisnevsor.csv")
ujnevsor=nevsor.set_index("Unnamed: 0")
Explanation: Ha elfelejtettük beállítani az oszlopok neveit eredetileg, utólag is megtehetjük azt a .set_index(oszlopnev) függvény segítségével. A set_index() függvény visszatérési értéke alapesetben az új oszloppal indexelt DataFrame.
End of explanation
nevsor.set_index("Unnamed: 0",inplace=True)
Explanation: Ezt megtehettük volna úgy is, hogy a nevsor nevű DataFrame-et rögtön felülírjuk helyben (inplace):
End of explanation
df=pd.read_csv("data/kisnevsor.csv",index_col=0)
print(df["Eszter"])
Explanation: Adatok elérése
A pandas a DataFrame-ben tárolt értékeket elsősorban a fejléccel és a sorok neveivel teszi elérhetővé.
Ha egy oszlop nevét stringként szögletes zárójelekben írjuk a DataFrame neve mögé, visszakapjuk az oszlopot.
End of explanation
df[["Eszter","Nem","Kor"]]
Explanation: Ha több oszlopot is vissza szeretnénk kapni, akkor azokat egy stringeket tartalmazó listában írjuk a DataFrame mögötti szögletes zárójelbe.
End of explanation
print(df["Eszter"]["Zita"])
Explanation: Ha egy oszlopnak szeretnénk elérni az egyik elemét:
End of explanation
df.ix["Zita"]
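# note: .ix was removed in newer pandas releases; df.loc["Zita"] is the modern equivalent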
Explanation: Vigyázat, mindig először az oszlop nevét írtuk! Ha egy sort szeretnénk visszakapni, az ix objektumot kell használunk.
End of explanation
df.iloc[:,0] # az első (0.) oszlop
df.iloc[0,0] # az első sor első eleme
Explanation: Aki szeretné ugyanúgy számokkal indexelni a DataFrame-et, mint egy array-t, annak erre az iloc biztosít lehetőséget. Nézzük meg az előző eléréseket iloc-kal!
End of explanation
df.as_matrix()
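# note: as_matrix() was removed in newer pandas releases; df.values or df.to_numpy() returns the same NumPy array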
Explanation: Sőt, a DataFrame belsejét átalakíthatjuk numpy array-jé, és alkalmazhatjuk rá a korábban tanult módszereket :-)
End of explanation
df.columns
Explanation: Az oszlopnevek és a sornevek elérése
Írassuk ki a táblázatunk oszlopainak a nevét!
End of explanation
df.index
Explanation: Írassuk ki a táblázatunk sorainak a nevét!
End of explanation
df.columns.tolist()
list(df.columns)
Explanation: Szükség lehet rá, hogy a fenti listákat tényleg Python-féle list-ként kapjuk vissza.
End of explanation
df.sum()
Explanation: Egyszerű sor- és oszlopműveletek
A DataFrame-re is könnyű néhány beépített függvény segítségével különböző aggregált értékeket számolni.
Például álljon itt oszloponként a számok összege:
End of explanation
df[["Eszter","Orsi"]].sum(axis=1)
Explanation: Mit tegyünk, ha ezt soronként szeretnénk visszakapni? Változtassuk meg az összegzés "tengelyét" (axis)! Az előző eset ugyanis az alapértelmezett axis=0 volt, ami oszloponként végzi a műveletet. Csak a jegyeket tartalmazó oszlopokat összegezzük.
End of explanation
df.count()
df.count(axis=1)
Explanation: Számoltassuk meg, hány elem van az oszlopokban, illetve a sorokban!
End of explanation
df.shape
Explanation: Ezt persze az array-hez hasonlóan is megtehettük volna:
End of explanation
df["Nem"]=="lány"
Explanation: További ötletek beépített függvényekre: mean, median, min, max, std.
Boolean indexing
Nagyon gyakran előfordul, hogy a táblázatunkból csak bizonyos feltételeknek megfelelő sorokat szeretnénk látni. Ha a táblázat sorainak számával megegyező hosszú igaz/hamis sorozatot adunk meg a DataFrame mögötti szögletes zárójelben, akkor csak az igaz elemeket fogjuk visszakapni visszatérési értékként.
Először nézzük meg, mi történik, ha megkérdezzük, hogy egy oszlop egyenlő-e egy értékkel:
End of explanation
df[df["Nem"]=="lány"]
Explanation: Láttuk, hogy minden sorhoz kaptunk egy igaz/hamis értéket. Most a fenti kifejezést beírjuk a []-be:
End of explanation
df[df["Eszter"]>3]
Explanation: De más feltételt is megadhatunk, például hogy kinek adott Eszter 3-asnál jobb jegyet.
End of explanation
df[(df["Eszter"]>3) & (df["Kor"]>20)]
Explanation: Két feltételt összefűzhetünk egymáshoz, ilyenkor a & és a | operátorokat használjuk and és or helyett, mert azok nem tudnak két sorozatot elemenként összehasonlítani. A feltételeket zárójelbe kell tenni, különben hibát kapunk.
Ezek alapján az, akinek Eszter hármasnál jobbat adott, és idősebb 20 évesnél:
End of explanation
df.sort_values(by="Kor",ascending=False)
Explanation: Sorba rendezés
Szükségünk lehet arra, hogy a táblázatunkat sorba rendezzük valamelyik oszlop szerint. Ilyenkor a sort_values(by="oszlop_neve") függvényt használjuk, melynek megadhatjuk, hogy növekvő (ascending=True), vagy csökkenő (ascending=False) sorrendben szeretnénk-e a rendezést.
A függvény visszatérési értéke a rendezett táblázat.
End of explanation
df.sort_values(by="Kor",ascending=False,inplace=True)
Explanation: Ha azt szeretnénk, hogy az eredeti DataFrame-ben rendezve tárolódjanak el a sorok, be kell kapcsolnunk az inplace=True paramétert, ami felülírja a DataFrame-et a rendezés után.
End of explanation
df=df.sort_values(by="Kor",ascending=False)
Explanation: Persze, ezt elérhettük volna szokásos értékadással is.
End of explanation
df.sort_index(inplace=True)
Explanation: Ha a DataFrame indexe szerint szeretnénk sorba rendezni, akkor a sort_index() függvény segít (itt is választhatjuk, hogy helyben szeretnénk-e a rendezést az inplace=True segítségével):
End of explanation
df.loc["Dávid"]=[5,5,"fiú",20]
df
Explanation: Új sor/oszlop hozzáadása, törlés
Ha új sort szeretnénk hozzáadni a táblázathoz, akkor a .loc["Új_sor_indexe"] változónak egy, az oszlopok számával megegyező hosszúságú listát kell odaadnunk.
End of explanation
df["Emelt"]=[0,0,1,1,0]
df
Explanation: Ha új oszlopot, akkor hasonlóan járunk el, de nem szükséges a loc, mert az a sorokat indexeli.
End of explanation
df.drop("Bálint",inplace=True)
df
Explanation: Ha sort szeretnénk törölni, a drop függvénnyel tehetjük meg.
End of explanation
del df["Kor"] #ritkán
df["Kor"]=[22,19,20,20]
Explanation: Elég ritkán, de szeretnénk a táblázatunkból oszlopokat törölni:
End of explanation
df.groupby("Emelt").max()
Explanation: Csoportosítás
Egy oszlop értékei szerint csoportosíthatjuk a DataFrame-et, és utána a csoportokon végezhetünk műveleteket.
Például az emelt szintű érettségit tevők (1), illtve nem tevők (0) maximum jegyeit láthatjuk a következő sorban.
Figyeljük meg, hogy a Nem oszlop maximális értéke mindkét csoportban a "lány", hiszen az hátrébb áll az abc-ben, mint a fiú.
End of explanation
df.groupby(["Emelt","Nem"]).max()
Explanation: Egyszerre két oszlop szerint is csoportosíthatunk, ilyenkor listát kell a groupby-nak átadnunk. Itt már nem csak az Emelt oszlop, hanem a Nem oszlop is a táblázat indexének a része, ezt hívjuk többszintű indexelésnek. A továbbiakban az órán erre nem lesz szükség, csak a példa kedvéért áll itt.
End of explanation
erettsegi_adat=pd.read_csv("data/erettsegi.csv.gz",encoding="utf8",sep=";",index_col=0)
Explanation: Érettségi adatok feldolgozása
Az alábbiakban az elmúlt pár év érettségi statisztikai adatait fogjuk megvizsgálni. Ez a példa sok szempontból jól illusztrál olyan problémákat, amelyek valós adatbázis-elemzések kapcsán felmerülhetnek.
Ilyen például a hiányzó adatok kezelése, vagy a nem egészen kompatibilis adatbázisok egységes kezelése.
Az érettségi adatokat a fenti honlap az előzőekben megismert elválasztóval tagolt tagolt (comma separated value, röviden csv) formátumban teszi elérhetővé, itt az elválasztójel a pontosvessző.
Mivel ékezetes karakterek is vannak a fájlban, át kell állítanunk a karakterkódolást is a beolvasásnál. Az értékeket a sorokban a ";" karakter választja el, a sorok nevei a 0. oszlopban vannak.
End of explanation
print("\n".join(erettsegi_adat.columns.tolist()))
Explanation: Listáztassuk ki, milyen oszlopnevek vannak a fájlban!
End of explanation
erettsegi_adat.head().transpose()
Explanation: Látható, hogy az év, szint megadják, hogy melyik évben, melyik szintű érettségiről van szó. Azt is megállapíthatjuk, hogy ősszel vagy tavasszal (időszak) írta-e a diák az érettségit, az iskolájáról és a képzési típusról is rögzítve van a statisztika. Emellett részletes írásbeli és szóbeli, illetve összpontszám, összesített százalék is szerepel az adatok között.
Érdemes az első néhány sort kiíratni példaként, hogy lássuk, mivel is van dolgunk. Most transzponálva írjuk ki, hogy elférjen a képernyőre.
End of explanation
erettsegi_adat[erettsegi_adat["szint"]=="E"].groupby("év")["érdemjegy"].mean()
Explanation: Itt aztán már tényleg nagy hasznát vesszük a fentebb tanult csoportosítási, aggregálási műveleteknek, a következőkben felteszünk néhány példakérdést, és megválaszoljuk azt.
Melyik évben mennyi volt az emelt szintű érettségik jegyeinek átlaga?
Ehhez először kiválasztjuk az emelt szintű érettségit tartalmazó sorokat, majd azokat év szerint csoportosítjuk. Kiválasztjuk az "érdemjegy" oszlopot, amit a végén átlagolunk. A csoportosítás miatt az átlag évenként kerül kiszámítása.
End of explanation
erettsegi_adat[
(erettsegi_adat["szint"]=="K") &\
(erettsegi_adat["év"]==2015)].\
groupby("vizsgázó neme")["össz pontszám"].mean()
Explanation: Vajon a fiúk vagy a lányok írtak jobb pontszámú középszintű érettségit 2015-ben?
A "\" jel csak azért kell, hogy ne írjunk túl hosszú sorokat a Pythonnak, mert az nehéz lenne elolvasni. Ha ilyen jelet teszel a sor végére, akkor az értelmező úgy olvassa, mintha a következő sor a "\" jel helyére lenne fűzve.
Először logikai indexeléssel kiválasztjuk a 2015-ös középszintű érettségiket tartalmazó sorokat. Több feltételt a sorokra egyszerre az and operátor helyett az & operátorral adhatunk meg, és a feltételeket zárójeleznünk kell, hogy jól olvassa az értelmező.
Ezek után csoportosítunk a vizsgázó neme szerint, majd vesszük az összpontszámok átlagát.
End of explanation
erettsegi_adat.groupby(["vizsgázó képzési típusa", "vizsgázó részvétele"])["év"].count()
Explanation: Számoljuk le, melyik iskolatípusban hány érettségiző jelent meg, illetve nem jelent meg!
Most egyszerre két oszlop szerint is csoportosítottunk, a csoportosítás alapját képező oszlopok nevét listaként kell megadni a groupby-nak. Utána egy tetszőleges oszlopot (pl. év) kiválasztva megszámláltathatjuk csoportonként a sorokat a count-tal.
End of explanation
%pylab inline
Explanation: Ábrázolás
A pandas nagy erőssége, hogy a DataFrame-ekből nagyon rövid szintaxissal lehet egészen elfogadható ábrákat készíteni. Ehhez a pandas a matplotlib könyvtárat használja, melyet emiatt be is kell importálnunk.
End of explanation
rcParams["font.size"]=15
Explanation: Az ábra paramétereit (title, ylabel stb.) a matplotlibben megszokott módon állíthatjuk be.
Elsőként növeljük meg alapértelmezetten a tengelyfeliratokat:
End of explanation
erettsegi_adat[erettsegi_adat["szint"]=="E"].groupby("év")["érdemjegy"].mean().plot(kind="bar", figsize=(12, 9))
ylabel("Emelt szint átlag")
ylim(0,5)
Explanation: Nézzük meg ábrán is az emelt szintű érettségik évenkénti átlagát! Ehhez csak a fenti parancs végére hozzá kell fűznünk a "plot" szócskát. Az oszlopdiagram rajzolásához megadhatjuk a plot függvénynek a kind kulcsszóval, hogy kind="bar". Az x tengely feliratai a DataFrame indexei lesznek, de az y tengelynek már mi kell, hogy nevet adjunk.
End of explanation
subplot(1,2,1)
erettsegi_adat[
(erettsegi_adat["vizsgázó részvétele"]=="megjelent") &\
(erettsegi_adat["szint"]=="K")]\
.groupby(["vizsgázó képzési típusa"])\
.size()\
.plot(kind="pie",autopct='%.1f',figsize=(20,10))
title("Középszint")
subplot(1,2,2)
erettsegi_adat[
(erettsegi_adat["vizsgázó részvétele"]=="megjelent") &\
(erettsegi_adat["szint"]=="E")]\
.groupby(["vizsgázó képzési típusa"])\
.size()\
.plot(kind="pie",autopct='%.1f',figsize=(20,10))
title("Emelt szint")
Explanation: Megnézhetjük kördiagramon, hogy melyik iskolatípusból hányan érettségiztek közép- és emelt szinten 2011 és 2015 között. Ehhez két alábrát készítünk a múltkor tanultakhoz hasonlóan.
Vajon mit csinált az autopct kulcsszó?
End of explanation
pd.concat([df,df])
Explanation: ☠ Haladóknak
Összefűzés, join
Két DataFrame-et összefűzhetünk egymás alá, ha a pd.concat() függvénynek egy ugyanannyi oszlopból álló DataFrame-eket tartalmazó listát adunk oda.
Példánkban kétszer egymás alá írjuk ugyanazt a DataFrame-et.
End of explanation
import numpy as np
df2=pd.DataFrame(np.array([[0,1,1,1]]).transpose(),columns=["Énekkar"],index=df.index)
df2
Explanation: Létrehozunk egy másik DataFrame-et, és egy Énekkar nevű oszlopot teszünk bele az előző DataFrame indexeivel.
End of explanation
pd.concat([df,df2],axis=1)
Explanation: Hogyan tudnánk ezt az oszlopot hozzáilleszteni az előző táblázathoz? Megtehetjük concat segítségével, de át kell állítanunk, hogy melyik irányban fűzzük össze a két táblázatot (axis=1 jelenti, hogy az oszlopok mellé szeretnénk írni).
End of explanation
pd.merge(df,df2,left_index=True,right_index=True)
Explanation: Megtehetnénk azt is, hogy elkészítjük a két táblázat Descartes-szorzatát, azaz az egyikből minden sort összepárosítunk a másik minden sorával, majd kiválogatjuk ebből a sorhalmazból csak azokat a sorokat, amelyekben az indexek megegyeznek.
(Aki ismeri az SQL-nyelv join parancsát, ez az ún. inner join.)
End of explanation
df["Nem"].str.contains("án")
Explanation: Contains
Ha egy stringeket tartalmazó oszlopban végig kell néznünk, hogy megvan-e valamilyen karaktersorozat:
End of explanation
def hozzaad(x):
return x+1
Explanation: Apply
Ha egy tetszőleges függvényt szeretnénk egy oszlop minden egyes elemére alkalmazni, megtehetjük az apply segítségével. Az apply belsejébe írjuk a függvényt, amit alkalmazni szeretnénk.
Elsőként például készítünk egy függvényt, ami egy számhoz hozzáad egyet:
End of explanation
df["Kor"].apply(hozzaad)
Explanation: Ezek után mindenkit öregítünk egy évvel.
End of explanation
def atlag(sor):
return (sor["Eszter"]+sor["Orsi"])/2
df.apply(atlag,axis=1)
Explanation: Az apply-t is lehet soronként is végeztetni az axis=1 kulcsszó segítségével. Például írjuk meg kézzel azt a függvényt, ami a két kapott jegy átlagát kiszámolja.
End of explanation
df.loc["Károly"]=[4,5,"fiú",1,20]
Explanation: Pivot
Új táblázatot is készíthetünk összesített eredmények alapján az eredetiből. Hogy legyen valami látható eredményünk, adjuk még hozzá Károlyt a táblázatunkhoz.
End of explanation
p=df.groupby(["Nem","Emelt"]).count()
p.reset_index(inplace=True)
p
Explanation: Most megnézzünk nemenként és emelt szintű érettségi szerint, hogy melyik kategóriában hány ember van. Mivel a csoportosítás készít nekünk egy többszintű indexet, ezt kiiktatjuk a reset_index(inplace=True) parancs segítségével.
End of explanation
p.pivot_table(values="Eszter",columns="Emelt",index="Nem")
Explanation: Készítsünk egy táblázatot, melyben a sorok a nemek, az oszlopok, hogy tett-e valaki emelt szintű érettségit, és az értékek a kategóriák leszámlálásai:
End of explanation |
13,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with BigQuery ML
BigQuery ML enables users to create and execute machine learning models in BigQuery using SQL queries. The goal is to democratize machine learning by enabling SQL practitioners to build models using their existing tools and to increase development speed by eliminating the need for data movement.
In this tutorial, you use the Google Analytics sample dataset for BigQuery to create a model that predicts whether a website visitor will make a transaction. For information on the schema of the Analytics dataset, see BigQuery export schema in the Google Analytics Help Center.
Objectives
In this tutorial, you use
Step1: Next, you create a BigQuery dataset to store your ML model. Run the following to create your dataset
Step2: Create your model
Next, you create a logistic regression model using the Google Analytics sample
dataset for BigQuery. The model is used to predict whether a
website visitor will make a transaction. The standard SQL query uses a
CREATE MODEL statement to create and train the model. Standard SQL is the
default query syntax for the BigQuery python client library.
The BigQuery python client library provides a cell magic,
%%bigquery, which runs a SQL query and returns the results as a Pandas
DataFrame.
To run the CREATE MODEL query to create and train your model
Step3: The query takes several minutes to complete. After the first iteration is
complete, your model (sample_model) appears in the navigation panel of the
BigQuery web UI. Because the query uses a CREATE MODEL statement to create a
table, you do not see query results. The output is an empty DataFrame.
Get training statistics
To see the results of the model training, you can use the
ML.TRAINING_INFO
function, or you can view the statistics in the BigQuery web UI. This functionality
is not currently available in the BigQuery Classic web UI.
In this tutorial, you use the ML.TRAINING_INFO function.
A machine learning algorithm builds a model by examining many examples and
attempting to find a model that minimizes loss. This process is called empirical
risk minimization.
Loss is the penalty for a bad prediction — a number indicating
how bad the model's prediction was on a single example. If the model's
prediction is perfect, the loss is zero; otherwise, the loss is greater. The
goal of training a model is to find a set of weights that have low
loss, on average, across all examples.
To see the model training statistics that were generated when you ran the
CREATE MODEL query
Step4: Note
Step5: When the query is complete, the results appear below the query. The
results should look like the following
Step6: When the query is complete, the results appear below the query. The
results should look like the following. Because model training is not
deterministic, your results may differ.
In the next example, you try to predict the number of transactions each website
visitor will make. This query is identical to the previous query except for the
GROUP BY clause. Here the GROUP BY clause — GROUP BY fullVisitorId
— is used to group the results by visitor ID.
To run the query that predicts purchases per user
Step7: When the query is complete, the results appear below the query. The
results should look like the following | Python Code:
from google.cloud import bigquery
client = bigquery.Client(location="US")
Explanation: Getting started with BigQuery ML
BigQuery ML enables users to create and execute machine learning models in BigQuery using SQL queries. The goal is to democratize machine learning by enabling SQL practitioners to build models using their existing tools and to increase development speed by eliminating the need for data movement.
In this tutorial, you use the Google Analytics sample dataset for BigQuery to create a model that predicts whether a website visitor will make a transaction. For information on the schema of the Analytics dataset, see BigQuery export schema in the Google Analytics Help Center.
Objectives
In this tutorial, you use:
BigQuery ML to create a binary logistic regression model using the CREATE MODEL statement
The ML.EVALUATE function to evaluate the ML model
The ML.PREDICT function to make predictions using the ML model
Create your dataset
Enter the following code to import the BigQuery Python client library and initialize a client. The BigQuery client is used to send and receive messages from the BigQuery API.
End of explanation
dataset = client.create_dataset("bqml_tutorial")
Explanation: Next, you create a BigQuery dataset to store your ML model. Run the following to create your dataset:
End of explanation
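If the dataset may already exist (for example when re-running the notebook), the call can be made idempotent — a small sketch, assuming a google-cloud-bigquery release that supports the exists_ok flag:
dataset = client.create_dataset("bqml_tutorial", exists_ok=True)  # no error if it already exists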
%%bigquery
CREATE OR REPLACE MODEL `bqml_tutorial.sample_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
IF(totals.transactions IS NULL, 0, 1) AS label,
IFNULL(device.operatingSystem, "") AS os,
device.isMobile AS is_mobile,
IFNULL(geoNetwork.country, "") AS country,
IFNULL(totals.pageviews, 0) AS pageviews
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE
_TABLE_SUFFIX BETWEEN '20160801' AND '20170630'
Explanation: Create your model
Next, you create a logistic regression model using the Google Analytics sample
dataset for BigQuery. The model is used to predict whether a
website visitor will make a transaction. The standard SQL query uses a
CREATE MODEL statement to create and train the model. Standard SQL is the
default query syntax for the BigQuery python client library.
The BigQuery python client library provides a cell magic,
%%bigquery, which runs a SQL query and returns the results as a Pandas
DataFrame.
To run the CREATE MODEL query to create and train your model:
End of explanation
%%bigquery
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `bqml_tutorial.sample_model`)
Explanation: The query takes several minutes to complete. After the first iteration is
complete, your model (sample_model) appears in the navigation panel of the
BigQuery web UI. Because the query uses a CREATE MODEL statement to create a
table, you do not see query results. The output is an empty DataFrame.
Get training statistics
To see the results of the model training, you can use the
ML.TRAINING_INFO
function, or you can view the statistics in the BigQuery web UI. This functionality
is not currently available in the BigQuery Classic web UI.
In this tutorial, you use the ML.TRAINING_INFO function.
A machine learning algorithm builds a model by examining many examples and
attempting to find a model that minimizes loss. This process is called empirical
risk minimization.
Loss is the penalty for a bad prediction — a number indicating
how bad the model's prediction was on a single example. If the model's
prediction is perfect, the loss is zero; otherwise, the loss is greater. The
goal of training a model is to find a set of weights that have low
loss, on average, across all examples.
To see the model training statistics that were generated when you ran the
CREATE MODEL query:
End of explanation
%%bigquery
SELECT
*
FROM ML.EVALUATE(MODEL `bqml_tutorial.sample_model`, (
SELECT
IF(totals.transactions IS NULL, 0, 1) AS label,
IFNULL(device.operatingSystem, "") AS os,
device.isMobile AS is_mobile,
IFNULL(geoNetwork.country, "") AS country,
IFNULL(totals.pageviews, 0) AS pageviews
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE
_TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))
Explanation: Note: Typically, it is not a best practice to use a SELECT * query. Because the model output is a small table, this query does not process a large amount of data. As a result, the cost is minimal.
When the query is complete, the results appear below the query. The results should look like the following:
The loss column represents the loss metric calculated after the given iteration
on the training dataset. Since you performed a logistic regression, this column
is the log loss.
The eval_loss column is the same loss metric calculated on
the holdout dataset (data that is held back from training to validate the model).
For more details on the ML.TRAINING_INFO function, see the
BigQuery ML syntax reference.
Evaluate your model
After creating your model, you evaluate the performance of the classifier using
the ML.EVALUATE
function. You can also use the ML.ROC_CURVE
function for logistic regression specific metrics.
A class is one of a set of enumerated target values for a label. For
example, in this tutorial you are using a binary classification model that
detects transactions. The two classes are the values in the label column:
0 (no transaction) and 1 (transaction made).
To run the ML.EVALUATE query that evaluates the model:
End of explanation
%%bigquery
SELECT
country,
SUM(predicted_label) as total_predicted_purchases
FROM ML.PREDICT(MODEL `bqml_tutorial.sample_model`, (
SELECT
IFNULL(device.operatingSystem, "") AS os,
device.isMobile AS is_mobile,
IFNULL(totals.pageviews, 0) AS pageviews,
IFNULL(geoNetwork.country, "") AS country
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE
_TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))
GROUP BY country
ORDER BY total_predicted_purchases DESC
LIMIT 10
Explanation: When the query is complete, the results appear below the query. The
results should look like the following:
Because you performed a logistic regression, the results include the following
columns:
precision
recall
accuracy
f1_score
log_loss
roc_auc
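For reference, a minimal sketch of how the first few of these columns relate to each other (illustrative counts, not output from this model):
tp, fp, fn = 90, 10, 30                 # made-up confusion-matrix counts
precision = tp / (tp + fp)              # 0.9
recall = tp / (tp + fn)                 # 0.75
f1_score = 2 * precision * recall / (precision + recall)
print(precision, recall, f1_score)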
Use your model to predict outcomes
Now that you have evaluated your model, the next step is to use it to predict
outcomes. You use your model to predict the number of transactions made by
website visitors from each country. And you use it to predict purchases per user.
To run the query that uses the model to predict the number of transactions:
End of explanation
%%bigquery
SELECT
fullVisitorId,
SUM(predicted_label) as total_predicted_purchases
FROM ML.PREDICT(MODEL `bqml_tutorial.sample_model`, (
SELECT
IFNULL(device.operatingSystem, "") AS os,
device.isMobile AS is_mobile,
IFNULL(totals.pageviews, 0) AS pageviews,
IFNULL(geoNetwork.country, "") AS country,
fullVisitorId
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE
_TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))
GROUP BY fullVisitorId
ORDER BY total_predicted_purchases DESC
LIMIT 10
Explanation: When the query is complete, the results appear below the query. The
results should look like the following. Because model training is not
deterministic, your results may differ.
In the next example, you try to predict the number of transactions each website
visitor will make. This query is identical to the previous query except for the
GROUP BY clause. Here the GROUP BY clause — GROUP BY fullVisitorId
— is used to group the results by visitor ID.
To run the query that predicts purchases per user:
End of explanation
client.delete_dataset(dataset, delete_contents=True)
Explanation: When the query is complete, the results appear below the query. The
results should look like the following:
Cleaning up
To delete the resources created by this tutorial, execute the following code to delete the dataset and its contents:
End of explanation |
13,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 5.3: Reverts per page (setup and exploratory)
Step1: Load data
Step2: Number of reverts per page per bot pair
Group by language, page ID, and botpair_sorted
Grouping by these three columns creates a very simple and useful intersection for this metric. If there is only one revert for a language/page ID/botpair_sorted set, then the reverting bot's revert was for sure unreciprocated by the reverted bot. If there are two reverts, then the most likely outcome is that the reverting bot's revert was followed by a revert by the reverted bot, although this could also mean that the reverting bot reverted the reverted bot twice. Higher counts imply heavy back-and-forth reverts between two bots on a single page.
We count the number of reverts with the same language, page ID, and sorted botpair, then assign that value to reverts_per_page_botpair_sorted for every revert matching these three columns. Note that this initial analysis is conducted in 0-load-process-data.ipynb, but we have included it again for clarity.
Step3: Add reverts_per_page_botpair_sorted to df_all
Step4: Analysis
Number of reverts by revert_per_page_botpair_sorted, all languages, articles only
For example, 528,104 reverts were not reciprocated at all. 25,528 reverts were part of a two-bot revert chain on the same page in the same language lasting 2 reverts. 3,987 reverts were part of a two-bot revert chain in the same page in the same language lasting 3 reverts, and so on.
Step5: Number of reverts by revert_per_page_botpair_sorted, English only, articles only
Step6: Checking that the sum of the counts and the total number of reverts are the same
Step7: Finding pages with more than 500 reverts by/on the same bots
Step8: From a manual lookup
Step9: Median time to revert for a Mathbot-curated list
Step10: Runtime | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import glob
import datetime
import pickle
%matplotlib inline
start = datetime.datetime.now()
Explanation: Section 5.3: Reverts per page (setup and exploratory)
This is a data analysis script used to produce findings in the paper, which you can run based entirely off the files in this GitHub repository. This notebook produces part of the analysis for all languages, and the notebook 4-3-reverts-per-page-enwiki-plots is an independent replication of this analysis in R that contains plots for the English Wikipedia, which are included in the paper. Note that the R notebook cannot be run on mybinder due to memory requirements, while this one can be.
This entire notebook can be run from the beginning with Kernel -> Restart & Run All in the menu bar. It takes less than 1 minute to run on a laptop running a Core i5-2540M processor.
End of explanation
!unxz --keep --force ../../datasets/parsed_dataframes/df_all_2016.pickle.xz
!ls ../../datasets/parsed_dataframes/*.pickle
with open("../../datasets/parsed_dataframes/df_all_2016.pickle", "rb") as f:
df_all = pickle.load(f)
df_all.sample(2).transpose()
Explanation: Load data
End of explanation
groupby_lang_page_bps = df_all.groupby(["language", "rev_page", "botpair_sorted"])
df_groupby = pd.DataFrame(groupby_lang_page_bps['rev_id'].count()).reset_index().rename(columns={"rev_id":"reverts_per_page_botpair_sorted"})
df_groupby.sample(25)
Explanation: Number of reverts per page per bot pair
Group by language, page ID, and botpair_sorted
Grouping by these three columns creates a very simple and useful intersection for this metric. If there is only one revert for a language/page ID/botpair_sorted set, then the reverting bot's revert was for sure unreciprocated by the reverted bot. If there are two reverts, then the most likely outcome is that the reverting bot's revert was followed by a revert by the reverted bot, although this could also mean that the reverting bot reverted the reverted bot twice. Higher counts imply heavy back-and-forth reverts between two bots on a single page.
We count the number of reverts with the same language, page ID, and sorted botpair, then assign that value to reverts_per_page_botpair_sorted for every revert matching these three columns. Note that this initial analysis is conducted in 0-load-process-data.ipynb, but we have included it again for clarity.
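As an aside (a sketch, not used in the original pipeline), the same per-row count can be broadcast directly with transform, which avoids the separate drop-and-merge step that follows:
alt_counts = df_all.groupby(["language", "rev_page", "botpair_sorted"])['rev_id'].transform('count')
alt_counts.head()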
End of explanation
df_all = df_all.drop("reverts_per_page_botpair_sorted",1)
df_all = pd.merge(df_all, df_groupby, how='left',
left_on=["language", "rev_page", "botpair_sorted"],
right_on=["language", "rev_page", "botpair_sorted"])
Explanation: Add reverts_per_page_botpair_sorted to df_all
End of explanation
df_all.query("page_namespace == 0").reverts_per_page_botpair_sorted.value_counts().sort_index()
df_all.query("page_namespace == 0").reverts_per_page_botpair_sorted.value_counts().sum()
import matplotlib.ticker
sns.set(font_scale=1.5, style="whitegrid")
fig, ax = plt.subplots(figsize=[8,6])
df_all.query("page_namespace == 0").reverts_per_page_botpair_sorted.value_counts().sort_index().plot(kind='bar', ax=ax)
ax.set_yscale('log')
ax.set_ylim((pow(10,0),pow(10,6)))
ax.set_ylabel("Number of articles (log scale)")
ax.set_xlabel("Number of reverts on page between the same two bots")
ax.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter('%d'))
Explanation: Analysis
Number of reverts by revert_per_page_botpair_sorted, all languages, articles only
For example, 528,104 reverts were not reciprocated at all. 25,528 reverts were part of a two-bot revert chain on the same page in the same language lasting 2 reverts. 3,987 reverts were part of a two-bot revert chain in the same page in the same language lasting 3 reverts, and so on.
End of explanation
df_all.query("page_namespace == 0 and language=='en'").reverts_per_page_botpair_sorted.value_counts().sort_index()
sns.set(font_scale=1.5, style="whitegrid")
df_all.query("page_namespace == 0 and language == 'en'").reverts_per_page_botpair_sorted.value_counts().sort_index().plot(kind='bar')
Explanation: Number of reverts by revert_per_page_botpair_sorted, English only, articles only
End of explanation
df_all.query("page_namespace == 0 and language=='en'").reverts_per_page_botpair_sorted.value_counts().sum()
len(df_all.query("page_namespace == 0 and language=='en'"))
Explanation: Checking that the sum of the counts and the total number of reverts are the same
End of explanation
gb = df_all.query("reverts_per_page_botpair_sorted > 500").groupby(["language", "page_namespace", "rev_page", "botpair_sorted"])
gb['rev_id'].count()
Explanation: Finding pages with more than 500 reverts by/on the same bots
End of explanation
len(df_all.query("language == 'en' and rev_page == 4626266"))
len(df_all.query("language == 'en' and rev_page == 11238105"))
len(df_all.query("language == 'en' and rev_page == 5964327"))
Explanation: From a manual lookup:
page_id page_title
- 974956 Possibly_unfree_files
- 4626266 Administrator_intervention_against_vandalism/TB2
- 5964327 Suspected_copyright_violations
- 11005908 Tutorial/Editing/sandbox
- 11238105 Usernames_for_administrator_attention/Bot
How many total bot-bot reverts in these pages?
End of explanation
df_all.query("language == 'en' and rev_page == 5971841").groupby("botpair")['time_to_revert_days'].median()
Explanation: Median time to revert for a Mathbot-curated list
End of explanation
end = datetime.datetime.now()
time_to_run = end - start
minutes = int(time_to_run.seconds/60)
seconds = time_to_run.seconds % 60
print("Total runtime: ", minutes, "minutes, ", seconds, "seconds")
Explanation: Runtime
End of explanation |
13,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle Dogs and Cats Image Identification Problem
Achieved 98.9% accuracy - average of two test sets. Data taken from the 25k images of the Kaggle cats vs. dogs problem. 16k images were used for training. 3k images for validation, 3k each for two test sets. Each set was balanced, 50% dogs, 50% cats. In the future I may further divide the test sets so that a mean and standard deviation of test set accuracy could be calculated.
TODO
Step1: Get train, validation and 2 test data sets - data had previously been split by a Python script.
Validation set has variable images so that it can be doubled to produce a larger validation set. This can work since each replicated image is randomized in rotation, flip, skew, shift and so is in a sense a 'different' image.
Having two test data sets allows for some glimpse of the repeatability of the model on new data - in the future I may split these further so a standard deviation of accuracy on the test sets can be determined.
Step2: Set up base model - had success for this problem with the Xception model. It will not be retrained for the first training phase which will output the training for the added dense layers only.
Step3: Build the model.
Step4: Pre-train the added dense layers. Set workers to a reasonable number for the CPU. I have an 8 core, 16 thread, Ryzen 7. We could go higher on workers but this seemed enough. Note that this is set up to run Keras / TensorFlow with a GPU.
Step5: Set the base model to have the last few layers be trainable. Preserve most of the layers from the pre-trained model.
Step6: Train the model. Now training both the dense layers and last few of the base Xception model.
Step7: Score the model on two previously unseen data sets. | Python Code:
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
from keras.applications import Xception
from keras.preprocessing.image import ImageDataGenerator
from keras import models
from keras import layers
from keras import optimizers
import tensorflow as tf
Explanation: Kaggle Dogs and Cats Image Identification Problem
Achieved 98.9% accuracy - average of two test sets. Data taken from the 25k images of the Kaggle cats vs. dogs problem. 16k images were used for training. 3k images for validation, 3k each for two test sets. Each set was balanced, 50% dogs, 50% cats. In the future I may further divide the test sets so that a mean and standard deviation of test set accuracy could be calculated.
TODO: Plot history to look for overfitting, but with class and work this will need to wait.
Note: Image sizes are smaller than the default for the Xception base model. This is because my GPU memory could not handle a full-size Xception model.
Set up imports
End of explanation
base_dir = r'C:\Users\Vette\Desktop\Regis\#MSDS686 Deep Learning\cats_dogs'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
test2_dir = os.path.join(base_dir, 'test2')
batch_size = 20
seed = 321
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
test2_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir,
target_size=(240, 240),
batch_size=50,
class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(validation_dir,
target_size=(240, 240),
batch_size=50,
class_mode='binary')
test_generator = test_datagen.flow_from_directory(test_dir,
target_size=(240, 240),
batch_size=50,
class_mode='binary')
test2_generator = test2_datagen.flow_from_directory(test2_dir,
target_size=(240, 240),
batch_size=50,
class_mode='binary')
Explanation: Get train, validation and 2 test data sets - data had previously been split by a Python script.
Validation set has variable images so that it can be doubled to produce a larger validation set. This can work since each replicated image is randomized in rotation, flip, skew, shift and so is in a sense a 'different' image.
Having two test data sets allows for some glimpse of the repeatability of the model on new data - in the future I may split these further so a standard deviation of accuracy on the test sets can be determined.
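A quick sanity check on the generators defined above (an illustrative aside, not in the original run):
print(train_generator.class_indices)    # expected to be something like {'cats': 0, 'dogs': 1}
print(train_generator.samples, validation_generator.samples,
      test_generator.samples, test2_generator.samples)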
End of explanation
conv_base = Xception(weights='imagenet',
include_top=False,
input_shape=(240, 240, 3))
conv_base.summary()
conv_base.trainable = False
Explanation: Set up base model - had success for this problem with the Xception model. It will not be retrained for the first training phase which will output the training for the added dense layers only.
End of explanation
def build_model():
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
return model
Explanation: Build the model.
End of explanation
with tf.device('/gpu:0'):
np.random.seed(seed)
model = build_model()
print('Pre-train dense layers')
history = model.fit_generator(train_generator,
steps_per_epoch=160,
epochs=8,
validation_data=validation_generator,
validation_steps=30,
verbose=1,
workers=10)
Explanation: Pre-train the added dense layers. Set workers to a reasonable number for the CPU. I have an 8 core, 16 thread, Ryzen 7. We could go higher on workers but this seemed enough. Note that this is set up to run Keras / TensorFlow with a GPU.
End of explanation
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if 'block13' in layer.name:
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
Explanation: Set the base model to have the last few layers be trainable. Preserve most of the layers from the pre-trained model.
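A small check worth running after this cell (my addition, not in the original notebook) to confirm how much of Xception was actually unfrozen:
n_trainable = sum(1 for layer in conv_base.layers if layer.trainable)
print(n_trainable, 'of', len(conv_base.layers), 'base layers are trainable')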
End of explanation
with tf.device('/gpu:0'):
print('Train Model')
np.random.seed(seed)
model = build_model()
history = model.fit_generator(train_generator,
steps_per_epoch=320,
epochs=20,
validation_data=validation_generator,
validation_steps=60,
verbose=1,
initial_epoch=8,
workers=10)
Explanation: Train the model. Now training both the dense layers and last few of the base Xception model.
End of explanation
with tf.device('/gpu:0'):
scores = model.evaluate_generator(test_generator, workers=8)
print('#1 Loss, Accuracy: ', scores)
scores = model.evaluate_generator(test2_generator, workers=8)
print('#2 Loss, Accuracy: ', scores)
Explanation: Score the model on two previously unseen data sets.
End of explanation |
13,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsl Bash Tutorial
This tutorial will show you how to run Bash scripts as Parsl apps.
Load parsl
Import parsl, and check the module version. This tutorial requires version 0.2.0 or above.
Step1: Define resources
To execute parsl we need to first define a set of resources on which the apps can run. Here we use a pool of threads.
Step2: Defining Bash Apps
To demonstrate how to run apps written as Bash scripts, we use two mock science applications
Step3: Running Bash Apps
Now that we've defined an app, let's run 10 parallel instances of it using a for loop. Each run will write 100 random numbers, each between 0 and 99, to the output file.
In order to track files created by Bash apps, a list of data futures (one for each file in the outputs[] list) is made available as an attribute of the AppFuture returned upon calling the decorated app fn.
<App_Future> = App_Function(... , outputs=['x.txt', 'y.txt'...])
[<DataFuture> ... ] = <App_Future>.outputs
Step4: Handling Futures
The variable "simulated_results" contains a list of AppFutures, each corresponding to a running bash app.
Now let's print the status of the 10 jobs by checking if the app futures are done.
Note
Step5: Retrieving Results
Each of the Apps returns one DataFuture. Here we put all of these (data futures of file outputs) together into a list (simulation_outputs). This is done by iterating over each of the AppFutures and taking the first and only DataFuture in its outputs list.
Step6: Defining a Second Bash App
We now explore how Parsl can be used to block on results. Let's define another app, analyze(), that calls stats.sh to find the average of the numbers in a set of files.
Step7: Blocking on Results
We call analyze with the list of data futures as inputs. This will block until all the simulate runs have completed and the data futures have 'resolved'. Finally, we print the result when it is ready. | Python Code:
# Import Parsl
import parsl
from parsl import *
print(parsl.__version__) # The version should be v0.2.1+
Explanation: Parsl Bash Tutorial
This tutorial will show you how to run Bash scripts as Parsl apps.
Load parsl
Import parsl, and check the module version. This tutorial requires version 0.2.0 or above.
End of explanation
workers = ThreadPoolExecutor(max_workers=4)
# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.
dfk = DataFlowKernel(executors=[workers])
Explanation: Define resources
To execute parsl we need to first define a set of resources on which the apps can run. Here we use a pool of threads.
End of explanation
@App('bash', dfk)
def simulate(sim_steps=1, sim_range=100, sim_values=5, outputs=[], stdout=None, stderr=None):
# The bash app function requires that the bash script is returned from the function as a
# string. Positional and Keyword args to the fn() are formatted into the cmd_line string
# All arguments to the app function are made available at the time of string formatting a
# string assigned to cmd_line.
# Here we compose the command-line call to simulate.sh with keyword arguments to simulate()
# and redirect stdout to the first file listed in the outputs list.
return '''echo "sim_steps: {sim_steps}\nsim_range: {sim_range}\nsim_values: {sim_values}"
echo "Starting run at $(date)"
$PWD/bin/simulate.sh --timesteps {sim_steps} --range {sim_range} --nvalues {sim_values} > {outputs[0]}
echo "Done at $(date)"
ls
'''
Explanation: Defining Bash Apps
To demonstrate how to run apps written as Bash scripts, we use two mock science applications: simulate.sh and stats.sh. The simulate.sh script serves as a trivial proxy for any more complex scientific simulation application. It generates and prints a set of one or more random integers in the range [0-2^62) as controlled by its command line arguments. The stats.sh script serves as a trivial model of an "analysis" program. It reads N files each containing M integers and prints the average of all those numbers to stdout. Like simulate.sh it logs environmental information to stderr.
The following cell shows how apps can be composed from arbitrary Bash scripts. The simulate signature shows how variables can be passed to the Bash script (e.g., "sim_steps") as well as how standard Parsl parameters are managed (e.g., "stdout").
End of explanation
simulated_results = []
# Launch 10 parallel runs of simulate() and put the futures in a list
for sim_index in range(10):
sim_fut = simulate(sim_steps=1,
sim_range=100,
sim_values=100,
outputs = ['stdout.{0}.txt'.format(sim_index)],
stderr='stderr.{0}.txt'.format(sim_index))
simulated_results.extend([sim_fut])
Explanation: Running Bash Apps
Now that we've defined an app, let's run 10 parallel instances of it using a for loop. Each run will write 100 random numbers, each between 0 and 99, to the output file.
In order to track files created by Bash apps, a list of data futures (one for each file in the outputs[] list) is made available as an attribute of the AppFuture returned upon calling the decorated app fn.
<App_Future> = App_Function(... , outputs=['x.txt', 'y.txt'...])
[<DataFuture> ... ] = <App_Future>.outputs
End of explanation
print ([i.done() for i in simulated_results])
Explanation: Handling Futures
The variable "simulated_results" contains a list of AppFutures, each corresponding to a running bash app.
Now let's print the status of the 10 jobs by checking if the app futures are done.
Note: you can re-run this step until all the jobs complete (all status are True) or go on, as a later step will block until all the jobs are complete.
End of explanation
# Grab just the data futures for the output files from each simulation
simulation_outputs = [i.outputs[0] for i in simulated_results]
Explanation: Retrieving Results
Each of the Apps returns one DataFuture. Here we put all of these (data futures of file outputs) together into a list (simulation_outputs). This is done by iterating over each of the AppFutures and taking the first and only DataFuture in its outputs list.
End of explanation
@App('bash', dfk)
def analyze(inputs=[], stdout=None, stderr=None):
# Here we compose the commandline for stats.sh that take a list of filenames as arguments
# Since we want a space separated list, rather than a python list (e.g: ['x.txt', 'y.txt'])
# we create a string by joining the filenames of each item in the inputs list and using
# that string to format the cmd_line explicitly
input_files = ' '.join([i for i in inputs])
return '$PWD/bin/stats.sh {0}'.format(input_files)
Explanation: Defining a Second Bash App
We now explore how Parsl can be used to block on results. Let's define another app, analyze(), that calls stats.sh to find the average of the numbers in a set of files.
End of explanation
results = analyze(inputs=simulation_outputs,
stdout='analyze.out',
stderr='analyze.err')
results.result()
with open('analyze.out', 'r') as f:
print(f.read())
Explanation: Blocking on Results
We call analyze with the list of data futures as inputs. This will block until all the simulate runs have completed and the data futures have 'resolved'. Finally, we print the result when it is ready.
End of explanation |
13,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Standard Python/Pandas Opening
Step1: Read in CSV File from Downloads
Step2: Code Below to Extract Input Sample Headers for Github
prod_df_example = prod_df.head(0)
cats_df_example = cats_df.head(0)
sale_df_example = sale_df.head(0)
saletotesprod_df_example = saletotesprod_df.head(0)
prod_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/prod_df_example.csv")
cats_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/cats_df_example.csv")
sale_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/sale_df_example.csv")
saletotesprod_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/saletotes_prod_df_example.csv")
Below import not needed at this time.
top_30 = pd.read_csv('/home/saisons/Downloads/top_sellers30_srtd.csv',
dtype={'category_id'
Step3: below infos not needed at this time.
top_30.info()
Merge Product and Category DFs
this line not needed, was used to confirm category ID data fix
cats_df[cats_df.category_id == '196094447518203739']
Step4: prod_info1_fix = prod_info1.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info1_fix.info()
prod_info2_fix = prod_info2.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info2_fix.info()
prod_info3_fix = prod_info3.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info3_fix.info()
If NaN sorting out is required at a later date, enable 3 markdown lines above and add _fix to merge values below
Step5: If NaN sorting out is required at a later date, enable 3 markdown lines above and add _fix to merge values below | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Standard Python/Pandas Opening
End of explanation
prod_df = pd.read_csv('/home/saisons/Code/zazzle-product-analysis/inputs/cl_pr_lst.csv',
dtype={'category_id': np.str, 'product_id': np.str, 'num_of_products': np.int},
keep_default_na=False)
prod_df.head()
cats_df = pd.read_csv('/home/saisons/Code/zazzle-product-analysis/inputs/cl_cat_lst.csv',
dtype={'category_id': np.str, 'product_id': np.str},
keep_default_na=False)
cats_df.head()
sale_df = pd.read_csv('/home/saisons/Code/zazzle-product-analysis/inputs/cur_zsales.csv',
dtype={'category_id': np.str, 'product_id': np.str, 'num_of_products': np.int},#, 'royalty_usd': np.float},
keep_default_na=False)
sale_df.head()
saletotesprod_df = pd.read_csv('/home/saisons/Code/zazzle-product-analysis/inputs/zsales_totes.csv',
dtype={'category_id': np.str, 'product_id': np.str},
keep_default_na=False)
saletotesprod_df.head()
Explanation: Read in CSV File from Downloads
End of explanation
prod_df.info()
cats_df.info()
sale_df.info()
saletotesprod_df.info()
Explanation: Code Below to Extract Input Sample Headers for Github
prod_df_example = prod_df.head(0)
cats_df_example = cats_df.head(0)
sale_df_example = sale_df.head(0)
saletotesprod_df_example = saletotesprod_df.head(0)
prod_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/prod_df_example.csv")
cats_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/cats_df_example.csv")
sale_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/sale_df_example.csv")
saletotesprod_df_example.to_csv("/home/saisons/Code/zazzle-product-analysis/inputs/saletotes_prod_df_example.csv")
Below import not needed at this time.
top_30 = pd.read_csv('/home/saisons/Downloads/top_sellers30_srtd.csv',
dtype={'category_id': np.str})
Info Data for Review
Below lines # out as only needed to verify data integrity while building merges
End of explanation
prodjn1 = pd.merge(prod_df, cats_df, on='category_id', left_index=True, how='outer')
prodjn1.head()
prodjn1.info()
prodjn1.drop(['store_y'],inplace=True,axis=1)
prodjn1.drop(['category_string_y'],inplace=True,axis=1)
prodjn1.drop(['category_1_y'],inplace=True,axis=1)
prodjn1.drop(['category_2_y'],inplace=True,axis=1)
prodjn1.drop(['category_3_y'],inplace=True,axis=1)
prodjn1.drop(['category_4_y'],inplace=True,axis=1)
prodjn1.head()
prodjn1.info()
prodjn2 = pd.merge(prodjn1, saletotesprod_df, on='product_id', left_index=True, how='outer')
prodjn2.head()
prodjn2.info()
prod_info1 = prodjn2.groupby(
["product_id", "category_id"]
).num_sales.sum().reset_index().sort_values("num_sales", ascending=False)
prod_info1.head()
prod_info2 = prodjn2.groupby(
["product_id", "category_id"]
).quant_sold.sum().reset_index().sort_values("quant_sold", ascending=False)
prod_info2.head()
prod_info3 = prodjn2.groupby(
["product_id", "category_id"]
).royal_total.sum().reset_index().sort_values("royal_total", ascending=False)
prod_info3.head()
prod_info1.info()
prod_info2.info()
prod_info3.info()
Explanation: below infos not needed at this time.
top_30.info()
Merge Product and Category DFs
this line not needed, was used to confirm category ID data fix
cats_df[cats_df.category_id == '196094447518203739']
End of explanation
prod_info_jn1 = pd.merge(prod_info1, prod_info2, on='product_id', left_index=True, how='outer')
prod_info_jn1.drop(['category_id_y'],inplace=True,axis=1)
prod_info_jn1.head()
prod_info_jn1.info()
Explanation: prod_info1_fix = prod_info1.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info1_fix.info()
prod_info2_fix = prod_info2.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info2_fix.info()
prod_info3_fix = prod_info3.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
prod_info3_fix.info()
If NaN sorting out is required at a later date, enable 3 markdown lines above and add _fix to merge values below
End of explanation
prod_info_jn2 = pd.merge(prod_info_jn1, prod_info3, on='product_id', left_index=True, how='outer')
prod_info_jn2.drop(['category_id_x'],inplace=True,axis=1)
prod_info_jn2.head()
prod_info_jn2.info()
cat_info_1 = prodjn2.groupby(
["category_id"]
).royal_total.sum().reset_index().sort_values("royal_total", ascending=False)
cat_info_1.head()
cat_info_1.info()
cat_info_2 = prodjn2.groupby(
["category_id"]
).num_sales.sum().reset_index().sort_values("num_sales", ascending=False)
cat_info_2.head()
cat_info_2.info()
cat_info_3 = prodjn2.groupby(
["category_id"]
).quant_sold.sum().reset_index().sort_values("quant_sold", ascending=False)
cat_info_3.head()
cat_info_3.info()
cat_info_jn1 = pd.merge(cat_info_1, cat_info_2, on='category_id', left_index=True, how='outer')
#prod_info_jn2.drop(['category_id_x'],inplace=True,axis=1)
cat_info_jn1.head()
cat_info_jn1.info()
cat_info_jn2 = pd.merge(cat_info_jn1, cat_info_3, on='category_id', left_index=True, how='outer')
#prod_info_jn2.drop(['category_id_x'],inplace=True,axis=1)
cat_info_jn2.head()
cat_info_jn2.info()
cat_info_jn3 = pd.merge(cats_df, cat_info_jn2, on='category_id', left_index=True, how='outer')
#prod_info_jn2.drop(['category_id_x'],inplace=True,axis=1)
cat_info_jn3.head()
cat_info_jn3.info()
cat_info_jn3_srt = cat_info_jn3.sort_values("quant_sold", ascending=False)
# cat_info_jn3_srt.drop([1,inplace=True,axis=1)
# cat_info_jn3_srt.drop(['index'],inplace=True,axis=1)
cat_info_jn3.head()
cat_info_jn3_srt.info()
cat_info_jn3.to_csv("/home/saisons/Code/zazzle-product-analysis/outputs/cat_info_jn3.csv")
prodjn2.to_csv("/home/saisons/Code/zazzle-product-analysis/outputs/final_output_products.csv")
cat_list = ['196717574142457136', '196739766766069967', '196990207237662364', '196959432771907138', '196189645389229864']
five_cats_list = prodjn2[prodjn2['category_id'].isin(cat_list)]
five_cats_list.head(5)
five_cats_list.info()
five_cats_list.to_csv("/home/saisons/Code/zazzle-product-analysis/outputs/five_cats_prods_list.csv")
full_prod_list = prodjn2.groupby(
["final_product_type"]
).quant_sold.sum().reset_index().sort_values("quant_sold", ascending=False)
full_prod_list.head()
full_prod_list.info()
full_prod_list.to_csv("/home/saisons/Code/zazzle-product-analysis/outputs/full_prod_list.csv")
Explanation: If NaN sorting out is required at a later date, enable 3 markdown lines above and add _fix to merge values below
End of explanation |
13,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plots of the mode of the mutation rate over time
Step1: The distribution of mutation rate modes as a function of population size
Step2: I need to remove the runs where the distribution of mutations got screwed up because the whole population had a mutation rate of 1.
Step3: The mode of the mode of the mutation rate as a function of population size
To better show what's going on, I'll instead plot the empirical distribution of the dominant mutation rate for each population size. I've adjusted the axis labels manually because adjusting all the bar widths and plotting on the log scale and then attempting to rescale is hard to get looking right, manually inserting the correct labels is the easiest way I've figured out how to do things.
Step4: The mutation rate landscape catastrophe | Python Code:
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plot_mu_trajectory(ax, mu_modes[25600][0][:2*10**6])
ax.set_xlabel('generation', fontsize=28);
ax.set_ylabel('mode of the mutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('mu_mode_trajectoryK25600.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plot_mu_trajectory(ax, mu_modes[400][0][:2*10**4])
ax.set_xlabel('generation', fontsize=28);
ax.set_ylabel('mode of the mutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('mu_mode_trajectoryK400.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plot_mu_trajectory(ax, mu_modes[1600][0][:2*10**5])
ax.set_xlabel('generation', fontsize=28);
ax.set_ylabel('mode of the mutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('mu_mode_trajectoryK1600.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(f_maxes[200][1][::100000])
Explanation: Plots of the mode of the mutation rate over time
End of explanation
def mus_to_mudist(mus):
mus_v = np.unique(mus)
mudist = np.zeros_like(mus_v,dtype='int64')
for i, mu in enumerate(mus_v):
mudist[i] = np.sum(mus==mu)
return mus_v, mudist/np.sum(mudist)
def mean_mode_mu(mus_v, mudist):
return np.sum(mus_v*mudist)
mu_dists = OrderedDict()
for K in Ks:
mu_dists[K]=[]
for mu_mode in mu_modes[K]:
mu_dists[K].append(mus_to_mudist(mu_mode[:]))
def mean_std_mu_dists(mu_dists):
mu_dists_s = []
for mu_dist in mu_dists:
mu_dists_s.append(pd.Series(mu_dist[1], mu_dist[0]))
N = len(mu_dists_s)
whole_index = mu_dists_s[0].index
for i in range(1, N):
whole_index = whole_index.union(mu_dists_s[i].index)
for i in range(1, N):
mu_dists_s[i]=(mu_dists_s[i]).reindex(index=whole_index, fill_value=0)
mu_dist_total = 0
mu_dist_total2 = 0
for mu_dist in mu_dists_s:
mu_dist_total = mu_dist_total + mu_dist
mu_dist_total2 = mu_dist_total2 + mu_dist**2
mean_mu_dist = mu_dist_total/N
mean_squared_mu_dist = mu_dist_total2/N
std_mu_dist = np.sqrt(mean_squared_mu_dist - mean_mu_dist**2)
return mean_mu_dist.dropna(), std_mu_dist.dropna()/np.sqrt(N)
def bar_plot_mudist(ax, mudist_m, mudist_std):
mus_v = mudist_m.index
prob = mudist_m.values
yerr = mudist_std.values
ind = np.arange(mus_v.size)
ax.bar(ind, prob, yerr=yerr)
ax.set_xticks(ind);
ax.set_xticklabels(['{:.2g}'.format(mu) for mu in mus_v], rotation=0);
Explanation: The distribution of mutation rate modes as a function of population size
End of explanation
for mud in mu_dists[200]:
print(mud[0])
for mud in mu_dists[400]:
print(mud[0])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
bar_plot_mudist(ax, *mean_std_mu_dists([mu_dists[200][i] for i in [0,2,5,6,7,8,9,10,11,13,14,16,17]]))
ax.set_xlabel('Mode of the mutation rate, $\mu_{mode}$', fontsize=28)
ax.set_ylabel('Probability', fontsize=28)
plt.savefig('db_mu_mode_distK200.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
bar_plot_mudist(ax, *mean_std_mu_dists(mu_dists[1600]))
ax.set_xlabel('Mode of the mutation rate, $\mu_{mode}$', fontsize=28)
ax.set_ylabel('Probability', fontsize=28)
plt.savefig('db_mu_mode_distK1600.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
bar_plot_mudist(ax, *mean_std_mu_dists(mu_dists[25600]))
ax.set_xlabel('Mode of the mutation rate, $\mu_{mode}$', fontsize=28)
ax.set_ylabel('Probability', fontsize=28)
plt.savefig('db_mu_mode_distK25600.pdf')
Explanation: I need to remove the runs where the distribution of mutations got screwed up because the whole population had a mutation rate of 1.
End of explanation
fig = plt.figure(figsize=(18,18));
ax = fig.add_subplot(111, projection='3d');
ys, yerss = mean_std_mu_dists([mu_dists[200][i] for i in [0,2,5,6,7,8,9,10,11,13,14,16,17]])
xs = np.log2(ys.index)
ax.bar(xs, ys, zs=10, zdir='y', alpha=.6, color=matplotlib.colors.hsv_to_rgb(np.array([0,1,1])));
colors = [matplotlib.colors.hsv_to_rgb(np.array([x,1,1])) for x in [3/32, 5/32, 8/32, 16/32, 20/32, 24/32, 28/32]]
for i, K in enumerate(Ks[1:]):
ys, yerrs = mean_std_mu_dists(mu_dists[K])
xs = np.log2(ys.index)
ax.bar(xs, ys, zs=-i*10, zdir='y', alpha=.7, color = colors[i]);
ax.set_xlabel('Mode of the mutation rate, $\mu_{mode}$ ', labelpad=40);
ax.set_ylabel('Population size, $K$', labelpad=25);
ax.set_zlabel('Probability');
ax.set_yticklabels([25600, 12800, 6400, 3200, 1600, 800, 400, 200],
rotation=-15, rotation_mode='anchor', ha='left', va='bottom');
ax.set_xticks(list(range(-14,-1)));
ax.set_xticklabels(['{:6.5f}'.format(i) for i in .00008*2**np.arange(14)],
rotation=45, rotation_mode='anchor', ha='right', va='center');
ax.plot3D(np.arange(-13,-8)+.25,10*np.arange(-6,-1),np.zeros(5), color='k',
marker='*', markersize=20, markerfacecolor='white')
ax.plot3D(np.arange(-9,-7)+.25,10*np.arange(-2,0),np.zeros(2), color='k', marker='')
ax.plot3D(np.arange(-8,-5)+.25,10*np.arange(-1,2),np.zeros(3), color='k', marker='x', markersize=20)
plt.savefig('drift_barrier_scaling.pdf')
Explanation: The mode of the mode of the mutation rate as a function of population size
To better show what's going on, I'll instead plot the empirical distribution of the dominant mutation rate for each population size. I've adjusted the axis labels manually because adjusting all the bar widths and plotting on the log scale and then attempting to rescale is hard to get looking right, manually inserting the correct labels is the easiest way I've figured out how to do things.
End of explanation
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(f_maxes[200][0][:5*10**4],marker='')
ax.set_xlabel('generation', fontsize=28)
ax.set_ylabel('maximum fitness, $f_{max}$', fontsize=28);
plt.savefig('fitness_catastrophe.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plot_mu_trajectory(ax, mu_modes[200][0][:5*10**4])
ax.set_xlabel('generation', fontsize=28);
ax.set_ylabel('mode of the mutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('mutation_rate_during_catastrophe.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(f_maxes[400][0][:5*10**4],marker='')
ax.set_xlabel('generation', fontsize=28)
ax.set_ylabel('maximum fitness, $f_{max}$', fontsize=28);
plt.savefig('fitness_not_catastrophe.pdf')
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plot_mu_trajectory(ax, mu_modes[400][0][:5*10**4])
ax.set_xlabel('generation', fontsize=28);
ax.set_ylabel('mode of the mutation rate, $\mu_{mode}$', fontsize=28);
plt.savefig('mutation_rate_not_catastrophe.pdf')
Explanation: The mutation rate landscape catastrophe
End of explanation |
13,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get Data
Step1: Basic Market Map
Step2: GDP data with grouping by continent
World Bank national accounts data, and OECD National Accounts data files. (The World Bank
Step3: Setting the color based on data
Step4: Adding a widget as tooltip | Python Code:
data = pd.read_csv('../../data_files/country_codes.csv', index_col=[0])
country_codes = data.index.values
country_names = data['Name']
Explanation: Get Data
End of explanation
market_map = MarketMap(names=country_codes,
# basic data which needs to set for each map
ref_data=data,
# Data frame which can be used for different properties of the map
# Axis and scale for color data
tooltip_fields=['Name'],
layout=Layout(width='800px', height='600px'))
market_map
market_map.colors = ['MediumSeaGreen']
market_map.font_style = {'font-size': '16px', 'fill':'white'}
market_map.title = 'Country Map'
market_map.title_style = {'fill': 'Red'}
Explanation: Basic Market Map
End of explanation
gdp_data = pd.read_csv('../../data_files/gdp_per_capita.csv', index_col=[0], parse_dates=True)
gdp_data.fillna(method='backfill', inplace=True)
gdp_data.fillna(method='ffill', inplace=True)
col = ColorScale(scheme='Greens')
continents = data['Continent'].values
ax_c = ColorAxis(scale=col, label='GDP per Capita', visible=False)
data['GDP'] = gdp_data.iloc[-1]
market_map = MarketMap(names=country_codes, groups=continents, # Basic data which needs to set for each map
cols=25, row_groups=3, # Properties for the visualization
ref_data=data, # Data frame used for different properties of the map
tooltip_fields=['Name', 'Continent', 'GDP'], # Columns from data frame to be displayed as tooltip
tooltip_formats=['', '', '.1f'],
scales={'color': col}, axes=[ax_c],
layout=Layout(min_width='800px', min_height='600px')) # Axis and scale for color data
deb_output = Label()
def selected_index_changed(change):
deb_output.value = str(change.new)
market_map.observe(selected_index_changed, 'selected')
VBox([deb_output, market_map])
# Attribute to show the names of the groups, in this case the continents
market_map.show_groups = True
# Setting the selected countries
market_map.show_groups = False
market_map.selected = ['PAN', 'FRA', 'PHL']
# changing selected stroke and hovered stroke variable
market_map.selected_stroke = 'yellow'
market_map.hovered_stroke = 'violet'
Explanation: GDP data with grouping by continent
World Bank national accounts data, and OECD National Accounts data files. (The World Bank: GDP per capita (current US$))
End of explanation
# Adding data for color and making color axis visible
market_map.colors=['#ccc']
market_map.color = data['GDP']
ax_c.visible = True
Explanation: Setting the color based on data
End of explanation
# Creating the figure to be displayed as the tooltip
sc_x = DateScale()
sc_y = LinearScale()
ax_x = Axis(scale=sc_x, grid_lines='dashed', label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', grid_lines='dashed',
label='GDP', label_location='end', label_offset='-1em')
line = Lines(x= gdp_data.index.values, y=[], scales={'x': sc_x, 'y': sc_y}, colors=['orange'])
fig_tooltip = Figure(marks=[line], axes=[ax_x, ax_y])
market_map = MarketMap(names=country_codes, groups=continents,
cols=25, row_groups=3,
color=data['GDP'], scales={'color': col}, axes=[ax_c],
ref_data=data, tooltip_widget=fig_tooltip,
freeze_tooltip_location=True,
colors=['#ccc'],
layout=Layout(min_width='900px', min_height='600px'))
# Update the tooltip chart
hovered_symbol = ''
def hover_handler(self, content):
global hovered_symbol
symbol = content.get('data', '')
if(symbol != hovered_symbol):
hovered_symbol = symbol
if(gdp_data.get(hovered_symbol) is not None):
line.y = gdp_data[hovered_symbol].values
fig_tooltip.title = content.get('ref_data', {}).get('Name', '')
# Custom msg sent when a particular cell is hovered on
market_map.on_hover(hover_handler)
market_map
Explanation: Adding a widget as tooltip
End of explanation |
13,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial attempts at profiling had very confusing results; possibly because of module loading and i/o
Here, gypsy will be run and profiled on one plot, with no module loading/io recorded in profiling
Characterize what is happening
In several places, we append data to a data frame
Step1: Either in the way we do it, or by its nature, it is a slow operation.
Step2: There is nothing very clear about performance from the documentation. It may be worth examining the source, and of course googling append performance.
python - Improve Row Append Performance On Pandas DataFrames - Stack Overflow
http
Step3: Speedup of nearly 1 order of magnitude
Revise the code
Go on. Do it.
Review code changes
Step4: Tests
There are some issues with the tests - the data does not match the old output data to within 3 or even 2 decimal places. The mismatch is always
Step5: Run profiling
Step6: Compare performance visualizations
Now use either of these commands to visualize the profiling
```
pyprof2calltree -k -i forward-sim-1.prof forward-sim-1.txt
or
dc run --service-ports snakeviz notebooks/forward-sim-1.prof
```
Old
New
Summary of performance improvements
forward_simulation is now 4x faster due to the changes outlined in the code review section above
on my hardware, this takes 1000 plots to ~8 minutes
on carol's hardware, this takes 1000 plots to ~25 minutes
For 1 million plots, we're looking at 5 to 17 days on desktop hardware
Caveat
this isn't dealing with i/o. reading the plot table in is not a huge problem, especially if we declare the field types, but writing the growth curves for each plot will be time consuming. threads may be necessary
Identify new areas to optimize
need to find another order of magnitude improvement to get to 2.4-15 hours
pandas indexing .ix (get and set item) is taking 6 and 19% respectively
collectively, the lambdas being applied to output data frame are taking 19%
BAFromZeroToDataAw is slow (50% of total time) because of (in order) | Python Code:
%%bash
grep --colour -nr append ../gypsy/*.py
Explanation: Initial attempts at profiling had very confusing results; possibly because of module loading and i/o
Here, gypsy will be run and profiled on one plot, with no module loading/io recorded in profiling
Characterize what is happening
In several places, we append data to a data frame
End of explanation
import pandas as pd
help(pd.DataFrame.append)
Explanation: Either in the way we do it, or by its nature, it is a slow operation.
End of explanation
%%timeit
d = pd.DataFrame(columns=['A'])
for i in xrange(1000):
d.append({'A': i}, ignore_index=True)
%%timeit
d = pd.DataFrame(columns=['A'], index=xrange(1000))
for i in xrange(1000):
d.loc[i,'A'] = i
1.39/.150
Explanation: There is nothing very clear about performance from the documentation. It may be worth examining the source, and of course googling append performance.
python - Improve Row Append Performance On Pandas DataFrames - Stack Overflow
http://stackoverflow.com/questions/27929472/improve-row-append-performance-on-pandas-dataframes
python - Pandas: Why should appending to a dataframe of floats and ints be slower than if its full of NaN - Stack Overflow
http://stackoverflow.com/questions/17141828/pandas-why-should-appending-to-a-dataframe-of-floats-and-ints-be-slower-than-if
python - Creating large Pandas DataFrames: preallocation vs append vs concat - Stack Overflow
http://stackoverflow.com/questions/31690076/creating-large-pandas-dataframes-preallocation-vs-append-vs-concat
python - efficient appending to pandas dataframes - Stack Overflow
http://stackoverflow.com/questions/32746248/efficient-appending-to-pandas-dataframes
python - Pandas append perfomance concat/append using "larger" DataFrames - Stack Overflow
http://stackoverflow.com/questions/31860671/pandas-append-perfomance-concat-append-using-larger-dataframes
pandas.DataFrame.append — pandas 0.18.1 documentation
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html
Decide on the action
Do not append in a loop. It makes a copy each time and the memory allocation is poor. Should have known; it's interesting to see it demonstrated in the wild!
Pre-allocate the dataframe length by giving it an index and assigning to the index
MWE
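A third option worth noting (a sketch, not part of the benchmark above): accumulate plain Python rows and build the DataFrame once at the end, which also avoids the per-iteration copies.
rows = [{'A': i} for i in range(1000)]
d = pd.DataFrame(rows)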
End of explanation
%%bash
git log --since 2016-11-07 --oneline | head -n 8
! git diff HEAD~7 ../gypsy
Explanation: Speedup of nearly 1 order of magnitude
Revise the code
Go on. Do it.
Review code changes
End of explanation
%%bash
git log --since '2016-11-08' --oneline | grep tests
Explanation: Tests
There are some issues with the tests - the data does not match the old output data to within 3 or even 2 decimal places. The mismatch is always:
(mismatch 0.221052631579%)
It was resolved in fe82864:
End of explanation
from gypsy.forward_simulation import simulate_forwards_df
data = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)
%%prun -D forward-sim-1.prof -T forward-sim-1.txt -q
result = simulate_forwards_df(data)
!head forward-sim-1.txt
!diff -y forward-sim-1.txt forward-sim.txt
Explanation: Run profiling
End of explanation
!cat forward-sim-1.txt | grep -i fromzero
Explanation: Compare performance visualizations
Now use either of these commands to visualize the profiling
```
pyprof2calltree -k -i forward-sim-1.prof forward-sim-1.txt
or
dc run --service-ports snakeviz notebooks/forward-sim-1.prof
```
Old
New
Summary of performance improvements
forward_simulation is now 4x faster due to the changes outlined in the code review section above
on my hardware, this takes 1000 plots to ~8 minutes
on carol's hardware, this takes 1000 plots to ~25 minutes
For 1 million plots, we're looking at 5 to 17 days on desktop hardware
Caveat
this isn't dealing with i/o. reading the plot table in is not a huge problem, especially if we declare the field types, but writing the growth curves for each plot will be time consuming. threads may be necessary
Identify new areas to optimize
need to find another order of magnitude improvement to get to 2.4-15 hours
pandas indexing .ix (get and set item) is taking 6 and 19% respectively
collectively, the lambdas being applied to output data frame are taking 19% (see the sketch after this list)
BAFromZeroToDataAw is slow (50% of total time) because of (in order):
pandas init (dict)
baincrementnonspatial
pandas setting
parallel (3 cores) gets us to 2 - 6 days - save for last
AWS with 36 cores gets us to 4 - 12 hours ($6.70 - $20.10 USD on a c4.8xlarge instance in US West Region)
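On the lambda point above, a generic illustration (not gypsy code) of the usual fix, replacing a row-wise apply with a vectorized expression:
df = pd.DataFrame({'x': range(1000)})
slow = df['x'].apply(lambda v: v * 2 + 1)   # python-level call per row
fast = df['x'] * 2 + 1                      # vectorized equivalent
assert slow.equals(fast)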
End of explanation |
13,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MatPlotLib Basics
Draw a line graph
Step1: Multiple Plots on One Graph
Step2: Save it to a File
Step3: Adjust the Axes
Step4: Add a Grid
Step5: Change Line Types and Colors
Step6: Labeling Axes and Adding a Legend
Step7: XKCD Style
Step8: Pie Chart
Step9: Bar Chart
Step10: Scatter Plot
Step11: Histogram
Step12: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data - except for outliers, which are plotted outside the whiskers. Outliers are 1.5X or more the interquartile range.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100 | Python Code:
%matplotlib inline
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.01)
plt.plot(x, norm.pdf(x))
plt.show()
Explanation: MatPlotLib Basics
Draw a line graph
End of explanation
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Multiple Plots on One Graph
End of explanation
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.savefig('C:\\Users\\Frank\\MyPlot.png', format='png')
Explanation: Save it to a File
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Adjust the Axes
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x))
plt.plot(x, norm.pdf(x, 1.0, 0.5))
plt.show()
Explanation: Add a Grid
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.show()
Explanation: Change Line Types and Colors
End of explanation
axes = plt.axes()
axes.set_xlim([-5, 5])
axes.set_ylim([0, 1.0])
axes.set_xticks([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
axes.set_yticks([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
axes.grid()
plt.xlabel('Greebles')
plt.ylabel('Probability')
plt.plot(x, norm.pdf(x), 'b-')
plt.plot(x, norm.pdf(x, 1.0, 0.5), 'r:')
plt.legend(['Sneetches', 'Gacks'], loc=4)
plt.show()
Explanation: Labeling Axes and Adding a Legend
End of explanation
plt.xkcd()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
Explanation: XKCD Style :)
End of explanation
# Remove XKCD mode:
plt.rcdefaults()
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
explode = [0, 0, 0.2, 0, 0]
labels = ['India', 'United States', 'Russia', 'China', 'Europe']
plt.pie(values, colors= colors, labels=labels, explode = explode)
plt.title('Student Locations')
plt.show()
Explanation: Pie Chart
End of explanation
values = [12, 55, 4, 32, 14]
colors = ['r', 'g', 'b', 'c', 'm']
plt.bar(range(0,5), values, color= colors)
plt.show()
Explanation: Bar Chart
End of explanation
from pylab import randn
X = randn(500)
Y = randn(500)
plt.scatter(X,Y)
plt.show()
Explanation: Scatter Plot
End of explanation
incomes = np.random.normal(27000, 15000, 10000)
plt.hist(incomes, 50)
plt.show()
Explanation: Histogram
End of explanation
uniformSkewed = np.random.rand(100) * 100 - 40
high_outliers = np.random.rand(10) * 50 + 100
low_outliers = np.random.rand(10) * -50 - 100
data = np.concatenate((uniformSkewed, high_outliers, low_outliers))
plt.boxplot(data)
plt.show()
Explanation: Box & Whisker Plot
Useful for visualizing the spread & skew of data.
The red line represents the median of the data, and the box represents the bounds of the 1st and 3rd quartiles.
So, half of the data exists within the box.
The dotted-line "whiskers" indicate the range of the data, except for outliers, which are plotted beyond the whiskers. Outliers are points more than 1.5X the interquartile range away from the box.
This example below creates uniformly distributed random numbers between -40 and 60, plus a few outliers above 100 and below -100:
End of explanation |
13,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step6: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step8: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
Step10: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step12: Problem 3
Another check
Step13: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step14: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step15: Problem 4
Convince yourself that the data is still good after shuffling!
Step16: Finally, let's save the data for later reuse
Step17: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step18: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
data_root = '.' # Change me to store data elsewhere
def download_progress_hook(count, blockSize, totalSize):
  """A hook to report the progress of a download. This is mostly intended for users with
  slow internet connections. Reports every 5% change in download progress.
  """
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
  """Download a file if not present, and make sure it's the right size."""
dest_filename = os.path.join(data_root, filename)
if force or not os.path.exists(dest_filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(dest_filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', dest_filename)
else:
raise Exception(
'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')
return dest_filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(data_root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
def display_random_image(folder, num_to_display):
    """Display a number of images from a folder."""
files = os.listdir(folder)
files_sample = np.random.choice(files, num_to_display)
for file_sample in files_sample:
display(Image(filename=os.path.join(folder, file_sample)))
for folder in train_folders:
display_random_image(folder, 5)
for folder in test_folders:
display_random_image(folder, 5)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
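# Quick sanity check of the scaling used in load_letter (a sketch added here, not from the
# original assignment): (x - pixel_depth/2) / pixel_depth maps 0 -> -0.5 and 255 -> +0.5.
print((np.array([0.0, 255.0]) - 255.0 / 2) / 255.0)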
def display_random_matrix(pickle_file, num_to_display):
    """Display a number of images from a folder."""
with open(pickle_file, 'rb') as f:
dataset = pickle.load(f)
index_samples = np.random.choice(range(0, dataset.shape[0]), num_to_display)
for index_sample in index_samples:
plt.figure()
plt.imshow(dataset[index_sample, :, :])
for pickle_file in train_datasets:
display_random_matrix(pickle_file, 3)
for pickle_file in test_datasets:
display_random_matrix(pickle_file, 1)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
def display_class_size(pickle_file):
    """Number of samples in each class."""
with open(pickle_file, 'rb') as f:
dataset = pickle.load(f)
print("Dataset {0} is of size: {1}".format(pickle_file, dataset.shape[0]))
for pickle_file in train_datasets:
display_class_size(pickle_file)
for pickle_file in test_datasets:
display_class_size(pickle_file)
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
def show_random_samples(dataset, labels, num_to_display):
index_samples = np.random.choice(range(0, dataset.shape[0]), num_to_display)
for index_sample in index_samples:
print(labels[index_sample])
plt.matshow(dataset[index_sample, :, :])
show_random_samples(train_dataset, train_labels, 5)
show_random_samples(test_dataset, test_labels, 3)
show_random_samples(valid_dataset, valid_labels, 3)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
End of explanation
pickle_file = os.path.join(data_root, 'notMNIST.pickle')
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Finally, let's save the data for later reuse:
End of explanation
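# Sketch (not part of the original assignment): reloading the saved file later looks like this.
with open(pickle_file, 'rb') as f:
    reloaded = pickle.load(f)
print(reloaded['train_dataset'].shape, reloaded['train_labels'].shape)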
train_dataset[455, :, :].shape
overlap_train_validation = 0
overlap_train_test = 0
overlap_validation_test = 0
for train_index in range(0, train_dataset.shape[0]):
image_train = train_dataset[train_index, :, :]
for valid_index in range(0, valid_dataset.shape[0]):
image_valid = valid_dataset[valid_index, :, :]
if np.array_equal(image_train, image_valid):
overlap_train_validation += 1
for test_index in range(0, test_dataset.shape[0]):
image_test = test_dataset[test_index, :, :]
if np.array_equal(image_train, image_test):
overlap_train_test += 1
for valid_index in range(0, valid_dataset.shape[0]):
image_valid = valid_dataset[valid_index, :, :]
for test_index in range(0, test_dataset.shape[0]):
image_test = test_dataset[test_index, :, :]
if np.array_equal(image_valid, image_test):
overlap_validation_test += 1
print("Overlap between training and validation datasets = {0}".format(overlap_train_validation))
print("Overlap between training and test datasets = {0}".format(overlap_train_test))
print("Overlap between test and validation datasets = {0}".format(overlap_validation_test))
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
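# A faster alternative (my sketch, not the assignment's reference solution): hashing each image
# makes the overlap check roughly linear instead of the pairwise loops above. Note that a set
# counts a duplicated image only once, so the totals can differ slightly from the loop version.
import hashlib
def image_hashes(dataset):
    return {hashlib.sha1(img.tobytes()).hexdigest() for img in dataset}
train_hashes = image_hashes(train_dataset)
print('train/valid overlap (hashed):', len(train_hashes & image_hashes(valid_dataset)))
print('train/test overlap (hashed):', len(train_hashes & image_hashes(test_dataset)))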
def model_test(train_dataset, train_labels, test_dataset, test_labels, number_of_samples=None):
if not number_of_samples or number_of_samples > train_dataset.shape[0]:
samples = range(0, train_dataset.shape[0])
else:
samples = np.random.choice(range(0, train_dataset.shape[0]), number_of_samples)
logreg = LogisticRegression()
training_data = train_dataset[samples, :, :]
training_labels = train_labels[samples]
X = np.ndarray((training_data.shape[0], training_data.shape[1] * training_data.shape[2]), dtype=np.float32)
Z = np.ndarray((test_dataset.shape[0], test_dataset.shape[1] * test_dataset.shape[2]), dtype=np.float32)
# Need to convert data to 2D rather than 3D array - convert image into 1D
for train_index in range(0, training_data.shape[0]):
X[train_index] = np.ndarray.flatten(training_data[train_index, :, :])
for test_index in range(0, test_dataset.shape[0]):
Z[test_index] = np.ndarray.flatten(test_dataset[test_index, :, :])
logreg.fit(X, training_labels)
predicted = logreg.predict(Z)
print("With {0} training samples, accuracy is {1}".format(number_of_samples, np.mean(predicted == test_labels)*100))
model_test(train_dataset, train_labels, test_dataset, test_labels, number_of_samples=50)
model_test(train_dataset, train_labels, test_dataset, test_labels, number_of_samples=100)
model_test(train_dataset, train_labels, test_dataset, test_labels, number_of_samples=1000)
model_test(train_dataset, train_labels, test_dataset, test_labels, number_of_samples=5000)
model_test(train_dataset, train_labels, test_dataset, test_labels)
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation |
13,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Raw Data Download
This script downloads all of the stage 1 raw data for the Kaggle Data Science Bowl 2017 (https
Step1: Defining function to download and extract all raw data
Step2: Grabbing all the stored raw data urls | Python Code:
from urllib import request
import zipfile, io
from pathlib import Path
import os
import re
from pyunpack import Archive
Explanation: Raw Data Download
This script downloads all of the stage 1 raw data for the Kaggle Data Science Bowl 2017 (https://www.kaggle.com/c/data-science-bowl-2017)
We are pulling the raw data from the data page
https://www.kaggle.com/c/data-science-bowl-2017/data
End of explanation
def extract_files(url, orig_dir=os.getcwd()):
os.chdir('../..')
    file_name = 'data/raw/' + re.search(r'(\w+)(\.\w+)+(?!.*(\w+)(\.\w+)+)', url).group(0)
request.urlretrieve(url, filename=file_name)
Archive(file_name).extractall('')
os.chdir(orig_dir)
#, filename='data/raw/' + re.findall('(\w+)(\.\w+)+(?!.*(\w+)(\.\w+)+)', url)[0][0]
Explanation: Defining function to download and extract all raw data
End of explanation
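# A sturdier way to pull the file name out of a URL (a sketch only; the regex above is what
# this notebook actually uses):
from urllib.parse import urlparse
def filename_from_url(url):
    return os.path.basename(urlparse(url).path)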
with open('.dataurl') as file:
urls = file.read().splitlines()
for url in urls:
extract_files(url)
Explanation: Grabbing all the stored raw data urls
End of explanation |
13,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
We train an LSTM with gumbel-sigmoid gates on a toy language modelling problem.
Such an LSTM can then be binarized to reach significantly greater speed.
Step1: Generate mtg cards
Regular RNN language modelling done by LSTM with "binary" gates
Step2: Text processing
Step3: Cast everything from symbols into identifiers
Step4: Input variables
Step5: Build NN
You'll be building a model that takes a token sequence and predicts the next token at each tick
This is basically equivalent to how the RNN step was described in the lecture
Step6: Loss && Training
Step7: generation
Here we re-wire the recurrent network so that its output is fed back to its input
Step8: Model training
Here you can tweak parameters or insert your generation function
Once something word-like starts generating, try increasing seq_length | Python Code:
%env THEANO_FLAGS="device=gpu3"
import numpy as np
import theano
import theano.tensor as T
import lasagne
import os
Explanation: Contents
We train an LSTM with gumbel-sigmoid gates on a toy language modelling problem.
Such an LSTM can then be binarized to reach significantly greater speed.
End of explanation
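# Rough numpy illustration of the gumbel-sigmoid relaxation itself (my sketch; the model below
# relies on the GumbelSigmoid class imported later): logistic noise is added to the gate logits
# and squashed with a temperature, and as the temperature goes to 0 the gates approach hard 0/1.
def gumbel_sigmoid_sample(logits, temperature=0.5):
    u = np.random.uniform(1e-6, 1 - 1e-6, size=np.shape(logits))
    noise = np.log(u) - np.log(1 - u)          # logistic (difference of two Gumbels) noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))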
start_token = " "
with open("mtg_card_names.txt") as f:
names = f.read()[:-1].split('\n')
names = [start_token+name for name in names]
print 'n samples = ',len(names)
for x in names[::1000]:
print x
Explanation: Generate mtg cards
Regular RNN language modelling done by LSTM with "binary" gates
End of explanation
#all unique characters go here
token_set = set()
for name in names:
for letter in name:
token_set.add(letter)
tokens = list(token_set)
print 'n_tokens = ',len(tokens)
#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>
token_to_id = {t:i for i,t in enumerate(tokens) }
#!id_to_token = < dictionary of symbol identifier -> symbol itself>
id_to_token = {i:t for i,t in enumerate(tokens)}
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(map(len,names),bins=25);
# truncate names longer than MAX_LEN characters.
MAX_LEN = min([60,max(list(map(len,names)))])
#ADJUST IF YOU ARE UP TO SOMETHING SERIOUS
Explanation: Text processing
End of explanation
names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names))
#crop long names and pad short ones
for i in range(len(names_ix)):
names_ix[i] = names_ix[i][:MAX_LEN] #crop too long
if len(names_ix[i]) < MAX_LEN:
names_ix[i] += [token_to_id[" "]]*(MAX_LEN - len(names_ix[i])) #pad too short
assert len(set(map(len,names_ix)))==1
names_ix = np.array(names_ix)
Explanation: Cast everything from symbols into identifiers
End of explanation
from agentnet import Recurrence
from lasagne.layers import *
from agentnet.memory import *
from agentnet.resolver import ProbabilisticResolver
from gumbel_sigmoid import GumbelSigmoid
sequence = T.matrix('token sequence','int64')
inputs = sequence[:,:-1]
targets = sequence[:,1:]
l_input_sequence = InputLayer(shape=(None, None),input_var=inputs)
Explanation: Input variables
End of explanation
###One step of rnn
class rnn:
n_hid = 100
#inputs
inp = InputLayer((None,),name='current character')
prev_cell = InputLayer((None,n_hid),name='previous lstm cell')
prev_hid = InputLayer((None,n_hid),name='previous ltsm output')
#recurrent part
emb = EmbeddingLayer(inp, len(tokens), 30,name='emb')
new_cell,new_hid = LSTMCell(prev_cell,prev_hid,emb,
name="rnn")
next_token_probas = DenseLayer(new_hid,len(tokens),nonlinearity=T.nnet.softmax)
#pick next token from predicted probas
next_token = ProbabilisticResolver(next_token_probas)
Explanation: Build NN
You'll be building a model that takes a token sequence and predicts the next token at each tick
This is basically equivalent to how the RNN step was described in the lecture
End of explanation
training_loop = Recurrence(
state_variables={rnn.new_hid:rnn.prev_hid,
rnn.new_cell:rnn.prev_cell},
input_sequences={rnn.inp:l_input_sequence},
tracked_outputs=[rnn.next_token_probas,],
unroll_scan=False,
)
# Model weights
weights = lasagne.layers.get_all_params(training_loop,trainable=True)
print weights
predicted_probabilities = lasagne.layers.get_output(training_loop[rnn.next_token_probas])
#If you use dropout do not forget to create deterministic version for evaluation
loss = lasagne.objectives.categorical_crossentropy(predicted_probabilities.reshape((-1,len(tokens))),
targets.reshape((-1,))).mean()
#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>
updates = lasagne.updates.adam(loss,weights)
#training
train_step = theano.function([sequence], loss,
updates=training_loop.get_automatic_updates()+updates)
Explanation: Loss && Training
End of explanation
n_steps = T.scalar(dtype='int32')
feedback_loop = Recurrence(
state_variables={rnn.new_cell:rnn.prev_cell,
rnn.new_hid:rnn.prev_hid,
rnn.next_token:rnn.inp},
tracked_outputs=[rnn.next_token_probas,],
batch_size=1,
n_steps=n_steps,
unroll_scan=False,
)
generated_tokens = get_output(feedback_loop[rnn.next_token])
generate_sample = theano.function([n_steps],generated_tokens,updates=feedback_loop.get_automatic_updates())
def generate_string(length=MAX_LEN):
output_indices = generate_sample(length)[0]
return ''.join(tokens[i] for i in output_indices)
generate_string()
Explanation: generation
Here we re-wire the recurrent network so that its output is fed back to its input
End of explanation
def sample_batch(data, batch_size):
rows = data[np.random.randint(0,len(data),size=batch_size)]
return rows
print("Training ...")
#total N iterations
n_epochs=100
# how many minibatches are there in the epoch
batches_per_epoch = 500
#how many training sequences are processed in a single function call
batch_size=32
loss_history = []
for epoch in xrange(n_epochs):
avg_cost = 0;
for _ in range(batches_per_epoch):
avg_cost += train_step(sample_batch(names_ix,batch_size))
loss_history.append(avg_cost)
print("\n\nEpoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch))
print "Generated names"
for i in range(10):
print generate_string(),
plt.plot(loss_history)
Explanation: Model training
Here you can tweak parameters or insert your generation function
Once something word-like starts generating, try increasing seq_length
End of explanation |
13,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generación de trayectorias por medio de LSPB
El objetivo de esta práctica es generar una trayectoria para un robot manipulador, de tal manera que no tenga cambios bruscos de posición o velocidad.
El algoritmo general que vamos a usar se llama LSPB (Linear segment with parabolic blend), en el cual vamos a tener una velocidad de crucero para el manipulador, así como un periodo en el que el manipulador acelerará constantemente y otro periodo en el que desacelererá.
Una trayectoria generada por este método se ve asi
Step1: Vamos a hacer una prueba en primer lugar
Step2: En la gráfica anterior podemos ver no solo la posición en el primer cuadro, si no tambien la velocidad y la aceleración en el segundo y tercero, de tal manera que nos damos una mejor idea de la trayectoria.
Vamos a generar un conjunto de trayectorias para un ejemplo, primero empecemos importando pi
Step3: Vamos a generar una trayectoria en la que en los primero dos segundos se mueva de $0^o$ a $90^o$, en los segundos dos segundos de $90^o$ a $-60^o$ y en los ultimos seis segundos de $-60^o$ a $240^o$
Step4: Si quiero concatenar todos estos arreglos que generé, tan solo tengo que sumarlos
Step5: Esta trayectoria la podemos graficar
Step6: Pero mas importante, puedo generar una animación, tomando en cuenta que es un pendulo simple | Python Code:
from generacion_trayectorias import grafica_trayectoria
%matplotlib inline
Explanation: Trajectory generation via LSPB
The goal of this exercise is to generate a trajectory for a robot manipulator such that it has no abrupt changes in position or velocity.
The general algorithm we will use is called LSPB (Linear Segment with Parabolic Blend), in which the manipulator has a cruise velocity, as well as one period of constant acceleration and another period of constant deceleration.
A trajectory generated by this method looks like this:
In the first section there is constant acceleration, in the second a constant velocity, and in the third a constant acceleration of opposite sign to the first.
The method used to generate it is not particularly difficult, just somewhat tedious to program, so for convenience it is already implemented for this exercise; we only need to import the code:
End of explanation
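# Rough sketch (added here only for illustration) of the kind of LSPB profile that
# grafica_trayectoria encapsulates, assuming the blend time is one third of the segment;
# the course module generacion_trayectorias remains the reference implementation.
import numpy
def lspb_sketch(t0, tf, q0, qf, n=100):
    t = numpy.linspace(t0, tf, n)
    T = tf - t0
    tb = T / 3.0                      # parabolic blend (acceleration) time
    V = (qf - q0) / (T - tb)          # cruise velocity
    a = V / tb                        # constant acceleration during the blends
    q = numpy.where(t - t0 < tb,
                    q0 + 0.5 * a * (t - t0)**2,
                    numpy.where(t - t0 > T - tb,
                                qf - 0.5 * a * (tf - t)**2,
                                q0 + V * (t - t0 - tb / 2.0)))
    return t, q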
ts, qs, q̇s, q̈s = grafica_trayectoria(0, 2, 0, 1, 1000)
Explanation: First, let's run a quick test:
End of explanation
from numpy import pi
τ = 2*pi
Explanation: In the figure above we can see not only the position in the first panel, but also the velocity and acceleration in the second and third, which gives a better picture of the trajectory.
Let's generate a set of trajectories for an example, starting by importing pi:
End of explanation
ts, q1, q̇1, q̈1 = grafica_trayectoria(0, 2, 0, τ/4, 100)
ts, q2, q̇2, q̈2 = grafica_trayectoria(2, 4, τ/4, -τ/6, 100)
ts, q3, q̇3, q̈3 = grafica_trayectoria(4, 10, -τ/6, 2*τ/3, 300)
Explanation: We will generate a trajectory that moves from $0^o$ to $90^o$ in the first two seconds, from $90^o$ to $-60^o$ in the next two seconds, and from $-60^o$ to $240^o$ in the last six seconds:
End of explanation
qs = q1 + q2 + q3
Explanation: To concatenate all of the arrays I generated, I only have to add them:
End of explanation
from matplotlib.pyplot import figure, style
from numpy import linspace
fig = figure(figsize=(17, 5))
ax = fig.gca()
ts = linspace(0, 10, 500)
ax.plot(ts, qs)
Explanation: We can plot this trajectory:
End of explanation
# Import the animation module to build the animation, and rc so the video can be shown
# directly in the notebook
from matplotlib import animation, rc
rc('animation', html='html5')
# Import the functions needed to compute the forward and inverse kinematics
from numpy import sin, cos, arange
# Define a function that computes the forward kinematics of the system
def cinematica_directa_pendulo(q1):
# Se definen constantes utilizadas para graficar el sistema
l1, l2 = 1, 1
xs = [0, l1*cos(q1)]
ys = [0, l1*sin(q1)]
return xs, ys
# Se define el tamaño de la figura
fig = figure(figsize=(8, 8))
# Se define una sola grafica en la figura y se dan los limites de los ejes x y y
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.1, 1.1), ylim=(-1.1, 1.1))
# Se utilizan graficas de linea para el eslabon del pendulo
linea, = axi.plot([], [], "-o", lw=2, color='gray')
def inicializacion():
'''Esta funcion se ejecuta una sola vez y sirve para inicializar el sistema'''
# Se inicializa la linea vacia para evitar que al principio exista una linea en la grafica
linea.set_data([], [])
return linea
def animacion(i):
'''Esta funcion se ejecuta para cada cuadro del GIF'''
# Se obtienen las coordenadas x y y para el eslabon
xs, ys = cinematica_directa_pendulo(qs[i])
# Se actualiza el estado de la linea con las coordenadas calculadas
linea.set_data(xs, ys)
return linea
# Se hace la animacion dandole la funcion que se debe ejecutar para cada cuadro, el numero de cuadros
# que se debe de hacer, el periodo de cada cuadro y la funcion inicial
ani = animation.FuncAnimation(fig, animacion, arange(1, len(qs)), interval=20, init_func=inicializacion)
ani
Explanation: More importantly, I can generate an animation, treating the system as a simple pendulum:
End of explanation |
13,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UCI SECOM Dataset
Semiconductor manufacturing process dataset
2018/7/17 Wayne Nixalo
0. Setup
Step1: 1. EDA
Step2: 50 random signals
Step3: All failures (104)
Step4: Random 100 passes
Step5: Eyeing it isn't going to work.
2. Data split
train / val
Step6: Since there are only 104 negative examples to 1463 positives, I want to ensure there's a similar ratio in the split datasets.
Step7: I could try resampling negative examples to artificially balance the dataset, although I won't attempt to generatively create new examples here.
2.1 Data preprocessing
Step8: Separate data into inputs and labels
Step9: Preprocessing
Step10: 3. Linear Models 1
Step11: An R2 score (what the Linear Regressor is using as its scoring metric) gives a value of 1 for a perfect score and 0 for taking the average; anything below zero is worse than just taking the average of the dataset.
I wonder if I was just misusing this model. Though I guess fitting a simple line to this dataset and generalizing would be difficult.
4. Linear Models 2
Step12: This gives more-expected results.
5. Support Vector Machine
Step13: 6. Simple Neural Network - exploring issues
Step14: One-Hot Encode -1/+1 pass/fail
Step15: Normalizing to [0,1]
... after sklearn scaling
Step16: Clipping to [0,1]
Step17: No clipping; only sklearn scaling | Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing, svm
from sklearn.linear_model import LinearRegression, LogisticRegression
PATH = Path('data/datasets/paresh2047/uci-semcom')
df = pd.read_csv(PATH/'uci-secom.csv')
Explanation: UCI SECOM Dataset
Semiconductor manufacturing process dataset
2018/7/17 Wayne Nixalo
0. Setup
End of explanation
df.head() # -1 pass; +1 fail
df
df.values.shape
col = df.columns[-1]
col
passes = df.loc[df[col]==-1]
fails = df.loc[df[col]== 1]
plt.style.use('seaborn')
def plot_row(df, rows=0, show_nans=False, figsize=None, alpha=1.):
if figsize is not None:
fig = plt.figure(figsize=(figsize))
if type(rows) == int:
rows = [rows]
for row in rows:
row = df.values[row][1:]
if show_nans:
nans = np.where(pd.isnull(row))
ymax,ymin = max(row)/5, -max(row)/5
plt.vlines(nans, ymin=ymin, ymax=ymax, linewidth=.5, color='firebrick')
plt.plot(range(len(row)), row, alpha=alpha);
plot_row(df, figsize=(12,8), show_nans=True)
Explanation: 1. EDA
End of explanation
plot_row(df, np.random.randint(len(df), size=50), figsize=(12,8), alpha=0.1)
Explanation: 50 random signals:
End of explanation
plot_row(fails, rows=range(len(fails)), figsize=(12,8), alpha=0.1)
Explanation: All failures (104)
End of explanation
plot_row(passes, rows=np.random.randint(len(passes), size=100), figsize=(12,8), alpha=0.1)
Explanation: Random 100 passes
End of explanation
def train_val_idxs(data, p=0.2):
idxs = np.random.permutation(len(data))
n_val = int(len(data)*p)
return idxs[n_val:], idxs[:n_val]
train_idxs, val_idxs = train_val_idxs(df)
train.columns
train = df.iloc[train_idxs]
valid = df.iloc[val_idxs]
# remove the first 'timestamp' column
train = train.drop(columns=['Time'])
valid = valid.drop(columns=['Time'])
len(train), len(valid)
Explanation: Eyeing it isn't going to work.
2. Data split
train / val : 80 / 20
End of explanation
pos, neg = len(passes), len(fails)
pos, neg, neg/pos
pos, neg = len(valid.loc[valid[col]==-1]), len(valid.loc[valid[col]== 1])
pos, neg, neg/pos
pos, neg = len(train.loc[train[col]==-1]), len(train.loc[train[col]== 1])
pos, neg, neg/pos
Explanation: Since there are only 104 negative examples to 1463 positives, I want to ensure there's a similar ratio in the split datasets.
End of explanation
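# If an exactly matched pass/fail ratio is wanted, a stratified split is an alternative
# (a sketch using scikit-learn; the notebook above keeps its own random split):
from sklearn.model_selection import train_test_split
train_strat, valid_strat = train_test_split(df.drop(columns=['Time']), test_size=0.2,
                                            stratify=df[col], random_state=0)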
# replacing NaNs with the mean of each row
for rdx in range(len(train)):
train.iloc[rdx] = train.iloc[rdx].fillna(train.iloc[rdx].mean())
for rdx in range(len(valid)):
valid.iloc[rdx] = valid.iloc[rdx].fillna(valid.iloc[rdx].mean())
Explanation: I could try resampling negative examples to artificially balance the dataset, although I won't attempt to generatively create new examples here.
2.1 Data preprocessing
End of explanation
x_train = train.drop([col], 1).values
y_train = train[col].values
x_valid = valid.drop([col], 1).values
y_valid = valid[col].values
Explanation: Separate data into inputs and labels:
End of explanation
x_train = preprocessing.scale(x_train)
x_valid = preprocessing.scale(x_valid)
Explanation: Preprocessing: Center to Mean and Scale to Unit Variance
End of explanation
clsfr = LinearRegression()
clsfr.fit(x_train, y_train)
# clsfr.fit(x_valid, y_valid)
clsfr.score(x_train, y_train)
clsfr.score(x_valid, y_valid)
Explanation: 3. Linear Models 1: Linear Regression
Classifier:
End of explanation
clsfr = LogisticRegression()
clsfr.fit(x_train, y_train)
clsfr.score(x_train, y_train)
clsfr.score(x_valid, y_valid)
Explanation: An R2 score (what the Linear Regressor is using as its scoring metric) gives a value of 1 for a perfect score and 0 for taking the average; anything below zero is worse than just taking the average of the dataset.
I wonder if I was just misusing this model. Though I guess fitting a simple line to this dataset and generalizing would be difficult.
4. Linear Models 2: Logistic Regression
End of explanation
clsfr = svm.LinearSVC()
clsfr.fit(x_train, y_train)
clsfr.score(x_train, y_train)
clsfr.score(x_valid, y_valid)
Explanation: This gives more-expected results.
5. Support Vector Machine
End of explanation
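# With roughly 93% passing wafers, plain accuracy is easy to inflate; a confusion matrix and
# balanced accuracy give a clearer picture (a sketch, not part of the original experiments):
from sklearn.metrics import confusion_matrix, balanced_accuracy_score
preds = clsfr.predict(x_valid)
print(confusion_matrix(y_valid, preds))
print('balanced accuracy:', balanced_accuracy_score(y_valid, preds))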
import torch
import torch.nn as nn
import torch.nn.functional as F
from fastai.learner import *
from fastai.dataloader import DataLoader
import torchvision
class SimpleNet(nn.Module):
def __init__(self, in_size):
super().__init__()
self.fc0 = nn.Linear(in_size, 80)
self.fc1 = nn.Linear(80, 2)
def forward(self, x):
x = F.relu(self.fc0(x))
x = F.log_softmax(self.fc1(x))
return x
class SignalDataset(Dataset):
def __init__(self, x, y, transform=None):
self.x = np.copy(x)
self.y = np.copy(y)
self.transform = transform
def __len__(self):
return len(self.x)
def __getitem__(self, i):
x = self.x[i]
y = self.y[i]
if self.transform is not None:
x = self.transform(x)
return (x, y)
Explanation: 6. Simple Neural Network - exploring issues
End of explanation
y_train.shape
def one_hot_y(y_data):
y = np.zeros((y_data.shape[0], 2))
for i,yi in enumerate(y_data):
y[i][int((yi + 1)/2)] = 1
return y
Explanation: One-Hot Encode -1/+1 pass/fail
End of explanation
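# Equivalent vectorized form of the encoder above (a sketch): index an identity matrix with
# the 0/1 class id recovered from the -1/+1 labels.
one_hot_train = np.eye(2)[((y_train + 1) // 2).astype(int)]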
train_dataset = SignalDataset(x_train, one_hot_y(y_train))
valid_dataset = SignalDataset(x_valid, one_hot_y(y_valid))
train_dataset.x
minval = abs(np.min(train_dataset.x))
train_dataset.x += minval
train_dataset.x /= np.max(train_dataset.x)
minval = abs(np.min(valid_dataset.x))
valid_dataset.x += minval
valid_dataset.x /= np.max(valid_dataset.x)
train_dataloader = DataLoader(train_dataset)
valid_dataloader = DataLoader(valid_dataset)
mdata = ModelData(PATH, train_dataloader, valid_dataloader)
network = SimpleNet(len(train_dataset.x[0]))
network
learner = Learner.from_model_data(network, mdata)
learner.lr_find()
learner.sched.plot()
learner.fit(1e-4, n_cycle=5, wds=1e-6)
log_preds = learner.predict()
np.exp(log_preds)[:40]
Explanation: Normalizing to [0,1]
... after sklearn scaling
End of explanation
train_dataset = SignalDataset(x_train, one_hot_y(y_train))
valid_dataset = SignalDataset(x_valid, one_hot_y(y_valid))
train_dataset.x
train_dataset.x = np.clip(train_dataset.x, 0.0, 1.0)
valid_dataset.x = np.clip(valid_dataset.x, 0.0, 1.0)
train_dataloader = DataLoader(train_dataset)
valid_dataloader = DataLoader(valid_dataset)
mdata = ModelData(PATH, train_dataloader, valid_dataloader)
network = SimpleNet(len(train_dataset.x[0]))
network
learner = Learner.from_model_data(network, mdata)
learner.lr_find()
learner.sched.plot()
learner.fit(1e-4, n_cycle=5, wds=1e-6)
log_preds = learner.predict()
np.exp(log_preds)[:40]
Explanation: Clipping to [0,1]
End of explanation
train_dataset = SignalDataset(x_train, one_hot_y(y_train))
valid_dataset = SignalDataset(x_valid, one_hot_y(y_valid))
train_dataset.x
train_dataloader = DataLoader(train_dataset)
valid_dataloader = DataLoader(valid_dataset)
mdata = ModelData(PATH, train_dataloader, valid_dataloader)
network = SimpleNet(len(train_dataset.x[0]))
network
learner = Learner.from_model_data(network, mdata)
learner.lr_find()
learner.sched.plot()
learner.fit(5e-4, n_cycle=5, wds=1e-6)
log_preds = learner.predict()
np.exp(log_preds)[:40]
Explanation: No clipping; only sklearn scaling
End of explanation |
13,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Careful, these constants may be different for you
Step1: First import the training and testing sets
Step2: Fit the training data.
Step3: Sanity checks
One variable
First we plot the expected estimated density as a function of each of the five variables. We expect the function and the histogram to match.
Step4: Two variables
We also plot densities for each pair of variables. | Python Code:
DATA_PATH = '~/Desktop/sdss_dr7_photometry_source.csv.gz'
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.neighbors
%matplotlib inline
PSF_COLS = ('psfMag_u', 'psfMag_g', 'psfMag_r', 'psfMag_i', 'psfMag_z')
Explanation: Careful, these constants may be different for you:
End of explanation
def load_data(x_cols=PSF_COLS,
class_col='class',
class_val='Galaxy',
train_samples_num=1000000):
# Cast x_cols to list so Pandas doesn't complain…
x_cols_l = list(x_cols)
data_iter = pd.read_csv(
DATA_PATH,
iterator=True,
chunksize=100000,
usecols=x_cols_l + [class_col])
# Filter out anything that is not a galaxy without loading the whole file into memory.
data = pd.concat(chunk[chunk[class_col] == class_val]
for chunk in data_iter)
train_X = data[:train_samples_num][x_cols_l].as_matrix()
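    # note: .as_matrix() was removed in newer pandas; .to_numpy() is the modern equivalent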
assert train_X.shape == (train_samples_num, len(x_cols))
return train_X
data = load_data()
Explanation: First import the training and testing sets
End of explanation
def fit(train_X,
bandwidth=1, # By experimentation.
kernel='epanechnikov', # Resembles Gaussian within short distance, but is faster.
leaf_size=400, # For speed.
rtol=1e-3): # Decreased accuracy, but better speed.
estimator = sklearn.neighbors.KernelDensity(bandwidth=bandwidth,
kernel=kernel,
leaf_size=leaf_size,
rtol=rtol)
estimator.fit(train_X)
return estimator
kde = fit(data)
Explanation: Fit the training data.
End of explanation
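# The bandwidth above was chosen by hand; when compute allows, cross-validation is a common way
# to pick it (an illustrative sketch on a subsample, not something the original run performed):
from sklearn.model_selection import GridSearchCV
bw_search = GridSearchCV(sklearn.neighbors.KernelDensity(kernel='epanechnikov'),
                         {'bandwidth': np.linspace(0.1, 2.0, 10)}, cv=3)
bw_search.fit(data[:20000])
print(bw_search.best_params_)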
def make_5D_grid(train_X,
grid_samples_per_axis=20): # Careful! This code is O(n^5) in this variable
mins = np.min(train_X, axis=0)
maxs = np.max(train_X, axis=0)
assert mins.shape == maxs.shape == (train_X.shape[1],)
# Produce the 5D grid. This is surprisingly nontrivial.
# http://stackoverflow.com/questions/28825219/
linspaces = [np.linspace(i, j, grid_samples_per_axis)
for i, j in zip(mins, maxs)]
mesh_grids = np.meshgrid(*linspaces,
indexing='ij') # Otherwise numpy swaps the first two dimensions… 😕
sample_points = np.array(mesh_grids)
return sample_points
grid = make_5D_grid(data)
def evaluate_density_at_sample_points(estimator, sample_points):
dims = sample_points.shape[0]
samples_per_axis = sample_points.shape[1]
assert sample_points.shape[1:] == (samples_per_axis,) * dims
sample_points = np.reshape(sample_points, (dims, samples_per_axis ** dims))
densities = estimator.score_samples(sample_points.T)
densities = np.reshape(densities, (samples_per_axis,) * dims)
# Convert from log densities
densities = np.exp(densities)
return densities
grid_densities = evaluate_density_at_sample_points(kde, grid)
def plot_against_one_variable(train_X, sample_points, densities,
bands=PSF_COLS,
bins=1000,
scale_coeff=2500):
dims = len(bands)
assert train_X.shape[1] == sample_points.shape[0] == dims
assert sample_points.shape[1:] == densities.shape
for i in range(dims):
fig, axes = plt.subplots()
# Make histogram.
axes.hist(train_X[:,i], # We only care about one of the five dimensions.
bins=bins,
label='Actual density')
# Make plot of estimated densities.
x_indices = tuple(0 if a != i else slice(None) # Linspace over
for a in range(dims)) # i-th dimension.
x_indices = (i,) + x_indices # Only take i-th dimension. Due to the
# above others are constant anyway.
x = sample_points[x_indices]
assert len(x.shape) == 1 # Sanity check to ensure it is 1D.
y_sum_axes = tuple(a for a in range(dims) if a != i) # Sum over all dimensions except i.
y = np.sum(densities, axis=y_sum_axes)
y *= scale_coeff
assert y.shape == x.shape
axes.plot(x, y, label='Estimated density')
# Labels
plt.ylabel('Count')
plt.xlabel('Magnitude')
plt.title(bands[i])
plt.legend()
plot_against_one_variable(data, grid, grid_densities)
Explanation: Sanity checks
One variable
First we plot the expected estimated density as a function of each of the five variables. We expect the function and the histogram to match.
End of explanation
def plot_against_two_variables(train_X, sample_points, densities,
bands=PSF_COLS,
bins=1000):
dims = len(bands)
assert train_X.shape[1] == sample_points.shape[0] == dims
assert sample_points.shape[1:] == densities.shape
mins = sample_points[(slice(None),) + (0,) * dims]
maxs = sample_points[(slice(None),) + (-1,) * dims]
plt.figure(figsize=(10, 40))
upto = 1
for i in range(dims):
for j in range(i + 1, dims):
plt.subplot((dims ** 2 - dims) // 2, 2, upto)
upto += 1
z_sum_axes = tuple(a for a in range(dims) if a != i and a != j) # Sum over all dimensions except i.
z = np.sum(densities, axis=z_sum_axes)
extent = [mins[i], maxs[i], mins[j], maxs[j]]
# plt.axis(extent)
plt.imshow(z.T,
cmap='hot',
interpolation='nearest',
extent=extent,
aspect='auto',
origin='lower')
plt.xlabel(bands[i])
plt.ylabel(bands[j])
plt.title('Estimated')
plt.xlim((16, 26))
plt.ylim((16, 24))
plt.subplot((dims ** 2 - dims) // 2, 2, upto)
upto += 1
plt.hexbin(train_X[:,i], train_X[:,j], gridsize=100)
plt.xlabel(bands[i])
plt.ylabel(bands[j])
plt.title('Actual')
plt.xlim((16, 26))
plt.ylim((16, 24))
plot_against_two_variables(data, grid, grid_densities)
Explanation: Two variables
We also plot densities for each pair of variables.
End of explanation |
13,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 3
Step3: Heat currents
Following Ref. [2], we consider two possible definitions of the heat currents from the qubits into the baths.
The so-called bath heat currents are $j_{\text{B}}^K = \partial_t \langle H_{\text{B}}^K \rangle$ and the system heat currents are $j_{\text{S}}^K = \mathrm i\, \langle [H_{\text{S}}, Q_K] X_{\text{B}}^K \rangle$.
As shown in Ref. [2], they can be expressed in terms of the HEOM ADOs as follows
Step4: Note that at long times, we expect $j_{\text{B}}^1 = -j_{\text{B}}^2$ and $j_{\text{S}}^1 = -j_{\text{S}}^2$ due to energy conservation. At long times, we also expect $j_{\text{B}}^1 = j_{\text{S}}^1$ and $j_{\text{B}}^2 = j_{\text{S}}^2$ since the coupling operators commute, $[Q_1, Q_2] = 0$. Hence, all four currents should agree in the long-time limit (up to a sign). This long-time value is what was analyzed in Ref. [2].
Simulations
For our simulations, we will represent the bath spectral densities using the first term of their Padé decompositions, and we will use $7$ levels of the HEOM hierarchy.
Step5: Time Evolution
We fix $J_{12} = 0.1 \epsilon$ (as in Fig. 3(a-ii) of Ref. [2]) and choose the fixed coupling strength $\lambda_1 = \lambda_2 = J_{12}\, /\, (2\epsilon)$ (corresponding to $\bar\zeta = 1$ in Ref. [2]).
Using these values, we will study the time evolution of the system state and the heat currents.
Step6: We first plot $\langle \sigma_z^1 \rangle$ to see the time evolution of the system state
Step7: We find a rather quick thermalization of the system state. For the heat currents, however, it takes a somewhat longer time until they converge to their long-time values
Step8: Steady-state currents
Here, we try to reproduce the HEOM curves in Fig. 3(a) of Ref. [1] by varying the coupling strength and finding the steady state for each coupling strength.
Step9: Create Plot | Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import qutip as qt
from qutip.nonmarkov.heom import HEOMSolver, DrudeLorentzPadeBath, BathExponent
from ipywidgets import IntProgress
from IPython.display import display
# Qubit parameters
epsilon = 1
# System operators
H1 = epsilon / 2 * qt.tensor(qt.sigmaz() + qt.identity(2), qt.identity(2))
H2 = epsilon / 2 * qt.tensor(qt.identity(2), qt.sigmaz() + qt.identity(2))
H12 = lambda J12 : J12 * (qt.tensor(qt.sigmap(), qt.sigmam()) + qt.tensor(qt.sigmam(), qt.sigmap()))
Hsys = lambda J12 : H1 + H2 + H12(J12)
# Cutoff frequencies
gamma1 = 2
gamma2 = 2
# Temperatures
Tbar = 2
Delta_T = 0.01 * Tbar
T1 = Tbar + Delta_T
T2 = Tbar - Delta_T
# Coupling operators
Q1 = qt.tensor(qt.sigmax(), qt.identity(2))
Q2 = qt.tensor(qt.identity(2), qt.sigmax())
Explanation: Example 3: Quantum Heat Transport
Setup
In this notebook, we apply the QuTiP HEOM solver to a quantum system coupled to two bosonic baths and demonstrate how to extract information about the system-bath heat currents from the auxiliary density operators (ADOs).
We consider the setup described in Ref. [1], which consists of two coupled qubits, each connected to its own heat bath.
The Hamiltonian of the qubits is given by
$$ \begin{aligned} H_{\text{S}} &= H_1 + H_2 + H_{12} , \quad\text{ where }\\
H_K &= \frac{\epsilon}{2} \bigl(\sigma_z^K + 1\bigr) \quad (K=1,2) \quad\text{ and }\quad H_{12} = J_{12} \bigl( \sigma_+^1 \sigma_-^2 + \sigma_-^1 \sigma_+^2 \bigr) . \end{aligned} $$
Here, $\sigma^K_{x,y,z,\pm}$ denotes the usual Pauli matrices for the K-th qubit, $\epsilon$ is the eigenfrequency of the qubits and $J_{12}$ the coupling constant.
Each qubit is coupled to its own bath; therefore, the total Hamiltonian is
$$ H_{\text{tot}} = H_{\text{S}} + \sum_{K=1,2} \bigl( H_{\text{B}}^K + Q_K \otimes X_{\text{B}}^K \bigr) , $$
where $H_{\text{B}}^K$ is the free Hamiltonian of the K-th bath and $X_{\text{B}}^K$ its coupling operator, and $Q_K = \sigma_x^K$ are the system coupling operators.
We assume that the bath spectral densities are given by Drude distributions
$$ J_K(\omega) = \frac{2 \lambda_K \gamma_K \omega}{\omega^2 + \gamma_K^2} , $$
where $\lambda_K$ is the free coupling strength and $\gamma_K$ the cutoff frequency.
We begin by defining the system and bath parameters.
We use the parameter values from Fig. 3(a) of Ref. [1].
Note that we set $\hbar$ and $k_B$ to one and we will measure all frequencies and energies in units of $\epsilon$.
[1] Kato and Tanimura, J. Chem. Phys. 143, 064107 (2015).
End of explanation
def bath_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
    """Bath heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
        The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
    """
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
cI0 = 0 # imaginary part of bath auto-correlation function (t=0)
for label in l1_labels:
[exp] = ado_state.exps(label)
result += exp.vk * (coupling_op * ado_state.extract(label)).tr()
if exp.type == BathExponent.types['I']:
cI0 += exp.ck
elif exp.type == BathExponent.types['RI']:
cI0 += exp.ck2
result -= 2 * cI0 * (coupling_op * coupling_op * ado_state.rho).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
def system_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
    """System heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
        The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
    """
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
for label in l1_labels:
result += (a_op * ado_state.extract(label)).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
Explanation: Heat currents
Following Ref. [2], we consider two possible definitions of the heat currents from the qubits into the baths.
The so-called bath heat currents are $j_{\text{B}}^K = \partial_t \langle H_{\text{B}}^K \rangle$ and the system heat currents are $j_{\text{S}}^K = \mathrm i\, \langle [H_{\text{S}}, Q_K] X_{\text{B}}^K \rangle$.
As shown in Ref. [2], they can be expressed in terms of the HEOM ADOs as follows:
$$ \begin{aligned}
j_{\text{B}}^K &= \sum_{\substack{\mathbf n\\ \text{Level 1}\\ \text{Bath $K$}}} \nu[\mathbf n] \operatorname{tr}\bigl[ Q_K \rho_{\mathbf n} \bigr] - 2 C_I^K(0) \operatorname{tr}\bigl[ Q_K^2 \rho \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] , \\[.5em]
j_{\text{S}}^K &= \mathrm i \sum_{\substack{\mathbf n\\ \text{Level 1}\\ \text{Bath $K$}}} \operatorname{tr}\bigl[ [H_{\text{S}}, Q_K]\, \rho_{\mathbf n} \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] .
\end{aligned} $$
The sums run over all level-$1$ multi-indices $\mathbf n$ with one excitation corresponding to the K-th bath, $\nu[\mathbf n]$ is the corresponding (negative) exponent of the bath auto-correlation function $C^K(t)$, and $\Gamma_{\text{T}}^K$ is the Ishizaki-Tanimura terminator (i.e., a correction term accounting for the error introduced by approximating the correlation function with a finite sum of exponential terms).
In the expression for the bath heat currents, we left out terms involving $[Q_1, Q_2]$, which is zero in this example.
[2] Kato and Tanimura, J. Chem. Phys. 145, 224105 (2016).
In QuTiP, these currents can be conveniently calculated as follows:
End of explanation
Nk = 1
NC = 7
options = qt.Options(nsteps=1500, store_states=False, atol=1e-12, rtol=1e-12)
Explanation: Note that at long times, we expect $j_{\text{B}}^1 = -j_{\text{B}}^2$ and $j_{\text{S}}^1 = -j_{\text{S}}^2$ due to energy conservation. At long times, we also expect $j_{\text{B}}^1 = j_{\text{S}}^1$ and $j_{\text{B}}^2 = j_{\text{S}}^2$ since the coupling operators commute, $[Q_1, Q_2] = 0$. Hence, all four currents should agree in the long-time limit (up to a sign). This long-time value is what was analyzed in Ref. [2].
Simulations
For our simulations, we will represent the bath spectral densities using the first term of their Padé decompositions, and we will use $7$ levels of the HEOM hierarchy.
End of explanation
# fix qubit-qubit and qubit-bath coupling strengths
J12 = 0.1
lambda1 = J12 / 2
lambda2 = J12 / 2
# choose arbitrary initial state
rho0 = qt.tensor(qt.identity(2), qt.identity(2)) / 4
# simulation time span
tlist = np.linspace(0, 50, 250)
bath1 = DrudeLorentzPadeBath(Q1, lambda1, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, lambda2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
result = solver.run(rho0, tlist, e_ops=[qt.tensor(qt.sigmaz(), qt.identity(2)),
lambda t, ado: bath_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: bath_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta),
lambda t, ado: system_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: system_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta)])
Explanation: Time Evolution
We fix $J_{12} = 0.1 \epsilon$ (as in Fig. 3(a-ii) of Ref. [2]) and choose the fixed coupling strength $\lambda_1 = \lambda_2 = J_{12}\, /\, (2\epsilon)$ (corresponding to $\bar\zeta = 1$ in Ref. [2]).
Using these values, we will study the time evolution of the system state and the heat currents.
End of explanation
fig, axes = plt.subplots(figsize=(8,8))
axes.plot(tlist, result.expect[0], 'r', linewidth=2)
axes.set_xlabel('t', fontsize=28)
axes.set_ylabel(r"$\langle \sigma_z^1 \rangle$", fontsize=28)
pass
Explanation: We first plot $\langle \sigma_z^1 \rangle$ to see the time evolution of the system state:
End of explanation
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 8))
ax1.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax1.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax1.set_xlabel('t', fontsize=28)
ax1.set_ylabel('j', fontsize=28)
ax1.set_ylim((-0.05, 0.05))
ax1.legend(loc=0, fontsize=12)
ax2.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax2.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax2.set_xlabel('t', fontsize=28)
ax2.set_xlim((20, 50))
ax2.set_ylim((0, 0.0002))
ax2.legend(loc=0, fontsize=12)
pass
Explanation: We find a rather quick thermalization of the system state. For the heat currents, however, it takes a somewhat longer time until they converge to their long-time values:
End of explanation
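Before setting up the steady-state sweep, a small optional check makes this convergence statement concrete (a sketch using the result object computed above, where the four currents were stored as expectation values 1-4):
# Optional sanity check: at the final time all four currents agree up to sign.
j_b1 = -np.real(result.expect[1][-1])  # BHC, bath 1 -> system
j_b2 = np.real(result.expect[2][-1])   # BHC, system -> bath 2
j_s1 = -np.real(result.expect[3][-1])  # SHC, bath 1 -> system
j_s2 = np.real(result.expect[4][-1])   # SHC, system -> bath 2
print(j_b1, j_b2, j_s1, j_s2)          # expected to be approximately equal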
def heat_currents(J12, zeta_bar):
bath1 = DrudeLorentzPadeBath(Q1, zeta_bar * J12 / 2, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, zeta_bar * J12 / 2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
_, steady_ados = solver.steady_state()
return bath_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
bath_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta), \
system_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
system_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta)
# Define number of points to use for final plot
plot_points = 100
progress = IntProgress(min=0, max=(3*plot_points))
display(progress)
zeta_bars = []
j1s = [] # J12 = 0.01
j2s = [] # J12 = 0.1
j3s = [] # J12 = 0.5
# --- J12 = 0.01 ---
NC = 7
# xrange chosen so that 20 is maximum, centered around 1 on a log scale
for zb in np.logspace(-np.log(20), np.log(20), plot_points, base=np.e):
j1, _, _, _ = heat_currents(0.01, zb) # the four currents are identical in the steady state
zeta_bars.append(zb)
j1s.append(j1)
progress.value += 1
# --- J12 = 0.1 ---
for zb in zeta_bars:
# higher HEOM cut-off is necessary for large coupling strength
if zb < 10:
NC = 7
else:
NC = 12
j2, _, _, _ = heat_currents(0.1, zb)
j2s.append(j2)
progress.value += 1
# --- J12 = 0.5 ---
for zb in zeta_bars:
if zb < 5:
NC = 7
elif zb < 10:
NC = 15
else:
NC = 20
j3, _, _, _ = heat_currents(0.5, zb)
j3s.append(j3)
progress.value += 1
progress.close()
np.save('data/qhb_zb.npy', zeta_bars)
np.save('data/qhb_j1.npy', j1s)
np.save('data/qhb_j2.npy', j2s)
np.save('data/qhb_j3.npy', j3s)
Explanation: Steady-state currents
Here, we try to reproduce the HEOM curves in Fig. 3(a) of Ref. [1] by sweeping the qubit-bath coupling strength and computing the steady-state current at each value.
End of explanation
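The heat_currents helper defined above can also be called for a single parameter point; as a usage sketch (not needed for the figure below, and somewhat slow since it solves for a full steady state), the four steady-state currents for e.g. J12 = 0.1 and zeta_bar = 1 should agree up to sign:
j_b1, j_b2, j_s1, j_s2 = heat_currents(0.1, 1.0)
print(np.real(j_b1), -np.real(j_b2), np.real(j_s1), -np.real(j_s2))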
zeta_bars = np.load('data/qhb_zb.npy')
j1s = np.load('data/qhb_j1.npy')
j2s = np.load('data/qhb_j2.npy')
j3s = np.load('data/qhb_j3.npy')
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
fig, axes = plt.subplots(figsize=(12,7))
axes.plot(zeta_bars, -1000 * 100 * np.real(j1s), 'b', linewidth=2, label=r"$J_{12} = 0.01\, \epsilon$")
axes.plot(zeta_bars, -1000 * 10 * np.real(j2s), 'r--', linewidth=2, label=r"$J_{12} = 0.1\, \epsilon$")
axes.plot(zeta_bars, -1000 * 2 * np.real(j3s), 'g-.', linewidth=2, label=r"$J_{12} = 0.5\, \epsilon$")
axes.set_xscale('log')
axes.set_xlabel(r"$\bar\zeta$", fontsize=30)
axes.set_xlim((zeta_bars[0], zeta_bars[-1]))
axes.set_ylabel(r"$j_{\mathrm{ss}}\; /\; (\epsilon J_{12}) \times 10^3$", fontsize=30)
axes.set_ylim((0, 2))
axes.legend(loc=0)
#fig.savefig("figures/figHeat.pdf")
pass
from qutip.ipynbtools import version_table
version_table()
Explanation: Create Plot
End of explanation |
13,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial / How to use
In this tutorial we create a (simplified) synthetic galaxy image from scratch, along with its associated segmentation map, and then run the statmorph code on it.
Setting up
We import some Python packages first. If you are missing any of these, please see the the Installation section of the README.
Step1: Creating a model galaxy image
We assume that the image size is 240x240 pixels, and that the "true" light distribution is described by a 2D Sersic profile with the following parameters
Step2: Convolving with a PSF
In practice, every astronomical image is the convolution of a "true" image with a point spread function (PSF), which depends on the optics of the telescope, atmospheric conditions, etc. Here we assume that the PSF is a simple 2D Gaussian distribution
Step3: Now we convolve the image with the PSF.
Step4: Adding noise
Here we add homogeneous Gaussian background noise, optimistically assuming that the signal-to-noise ratio (S/N) is 100 at the effective radius (where we had defined the Sérsic profile amplitude as 1.0). For simplicity, we do not consider Poisson noise associated with the source itself.
Step5: Gain and weight maps
The code will ask for one of two input arguments
Step6: Creating a segmentation map
Besides the image itself and the weight map/gain, the only other required argument is the segmentation map, which labels the pixels belonging to different sources. It is usually generated by specialized tools such as SExtractor, but here we create it using photutils
Step7: Although statmorph is designed to process all the sources labeled by the segmentation map, in this example we only focus on the main (largest) source found in the image.
Step8: We regularize a bit the shape of the segmentation map
Step9: Running statmorph
Measuring morphological parameters
Now that we have all the required data, we are ready to measure the morphology of the source just created. Note that we include the PSF as a keyword argument. In principle, this results in more correct Sersic profile fits, although it can also make the code run slower, depending on the size of the PSF.
Step10: In general, source_morphs is a list of objects, each corresponding to a labeled source in the image. Here we focus on the first (and only) labeled source.
Step11: Now we print some of the morphological properties just calculated
Step12: Note that the fitted Sersic profile is in pretty good agreement with the "true" Sersic profile that we originally defined (n=2.5, r_eff=20, etc.). However, such agreement tends to deteriorate somewhat at higher noise levels and larger Sersic indices (not to mention that real galaxies are not always well described by Sersic profiles).
Other morphological measurements that are more general and more robust to noise, which are also calculated by statmorph, include the Gini-M20 (Lotz et al. 2004), CAS (Conselice 2003) and MID (Freeman et al. 2013) statistics, as well as the outer asymmetry (Wen et al. 2014) and shape asymmetry (Pawlik et al. 2016).
Also note that statmorph calculates two different "bad measurement" flags (where 0 means good measurement and 1 means bad)
Step13: Examining other morphological diagnostics
For convenience, we also provide a make_figure function that can be used to visualize some of the basic morphological measurements carried out by statmorph. This creates a multi-panel figure analogous to Fig. 4 from Rodriguez-Gomez et al. (2019). | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage as ndi
from astropy.visualization import simple_norm
from astropy.modeling import models
from astropy.convolution import convolve
import photutils
import time
import statmorph
%matplotlib inline
Explanation: Tutorial / How to use
In this tutorial we create a (simplified) synthetic galaxy image from scratch, along with its associated segmentation map, and then run the statmorph code on it.
Setting up
We import some Python packages first. If you are missing any of these, please see the the Installation section of the README.
End of explanation
ny, nx = 240, 240
y, x = np.mgrid[0:ny, 0:nx]
sersic_model = models.Sersic2D(
amplitude=1, r_eff=20, n=2.5, x_0=120.5, y_0=96.5,
ellip=0.5, theta=-0.5)
image = sersic_model(x, y)
plt.imshow(image, cmap='gray', origin='lower',
norm=simple_norm(image, stretch='log', log_a=10000))
Explanation: Creating a model galaxy image
We assume that the image size is 240x240 pixels, and that the "true" light distribution is described by a 2D Sersic profile with the following parameters:
End of explanation
size = 20 # on each side from the center
sigma_psf = 2.0
y, x = np.mgrid[-size:size+1, -size:size+1]
psf = np.exp(-(x**2 + y**2)/(2.0*sigma_psf**2))
psf /= np.sum(psf)
plt.imshow(psf, origin='lower', cmap='gray')
Explanation: Convolving with a PSF
In practice, every astronomical image is the convolution of a "true" image with a point spread function (PSF), which depends on the optics of the telescope, atmospheric conditions, etc. Here we assume that the PSF is a simple 2D Gaussian distribution:
End of explanation
image = convolve(image, psf)
plt.imshow(image, cmap='gray', origin='lower',
norm=simple_norm(image, stretch='log', log_a=10000))
Explanation: Now we convolve the image with the PSF.
End of explanation
np.random.seed(1)
snp = 100.0
image += (1.0 / snp) * np.random.standard_normal(size=(ny, nx))
plt.imshow(image, cmap='gray', origin='lower',
norm=simple_norm(image, stretch='log', log_a=10000))
Explanation: Adding noise
Here we add homogeneous Gaussian background noise, optimistically assuming that the signal-to-noise ratio (S/N) is 100 at the effective radius (where we had defined the Sérsic profile amplitude as 1.0). For simplicity, we do not consider Poisson noise associated with the source itself.
End of explanation
gain = 10000.0
Explanation: Gain and weight maps
The code will ask for one of two input arguments: (1) a weight map, which is a 2D array (of the same size as the input image) representing one standard deviation at each pixel value, or (2) the gain, a scalar that can be multiplied by the science image to obtain the number of electron counts per pixel. The gain parameter is used internally by statmorph to calculate the weight map.
Here we assume, also somewhat optimistically, that there is an average of 10,000 electron counts/pixel at the effective radius (where we had defined the amplitude as 1.0), so that the gain is 10,000.
End of explanation
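As an aside, the same information could be supplied to statmorph as an explicit weight map instead of the gain (a sketch, not used in the rest of this tutorial); for our homogeneous background noise this is simply a constant array of the per-pixel 1-sigma noise:
weightmap = np.full((ny, nx), 1.0 / snp)  # one standard deviation per pixel
# It would then replace the gain keyword in the call further below, e.g.
# statmorph.source_morphology(image, segmap, weightmap=weightmap, psf=psf)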
threshold = photutils.detect_threshold(image, 1.5)
npixels = 5 # minimum number of connected pixels
segm = photutils.detect_sources(image, threshold, npixels)
Explanation: Creating a segmentation map
Besides the image itself and the weight map/gain, the only other required argument is the segmentation map, which labels the pixels belonging to different sources. It is usually generated by specialized tools such as SExtractor, but here we create it using photutils:
End of explanation
# Keep only the largest segment
label = np.argmax(segm.areas) + 1
segmap = segm.data == label
plt.imshow(segmap, origin='lower', cmap='gray')
Explanation: Although statmorph is designed to process all the sources labeled by the segmentation map, in this example we only focus on the main (largest) source found in the image.
End of explanation
segmap_float = ndi.uniform_filter(np.float64(segmap), size=10)
segmap = segmap_float > 0.5
plt.imshow(segmap, origin='lower', cmap='gray')
Explanation: We regularize a bit the shape of the segmentation map:
End of explanation
start = time.time()
source_morphs = statmorph.source_morphology(
image, segmap, gain=gain, psf=psf)
print('Time: %g s.' % (time.time() - start))
Explanation: Running statmorph
Measuring morphological parameters
Now that we have all the required data, we are ready to measure the morphology of the source just created. Note that we include the PSF as a keyword argument. In principle, this results in more correct Sersic profile fits, although it can also make the code run slower, depending on the size of the PSF.
End of explanation
morph = source_morphs[0]
Explanation: In general, source_morphs is a list of objects, each corresponding to a labeled source in the image. Here we focus on the first (and only) labeled source.
End of explanation
print('xc_centroid =', morph.xc_centroid)
print('yc_centroid =', morph.yc_centroid)
print('ellipticity_centroid =', morph.ellipticity_centroid)
print('elongation_centroid =', morph.elongation_centroid)
print('orientation_centroid =', morph.orientation_centroid)
print('xc_asymmetry =', morph.xc_asymmetry)
print('yc_asymmetry =', morph.yc_asymmetry)
print('ellipticity_asymmetry =', morph.ellipticity_asymmetry)
print('elongation_asymmetry =', morph.elongation_asymmetry)
print('orientation_asymmetry =', morph.orientation_asymmetry)
print('rpetro_circ =', morph.rpetro_circ)
print('rpetro_ellip =', morph.rpetro_ellip)
print('rhalf_circ =', morph.rhalf_circ)
print('rhalf_ellip =', morph.rhalf_ellip)
print('r20 =', morph.r20)
print('r80 =', morph.r80)
print('Gini =', morph.gini)
print('M20 =', morph.m20)
print('F(G, M20) =', morph.gini_m20_bulge)
print('S(G, M20) =', morph.gini_m20_merger)
print('sn_per_pixel =', morph.sn_per_pixel)
print('C =', morph.concentration)
print('A =', morph.asymmetry)
print('S =', morph.smoothness)
print('sersic_amplitude =', morph.sersic_amplitude)
print('sersic_rhalf =', morph.sersic_rhalf)
print('sersic_n =', morph.sersic_n)
print('sersic_xc =', morph.sersic_xc)
print('sersic_yc =', morph.sersic_yc)
print('sersic_ellip =', morph.sersic_ellip)
print('sersic_theta =', morph.sersic_theta)
print('sky_mean =', morph.sky_mean)
print('sky_median =', morph.sky_median)
print('sky_sigma =', morph.sky_sigma)
print('flag =', morph.flag)
print('flag_sersic =', morph.flag_sersic)
Explanation: Now we print some of the morphological properties just calculated:
End of explanation
ny, nx = image.shape
y, x = np.mgrid[0:ny, 0:nx]
fitted_model = statmorph.ConvolvedSersic2D(
amplitude=morph.sersic_amplitude,
r_eff=morph.sersic_rhalf,
n=morph.sersic_n,
x_0=morph.sersic_xc,
y_0=morph.sersic_yc,
ellip=morph.sersic_ellip,
theta=morph.sersic_theta)
fitted_model.set_psf(psf) # required when using ConvolvedSersic2D
image_model = fitted_model(x, y)
bg_noise = (1.0 / snp) * np.random.standard_normal(size=(ny, nx))
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(131)
ax.imshow(image, cmap='gray', origin='lower',
norm=simple_norm(image, stretch='log', log_a=10000))
ax.set_title('Original image')
ax = fig.add_subplot(132)
ax.imshow(image_model + bg_noise, cmap='gray', origin='lower',
norm=simple_norm(image, stretch='log', log_a=10000))
ax.set_title('Fitted model')
ax = fig.add_subplot(133)
residual = image - image_model
ax.imshow(residual, cmap='gray', origin='lower',
norm=simple_norm(residual, stretch='linear'))
ax.set_title('Residual')
Explanation: Note that the fitted Sersic profile is in pretty good agreement with the "true" Sersic profile that we originally defined (n=2.5, r_eff=20, etc.). However, such agreement tends to deteriorate somewhat at higher noise levels and larger Sersic indices (not to mention that real galaxies are not always well described by Sersic profiles).
Other morphological measurements that are more general and more robust to noise, which are also calculated by statmorph, include the Gini-M20 (Lotz et al. 2004), CAS (Conselice 2003) and MID (Freeman et al. 2013) statistics, as well as the outer asymmetry (Wen et al. 2014) and shape asymmetry (Pawlik et al. 2016).
Also note that statmorph calculates two different "bad measurement" flags (where 0 means good measurement and 1 means bad):
flag : indicates a problem with the basic morphological measurements.
flag_sersic : indicates if there was a problem/warning during the Sersic profile fitting.
In general, flag==0 should always be enforced, while flag_sersic==0 should only be used when interested in Sersic fits (which might fail for merging galaxies and other "irregular" objects).
Examining the fitted Sersic profile
Finally, we can reconstruct the fitted Sersic profile and examine its residual. Here we used the ConvolvedSersic2D class defined in statmorph.
End of explanation
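Returning briefly to the quality flags discussed above, a typical selection in practice looks like the following sketch (with a single labeled source here, both lists trivially have length one):
# Keep sources with reliable basic measurements; additionally require a good
# Sersic fit when the Sersic parameters themselves are of interest.
good = [m for m in source_morphs if m.flag == 0]
good_sersic = [m for m in good if m.flag_sersic == 0]
print(len(good), len(good_sersic))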
from statmorph.utils.image_diagnostics import make_figure
fig = make_figure(morph)
fig.savefig('tutorial.png', dpi=150)
plt.close(fig)
Explanation: Examining other morphological diagnostics
For convenience, we also provide a make_figure function that can be used to visualize some of the basic morphological measurements carried out by statmorph. This creates a multi-panel figure analogous to Fig. 4 from Rodriguez-Gomez et al. (2019).
End of explanation |
13,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
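# Illustrative example for the property above only -- BOOLEAN properties take
# one of the Python literals listed above, e.g.:
# DOC.set_value(True)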
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
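# Illustrative example for the property above only -- the value is a plain
# float in Hz; a 94 GHz (W-band) cloud radar, a hypothetical instrument choice,
# would be entered as:
# DOC.set_value(94.0e9)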
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
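# Illustrative example for the property above only -- present-day total solar
# irradiance is about 1361 W m-2, so a hypothetical fixed entry could be:
# DOC.set_value(1361.0)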
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
13,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
example below.
See diagram below.
Channel configuration ##
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
N.B., Only the trigger for the first ADC channel is an explicit
software trigger. All remaining triggers occur through minor-loop DMA
channel linking from channel $ii$ to channel $i$.
After each scan through all ADC channels is complete, the ADC readings are
scattered using the selected "scatter" DMA channel through a major-loop link
between DMA channel $ii$ and the "scatter" channel.
<img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" />
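To make the intended memory layout concrete, here is a small host-side sketch (plain Python/NumPy, no hardware involved; the array names simply mirror those used in the code below and the loop body is only a stand-in for the real conversions) of what one software-triggered scan followed by the scatter step produces:
import numpy as np
channel_count = 5                      # len(channel_sc1as) in the code below
sample_count = 10                      # number of scans to record
adc_result = np.zeros(channel_count, dtype='uint16')                 # filled once per scan by channel ii
samples = np.zeros((channel_count, sample_count), dtype='uint16')    # one row per ADC channel
for scan in range(sample_count):       # one iteration per trigger of channel i
    adc_result[:] = scan               # stand-in for the real ADC conversions
    samples[:, scan] = adc_result      # what the scatter TCD chain writes (stride DOFF = 2 * sample_count bytes)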
Step1: Configure ADC sample rate, etc.
Step2: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
Step3: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
Step4: Allocate and initialize device arrays
SC1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
Step5: Configure DMA channel $i$
Step6: Configure DMA channel $ii$
Step7: Trigger sample scan across selected ADC channels | Python Code:
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
dma_channel_scatter = 0
dma_channel_i = 1
dma_channel_ii = 2
Explanation: Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
example below.
See diagram below.
Channel configuration ##
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
N.B., Only the trigger for the first ADC channel is an explicit
software trigger. All remaining triggers occur through minor-loop DMA
channel linking from channel $ii$ to channel $i$.
After each scan through all ADC channels is complete, the ADC readings are
scattered using the selected "scatter" DMA channel through a major-loop link
between DMA channel $ii$ and the "scatter" channel.
<img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" />
Device
Connect to device
End of explanation
import arduino_helpers.hardware.teensy as teensy
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
Explanation: Configure ADC sample rate, etc.
End of explanation
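# Optional read-back check (illustrative only; it mirrors the register read-back
# used a few cells below) to confirm that the averaging/resolution settings
# written above took effect:
# resolve_field_values(ADC.Registers.FromString(
#     proxy.read_adc_registers(teensy.ADC_0).tostring()))[['full_name', 'value']]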
DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(dma_channel_ii,
DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
TRIG=False,
ENBL=True))
# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=dma_channel_ii))
proxy.enableDMA(teensy.ADC_0)
proxy.DMA_registers().loc['']
dmamux = DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(dma_channel_ii).tostring())
resolve_field_values(dmamux)[['full_name', 'value']]
adc0 = ADC.Registers.FromString(proxy.read_adc_registers(teensy.ADC_0).tostring())
resolve_field_values(adc0)[['full_name', 'value']].loc[['CFG2', 'SC1A', 'SC3']]
Explanation: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
End of explanation
import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
# The number of samples to record for each ADC channel.
sample_count = 10
teensy_analog_channels = ['A0', 'A1', 'A0', 'A3', 'A0']
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)])
for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[teensy_analog_channels].tolist(), dtype='uint32')
Explanation: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
End of explanation
proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))
# Allocate source array
samples_addr = proxy.mem_alloc(sample_count * N)
tcds_addr = proxy.mem_aligned_alloc(32, sample_count * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(sample_count)]
hw_tcd_addrs = [hw_tcds_addr + 32 * i for i in xrange(sample_count)]
# Fill result array with zeros
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=channel_sc1as.size * 2,
SADDR=int(adc_result_addr),
SOFF=2,
SLAST=-channel_sc1as.size * 2,
DADDR=int(samples_addr),
DOFF=2 * sample_count,
DLASTSGA=int(tcd_addrs[1]),
CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))
# Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)
# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(sample_count):
tcd_i = tcd0.copy()
tcd_i['SADDR'] = adc_result_addr
tcd_i['DADDR'] = samples_addr + 2 * i
tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
tcd_i['CSR'] |= (1 << 4)
proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())
# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcd_addrs[dma_channel_scatter],
tcd0.tostring())
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) *
channel_sc1as.dtype.itemsize).view('uint32')
Explanation: Allocate and initialize device arrays
SD1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
End of explanation
ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(adc_sda1s_addr),
SOFF=4,
SLAST=-channel_sc1as.size * 4,
DADDR=int(ADC0_SC1A),
DOFF=0,
DLASTSGA=0,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(dma_channel_i, sda1_tcd_msg)
Explanation: Configure DMA channel $i$
End of explanation
ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=2,
SADDR=ADC0_RA,
SOFF=0,
SLAST=0,
DADDR=int(adc_result_addr),
DOFF=2,
DLASTSGA=-channel_sc1as.size * 2,
CSR=DMA.R_TCD_CSR(START=0, DONE=False,
MAJORELINK=True,
MAJORLINKCH=dma_channel_scatter))
proxy.update_dma_TCD(dma_channel_ii, tcd_msg)
Explanation: Configure DMA channel $ii$
End of explanation
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
print 'ADC results:'
for i in xrange(sample_count):
proxy.update_dma_registers(DMA.Registers(SSRT=dma_channel_i))
# Display converted ADC values (one value per channel in `channel_sd1as` list).
print ' Iteration %s:' % i, proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print ''
print 'Samples by channel:'
# Trigger once per chunk
# for i in xrange(sample_count):
# proxy.update_dma_registers(DMA.Registers(SSRT=0))
device_dst_data = proxy.mem_cpy_device_to_host(samples_addr, sample_count * N)
pd.DataFrame(device_dst_data.view('uint16').reshape(-1, sample_count).T,
columns=teensy_analog_channels)
Explanation: Trigger sample scan across selected ADC channels
End of explanation |
13,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers
Step4: Project 2
Step5: Project 3
Step6: Understanding Neural Noise
Step7: Project 4
Step8: Analyzing Inefficiencies in our Network | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text into Numbers
End of explanation
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
Explanation: Project 2: Creating the Input/Output Data
End of explanation
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: Understanding Neural Noise
End of explanation
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
self.layer_1 = np.zeros((1, self.hidden_nodes))
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
self.layer_1 += self.weights_0_1[self.word2index[word]]
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
#layer_1 = self.layer_0.dot(self.weights_0_1)
layer_1 = self.layer_1
if (i==2):
print(layer_1.shape)
print(layer_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Project 4: Reducing Noise in our Input Data
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
layer_1
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
Explanation: Analyzing Inefficiencies in our Network
End of explanation |
13,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kinetic Curve Simulation Fit
<p class=lead>This notebook performs fits of simulated Kinetic Curves for different kinetics parameters. A single [template notebook](Simulated Kinetic Curve Fit - Template.ipynb) is executed several times, once for each set of parameters.
<p>
## Boilerplate
The module `nbrun.py` needs to be in the current folder
Step1: Execute notebooks
Step2: 8-spot kinetics simulation
Step3: 1-spot kinetics simulation
The empirical variance of 1-spot measurements are (see 1-spot bubble-bubble kinetics - Summary) | Python Code:
from nbrun import run_notebook
Explanation: Kinetic Curve Simulation Fit
<p class=lead>This notebook performs fits of simulated Kinetic Curves for different kinetics parameters. A single [template notebook](Simulated Kinetic Curve Fit - Template.ipynb) is executed several times, once for each set of parameters.
<p>
## Boilerplate
The module `nbrun.py` needs to be in the current folder:
End of explanation
nb_name = 'Simulated Kinetic Curve Fit - Template'
out_path = 'out_notebooks'
import numpy as np
Explanation: Execute notebooks
End of explanation
params = dict(
sigma = 0.016, # experimental 8-spot std. dev.
time_window = 30,
time_step = 5,
time_start = -900,
time_stop = 900,
decimation = 20,
t0_vary = True,
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0), # time origin
num_sim_cycles = 1000,
taus = (5, 10, 30, 60))
run_notebook(nb_name, nb_suffix='-out-multi-spot-t0_vary', out_path=out_path,
nb_kwargs=params)
params = dict(
sigma = 0.016, # experimental 8-spot std. dev.
time_window = 30,
time_step = 5,
time_start = -900,
time_stop = 900,
decimation = 20,
t0_vary = False,
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0), # time origin
num_sim_cycles = 1000,
taus = (5, 10, 30, 60))
run_notebook(nb_name, nb_suffix='-out-multi-spot-t0_novary', out_path=out_path,
nb_kwargs=params)
Explanation: 8-spot kinetics simulation
End of explanation
evar = [6.62, 3.94, 5.3]
np.mean(evar)
params = dict(
sigma = 0.053, # noise std. dev.
time_window = 180,
time_step = 10,
time_start = -900,
time_stop = 900,
decimation = 20,
t0_vary = True,
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0), # time origin
num_sim_cycles = 1000,
taus = (30, 60, 120, 240))
run_notebook(nb_name, nb_suffix='-out-single-spot-t0_vary', out_path=out_path,
nb_kwargs=params)
params = dict(
sigma = 0.053, # noise std. dev.
time_window = 180,
time_step = 10,
time_start = -900,
time_stop = 900,
decimation = 20,
t0_vary = False,
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0), # time origin
num_sim_cycles = 1000,
taus = (30, 60, 120, 240))
run_notebook(nb_name, nb_suffix='-out-single-spot-t0_novary', out_path=out_path,
nb_kwargs=params)
Explanation: 1-spot kinetics simulation
The empirical variance of 1-spot measurements are (see 1-spot bubble-bubble kinetics - Summary):
End of explanation |
13,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of Lahti et al. 2014
Write a function that takes as input a dictionary of constraints and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling
Step2: Now write the function. For each row in the file, you need to make sure all the constraints are matching the desired ones. If so, keep track of the BMI group using a dictionary.
Step3: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling
Step4: Repeat this analysis for all genera, and for the records having Time = 0.
A simple function for extracting all the genera in the database
Step5: Testing
Step6: Now use the function we wrote above to print the results for all genera | Python Code:
import csv
Explanation: Solution of Lahti et al. 2014
Write a function that takes as input a dictionary of constraints and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling:
get_BMI_count({'Age': '28', 'Sex': 'female'})
should return:
{'NA': 3, 'lean': 8, 'overweight': 2, 'underweight': 1}
Import csv for reading the file.
End of explanation
def get_BMI_count(dict_constraints):
Take as input a dictionary of constraints
for example, {'Age': '28', 'Sex': 'female'}
And return the count of the various groups of BMI
# We use a dictionary to store the results
BMI_count = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_count.keys():
# If we've seen it before, add one record to the count
BMI_count[my_BMI] = BMI_count[my_BMI] + 1
else:
# If not, initialize at 1
BMI_count[my_BMI] = 1
return BMI_count
get_BMI_count({'Nationality': 'US', 'Sex': 'female'})
Explanation: Now write the function. For each row in the file, you need to make sure all the constraints are matching the desired ones. If so, keep track of the BMI group using a dictionary.
End of explanation
import scipy # For log10
def get_abundance_by_BMI(dict_constraints, genus = 'Aerococcus'):
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
# Now let's open the other file, and keep track of the abundance of the genus for each
# BMI group
abundance = {}
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
# Finally, calculate means, and print results
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'},
'Clostridium difficile et rel.')
Explanation: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling:
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'}, 'Clostridium difficile et rel.')
should return:
```
Abundance of Clostridium difficile et rel. In sub-population:
Nationality -> US
Time -> 0
3.08 NA
3.31 underweight
3.84 lean
2.89 overweight
3.31 obese
3.45 severeobese
```
End of explanation
def get_all_genera():
with open('../data/Lahti2014/HITChip.tab') as f:
header = f.readline().strip()
genera = header.split('\t')[1:]
return genera
Explanation: Repeat this analysis for all genera, and for the records having Time = 0.
A simple function for extracting all the genera in the database:
End of explanation
get_all_genera()[:6]
Explanation: Testing:
End of explanation
for g in get_all_genera()[:5]:
get_abundance_by_BMI({'Time': '0'}, g)
Explanation: Now use the function we wrote above to print the results for all genera:
End of explanation |
13,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Replace NaN with mode
Use sample builtin function to create sample from matrix
Count of Matching Values in two Matrices/Vectors
Cross Validation
Value-based join of two Matrices
Filter Matrix to include only Frequent Column Values
Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets
Find and remove duplicates in columns or rows
Set based Indexing
Group by Aggregate using Linear Algebra
Cumulative Summation with Decay Multiplier
Invert Lower Triangular Matrix
Step2: Replace NaN with mode<a id="NaN2Mode" />
This functions replaces NaN in column with mode of column
Step4: Use sample builtin function to create sample from matrix<a id="sample" />
Use sample() function, create permutation matrix using table(), and pull sample from X.
Step6: Count of Matching Values in two Matrices/Vectors<a id="MatchingRows" />
Given two matrices/vectors X and Y, get a count of the rows where X and Y have the same value.
Step8: Cross Validation<a id="CrossValidation" />
Perform kFold cross validation by running in parallel fold creation, training algorithm, test algorithm, and evaluation.
Step10: Value-based join of two Matrices<a id="JoinMatrices"/>
Given matrix M1 and M2, join M1 on column 2 with M2 on column 2, and return matching rows of M1.
Step12: Filter Matrix to include only Frequent Column Values <a id="FilterMatrix"/>
Given a matrix, filter the matrix to only include rows with column values that appear more often than MinFreq.
Step14: Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets<a id="Construct_sparse_Matrix"></a>
Given rowIndex, colIndex, and values as column vectors, construct (sparse) matrix.
Step16: Find and remove duplicates in columns or rows<a id="Find_and_remove_duplicates"></a>
Assuming values are sorted.
Step18: No assumptions on values.
Step20: Order the values and then remove duplicates.
Step22: Set based Indexing<a id="Set_based_Indexing"></a>
Given a matrix X, and a indicator matrix J with indices into X.
Use J to perform operation on X, e.g. add value 10 to cells in X indicated by J.
Step24: Group by Aggregate using Linear Algebra<a id="Multi_column_Sorting"></a>
Given a matrix PCV as (Position, Category, Value), sort PCV by category, and within each category by value in descending order. Create indicator vector for category changes, create distinct categories, and perform linear algebra operations.
Step26: Cumulative Summation with Decay Multiplier<a id="CumSum_Product"></a>
Given matrix X, compute
Step28: In this example we use cumsum_prod for cumulative summation with "breaks", that is, multiple cumulative summations in one.
Step30: In this example, we copy selected rows downward to all consecutive non-selected rows.
Step32: This is a naive implementation of cumulative summation with decay multiplier.
Step35: There is a significant performance difference between the <b>naive</b> implementation and the <b>tricky</b> implementation.
Step38: Invert Lower Triangular Matrix<a id="Invert_Lower_Triangular_Matrix"></a>
In this example, we invert a lower triangular matrix using a the following divide-and-conquer approach. Given lower triangular matrix L, we compute its inverse X which is also lower triangular by splitting both matrices in the middle into 4 blocks (in a 2x2 fashion), and multiplying them together to get the identity matrix
Step40: This is a naive implementation of inverting a lower triangular matrix.
Step42: The naive implementation is significantly slower than the divide-and-conquer implementation. | Python Code:
from systemml import MLContext, dml, jvm_stdout
ml = MLContext(sc)
print (ml.buildTime())
Explanation: Replace NaN with mode
Use sample builtin function to create sample from matrix
Count of Matching Values in two Matrices/Vectors
Cross Validation
Value-based join of two Matrices
Filter Matrix to include only Frequent Column Values
Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets
Find and remove duplicates in columns or rows
Set based Indexing
Group by Aggregate using Linear Algebra
Cumulative Summation with Decay Multiplier
Invert Lower Triangular Matrix
End of explanation
prog=
# Function for NaN-aware replacement with mode
replaceNaNwithMode = function (matrix[double] X, integer colId)
return (matrix[double] X)
{
Xi = replace (target=X[,colId], pattern=0/0, replacement=max(X[,colId])+1) # replace NaN with largest value + 1
agg = aggregate (target=Xi, groups=Xi, fn="count") # count each distinct value
mode = as.scalar (rowIndexMax(t(agg[1:nrow(agg)-1, ]))) # mode is max frequent value except last value
X[,colId] = replace (target=Xi, pattern=max(Xi), replacement=mode) # fill in mode
}
X = matrix('1 NaN 1 NaN 1 2 2 1 1 2', rows = 5, cols = 2)
Y = replaceNaNwithMode (X, 2)
print ("Before: \n" + toString(X))
print ("After: \n" + toString(Y))
with jvm_stdout(True):
ml.execute(dml(prog))
Explanation: Replace NaN with mode<a id="NaN2Mode" />
This functions replaces NaN in column with mode of column
End of explanation
prog=
X = matrix ('2 1 8 3 5 6 7 9 4 4', rows = 5, cols = 2 )
nbrSamples = 2
sv = order (target = sample (nrow (X), nbrSamples, FALSE)) # samples w/o replacement, and order
P = table (seq (1, nbrSamples), sv, nbrSamples, nrow(X)) # permutation matrix
samples = P %*% X; # apply P to perform selection
print ("X: \n" + toString(X))
print ("sv: \n" + toString(sv))
print ("samples: \n" + toString(samples))
with jvm_stdout(True):
ml.execute(dml(prog))
Explanation: Use sample builtin function to create sample from matrix<a id="sample" />
Use sample() function, create permutation matrix using table(), and pull sample from X.
End of explanation
prog=
X = matrix('8 4 5 4 9 10', rows = 6, cols = 1)
Y = matrix('4 9 5 1 9 7 ', rows = 6, cols = 1)
matches = sum (X == Y)
print ("t(X): " + toString(t(X)))
print ("t(Y): " + toString(t(Y)))
print ("Number of Matches: " + matches + "\n")
with jvm_stdout(True):
ml.execute(dml(prog))
Explanation: Count of Matching Values in two Matrices/Vectors<a id="MatchingRows" />
Given two matrices/vectors X and Y, get a count of the rows where X and Y have the same value.
End of explanation
prog =
holdOut = 1/3
kFolds = 1/holdOut
nRows = 6; nCols = 3;
X = matrix(seq(1, nRows * nCols), rows = nRows, cols = nCols) # X data
y = matrix(seq(1, nRows), rows = nRows, cols = 1) # y label data
Xy = cbind (X,y) # Xy Data for CV
sv = rand (rows = nRows, cols = 1, min = 0.0, max = 1.0, pdf = "uniform") # sv selection vector for fold creation
sv = (order(target=sv, by=1, index.return=TRUE)) %% kFolds + 1 # with numbers between 1 .. kFolds
stats = matrix(0, rows=kFolds, cols=1) # stats per kFolds model on test data
parfor (i in 1:kFolds)
{
# Skip empty training data or test data.
if ( sum (sv == i) > 0 & sum (sv == i) < nrow(X) )
{
Xyi = removeEmpty(target = Xy, margin = "rows", select = (sv == i)) # Xyi fold, i.e. 1/k of rows (test data)
Xyni = removeEmpty(target = Xy, margin = "rows", select = (sv != i)) # Xyni data, i.e. (k-1)/k of rows (train data)
# Skip extreme label inbalance
distinctLabels = aggregate( target = Xyni[,1], groups = Xyni[,1], fn = "count")
if ( nrow(distinctLabels) > 1)
{
wi = trainAlg (Xyni[ ,1:ncol(Xy)-1], Xyni[ ,ncol(Xy)]) # wi Model for i-th training data
pi = testAlg (Xyi [ ,1:ncol(Xy)-1], wi) # pi Prediction for i-th test data
ei = evalPrediction (pi, Xyi[ ,ncol(Xy)]) # stats[i,] evaluation of prediction of i-th fold
stats[i,] = ei
print ( "Test data Xyi" + i + "\n" + toString(Xyi)
+ "\nTrain data Xyni" + i + "\n" + toString(Xyni)
+ "\nw" + i + "\n" + toString(wi)
+ "\nstats" + i + "\n" + toString(stats[i,])
+ "\n")
}
else
{
print ("Training data for fold " + i + " has only " + nrow(distinctLabels) + " distinct labels. Needs to be > 1.")
}
}
else
{
print ("Training data or test data for fold " + i + " is empty. Fold not validated.")
}
}
print ("SV selection vector:\n" + toString(sv))
trainAlg = function (matrix[double] X, matrix[double] y)
return (matrix[double] w)
{
w = t(X) %*% y
}
testAlg = function (matrix[double] X, matrix[double] w)
return (matrix[double] p)
{
p = X %*% w
}
evalPrediction = function (matrix[double] p, matrix[double] y)
return (matrix[double] e)
{
e = as.matrix(sum (p - y))
}
with jvm_stdout(True):
ml.execute(dml(prog))
Explanation: Cross Validation<a id="CrossValidation" />
Perform kFold cross validation by running in parallel fold creation, training algorithm, test algorithm, and evaluation.
End of explanation
prog =
M1 = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
M2 = matrix ('1 1 2 8 3 3 4 3 5 1', rows = 5, cols = 2)
I = rowSums (outer (M1[,2], t(M2[,2]), "==")) # I : indicator matrix for M1
M12 = removeEmpty (target = M1, margin = "rows", select = I) # apply filter to retrieve join result
print ("M1 \n" + toString(M1))
print ("M2 \n" + toString(M2))
print ("M1[,2] joined with M2[,2], and return matching M1 rows\n" + toString(M12))
with jvm_stdout():
ml.execute(dml(prog))
Explanation: Value-based join of two Matrices<a id="JoinMatrices"/>
Given matrix M1 and M2, join M1 on column 2 with M2 on column 2, and return matching rows of M1.
End of explanation
prog =
MinFreq = 3 # minimum frequency of tokens
M = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
gM = aggregate (target = M[,2], groups = M[,2], fn = "count") # gM: group by and count (grouped matrix)
gv = cbind (seq(1,nrow(gM)), gM) # gv: add group values to counts (group values)
fg = removeEmpty (target = gv * (gv[,2] >= MinFreq), margin = "rows") # fg: filtered groups
I = rowSums (outer (M[,2] ,t(fg[,1]), "==")) # I : indicator of size M with filtered groups
fM = removeEmpty (target = M, margin = "rows", select = I) # FM: filter matrix
print (toString(M))
print (toString(fM))
with jvm_stdout():
ml.execute(dml(prog))
Explanation: Filter Matrix to include only Frequent Column Values <a id="FilterMatrix"/>
Given a matrix, filter the matrix to only include rows with column values that appear more often than MinFreq.
End of explanation
prog =
I = matrix ("1 3 3 4 5", rows = 5, cols = 1)
J = matrix ("2 3 4 1 6", rows = 5, cols = 1)
V = matrix ("10 20 30 40 50", rows = 5, cols = 1)
M = table (I, J, V)
print (toString (M))
ml.execute(dml(prog).output('M')).get('M').toNumPy()
Explanation: Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets<a id="Construct_sparse_Matrix"></a>
Given rowIndex, colIndex, and values as column vectors, construct (sparse) matrix.
End of explanation
prog =
X = matrix ("1 2 3 3 3 4 5 10", rows = 8, cols = 1)
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),])); # compare current with next value
res = removeEmpty (target = X, margin = "rows", select = I); # select where different
ml.execute(dml(prog).output('res')).get('res').toNumPy()
Explanation: Find and remove duplicates in columns or rows<a id="Find_and_remove_duplicates"></a>
Assuming values are sorted.
End of explanation
prog =
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
I = aggregate (target = X, groups = X[,1], fn = "count") # group and count duplicates
res = removeEmpty (target = seq (1, max (X[,1])), margin = "rows", select = (I != 0)); # select groups
ml.execute(dml(prog).output('res')).get('res').toNumPy()
Explanation: No assumptions on values.
End of explanation
prog =
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
X = order (target = X, by = 1) # order values
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),]));
res = removeEmpty (target = X, margin = "rows", select = I);
ml.execute(dml(prog).output('res')).get('res').toNumPy()
Explanation: Order the values and then remove duplicates.
End of explanation
prog =
X = matrix (1, rows = 1, cols = 100)
J = matrix ("10 20 25 26 28 31 50 67 79", rows = 1, cols = 9)
res = X + table (matrix (1, rows = 1, cols = ncol (J)), J, 10)
print (toString (res))
ml.execute(dml(prog).output('res')).get('res').toNumPy()
Explanation: Set based Indexing<a id="Set_based_Indexing"></a>
Given a matrix X, and a indicator matrix J with indices into X.
Use J to perform operation on X, e.g. add value 10 to cells in X indicated by J.
End of explanation
prog =
C = matrix ('50 40 20 10 30 20 40 20 30', rows = 9, cols = 1) # category data
V = matrix ('20 11 49 33 94 29 48 74 57', rows = 9, cols = 1) # value data
PCV = cbind (cbind (seq (1, nrow (C), 1), C), V); # PCV representation
PCV = order (target = PCV, by = 3, decreasing = TRUE, index.return = FALSE);
PCV = order (target = PCV, by = 2, decreasing = FALSE, index.return = FALSE);
# Find all rows of PCV where the category has a new value, in comparison to the previous row
is_new_C = matrix (1, rows = 1, cols = 1);
if (nrow (C) > 1) {
is_new_C = rbind (is_new_C, (PCV [1:nrow(C) - 1, 2] < PCV [2:nrow(C), 2]));
}
# Associate each category with its index
index_C = cumsum (is_new_C); # cumsum
# For each category, compute:
# - the list of distinct categories
# - the maximum value for each category
# - 0-1 aggregation matrix that adds records of the same category
distinct_C = removeEmpty (target = PCV [, 2], margin = "rows", select = is_new_C);
max_V_per_C = removeEmpty (target = PCV [, 3], margin = "rows", select = is_new_C);
C_indicator = table (index_C, PCV [, 1], max (index_C), nrow (C)); # table
sum_V_per_C = C_indicator %*% V
res = ml.execute(dml(prog).output('PCV','distinct_C', 'max_V_per_C', 'C_indicator', 'sum_V_per_C'))
print (res.get('PCV').toNumPy())
print (res.get('distinct_C').toNumPy())
print (res.get('max_V_per_C').toNumPy())
print (res.get('C_indicator').toNumPy())
print (res.get('sum_V_per_C').toNumPy())
Explanation: Group by Aggregate using Linear Algebra<a id="Multi_column_Sorting"></a>
Given a matrix PCV as (Position, Category, Value), sort PCV by category, and within each category by value in descending order. Create indicator vector for category changes, create distinct categories, and perform linear algebra operations.
End of explanation
cumsum_prod_def =
cumsum_prod = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
# Computes the following recurrence in log-number of steps:
# Y [1, ] = X [1, ] + C [1, ] * start;
# Y [i+1, ] = X [i+1, ] + C [i+1, ] * Y [i, ]
{
Y = X; P = C; m = nrow(X); k = 1;
Y [1,] = Y [1,] + C [1,] * start;
while (k < m) {
Y [k + 1:m,] = Y [k + 1:m,] + Y [1:m - k,] * P [k + 1:m,];
P [k + 1:m,] = P [1:m - k,] * P [k + 1:m,];
k = 2 * k;
}
}
Explanation: Cumulative Summation with Decay Multiplier<a id="CumSum_Product"></a>
Given matrix X, compute:
Y[i] = X[i]
+ X[i-1] * C[i]
+ X[i-2] * C[i] * C[i-1]
+ X[i-3] * C[i] * C[i-1] * C[i-2]
+ ...
End of explanation
prog = cumsum_prod_def +
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
#Zeros in C cause "breaks" that restart the cumulative summation from 0
C = matrix ("0 1 1 0 1 1 1 0 1", rows = 9, cols = 1);
Y = cumsum_prod (X, C, 0);
print (toString(Y))
with jvm_stdout():
ml.execute(dml(prog))
Explanation: In this example we use cumsum_prod for cumulative summation with "breaks", that is, multiple cumulative summations in one.
End of explanation
prog = cumsum_prod_def +
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
# Ones in S represent selected rows to be copied, zeros represent non-selected rows
S = matrix ("1 0 0 1 0 0 0 1 0", rows = 9, cols = 1);
Y = cumsum_prod (X * S, 1 - S, 0);
print (toString(Y))
with jvm_stdout():
ml.execute(dml(prog))
Explanation: In this example, we copy selected rows downward to all consecutive non-selected rows.
End of explanation
cumsum_prod_naive_def =
cumsum_prod_naive = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
{
Y = matrix (0, rows = nrow(X), cols = ncol(X));
Y [1,] = X [1,] + C [1,] * start;
for (i in 2:nrow(X))
{
Y [i,] = X [i,] + C [i,] * Y [i - 1,]
}
}
Explanation: This is a naive implementation of cumulative summation with decay multiplier.
End of explanation
prog = cumsum_prod_def + cumsum_prod_naive_def +
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y1 = cumsum_prod_naive (X, C, 0.123);
with jvm_stdout():
ml.execute(dml(prog))
prog = cumsum_prod_def + cumsum_prod_naive_def +
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y2 = cumsum_prod (X, C, 0.123);
with jvm_stdout():
ml.execute(dml(prog))
Explanation: There is a significant performance difference between the <b>naive</b> implementation and the <b>tricky</b> implementation.
End of explanation
invert_lower_triangular_def =
invert_lower_triangular = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = matrix (0, rows = n, cols = n);
LO = LO + diag (1 / diag (LI));
k = 1;
while (k < n)
{
LPF = matrix (0, rows = n, cols = n);
parfor (p in 0:((n - 1) / (2 * k)), check = 0)
{
i = 2 * k * p;
j = i + k;
q = min (n, j + k);
if (j + 1 <= q) {
L1 = LO [i + 1:j, i + 1:j];
L2 = LI [j + 1:q, i + 1:j];
L3 = LO [j + 1:q, j + 1:q];
LPF [j + 1:q, i + 1:j] = -L3 %*% L2 %*% L1;
}
}
LO = LO + LPF;
k = 2 * k;
}
}
prog = invert_lower_triangular_def +
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
with jvm_stdout():
ml.execute(dml(prog))
Explanation: Invert Lower Triangular Matrix<a id="Invert_Lower_Triangular_Matrix"></a>
In this example, we invert a lower triangular matrix using a the following divide-and-conquer approach. Given lower triangular matrix L, we compute its inverse X which is also lower triangular by splitting both matrices in the middle into 4 blocks (in a 2x2 fashion), and multiplying them together to get the identity matrix:
\begin{equation}
L \text{ %% } X = \left(\begin{matrix} L_1 & 0 \ L_2 & L_3 \end{matrix}\right)
\text{ %% } \left(\begin{matrix} X_1 & 0 \ X_2 & X_3 \end{matrix}\right)
= \left(\begin{matrix} L_1 X_1 & 0 \ L_2 X_1 + L_3 X_2 & L_3 X_3 \end{matrix}\right)
= \left(\begin{matrix} I & 0 \ 0 & I \end{matrix}\right)
\nonumber
\end{equation}
If we multiply blockwise, we get three equations:
$
\begin{equation}
L1 \text{ %% } X1 = 1\
L3 \text{ %% } X3 = 1\
L2 \text{ %% } X1 + L3 \text{ %% } X2 = 0\
\end{equation}
$
Solving these equation gives the following formulas for X:
$
\begin{equation}
X1 = inv(L1) \
X3 = inv(L3) \
X2 = - X3 \text{ %% } L2 \text{ %% } X1 \
\end{equation}
$
If we already recursively inverted L1 and L3, we can invert L2. This suggests an algorithm that starts at the diagonal and iterates away from the diagonal, involving bigger and bigger blocks (of size 1, 2, 4, 8, etc.) There is a logarithmic number of steps, and inside each step, the inversions can be performed in parallel using a parfor-loop.
Function "invert_lower_triangular" occurs within more general inverse operations and matrix decompositions. The divide-and-conquer idea allows to derive more efficient algorithms for other matrix decompositions.
End of explanation
invert_lower_triangular_naive_def =
invert_lower_triangular_naive = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = diag (matrix (1, rows = n, cols = 1));
for (i in 1:n - 1)
{
LO [i,] = LO [i,] / LI [i, i];
LO [i + 1:n,] = LO [i + 1:n,] - LI [i + 1:n, i] %*% LO [i,];
}
LO [n,] = LO [n,] / LI [n, n];
}
Explanation: This is a naive implementation of inverting a lower triangular matrix.
End of explanation
prog = invert_lower_triangular_naive_def +
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular_naive (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
with jvm_stdout():
ml.execute(dml(prog))
Explanation: The naive implementation is significantly slower than the divide-and-conquer implementation.
End of explanation |
13,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2> Let's import a couple datasets and take them for a spin</h2>
Step1: <h2> Looks like There aren't too many ppm m/z overlaps </h2>
Step2: <h2> So, about 1/4 of the mass-matches have potential isomers in the other dataset...? </h2>
Notice how there are more matches to the malaria set, which has more peaks. Makes sense - more peaks either means more actual molecules,
or more adducts that could be mistakenly matched as molecules
<h2> Get masses of all hmdb serum metabolites </h2>
parse the xml file
Step3: <h2> So, we've got 6,315,000 pairs of molecules that could be isomers at 1 ppm </h2>
That's about 10% of possible pairs from 25,000 molecules
Step4: <h2> Looks like there are more isomers than 1 | Python Code:
### import two datasets
def reindex_xcms_by_mzrt(df):
df.index = (df.loc[:,'mz'].astype('str') +
':' + df.loc[:, 'rt'].astype('str'))
return df
# alzheimers
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
'projects'
alzheimers_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
'mtbls_no_retcor_bw2.csv'
## Import the data and remove extraneous columns
df_alzheimers = pd.read_csv(alzheimers_path, index_col=0)
df_alzheimers = reindex_xcms_by_mzrt(df_alzheimers)
# malaria
malaria_path = local_path + ('/revo_healthcare/data/processed/MTBLS315/'+
'uhplc_pos/xcms_result_4.csv')
df_malaria = pd.read_csv(malaria_path, index_col=0)
df_malaria = reindex_xcms_by_mzrt(df_malaria)
ppm_alz_v_malaria = ppm_matrix(df_malaria['mz'],
df_alzheimers['mz'])
rt_alz_v_malaria = pairwise_difference(df_malaria['rt'],
df_alzheimers['rt'])
Explanation: <h2> Let's import a couple datasets and take them for a spin</h2>
End of explanation
sns.heatmap(np.log10(ppm_alz_v_malaria))
plt.title('Log10 ppm difference')
plt.show()
# How many for differences at 30ppm?
ppm_window = 30
within_ppm = (ppm_alz_v_malaria[ppm_alz_v_malaria < 30]
.dropna(axis=0, how='all')
.dropna(axis=1, how='all')
)
print 'shape', ppm_alz_v_malaria.shape
print ('ppm within {ppm} ppm: '.format(ppm=ppm_window) +
'{num}'.format(num=(ppm_alz_v_malaria < 30).sum().sum()))
# Get indexes
print 'shape of htose within 30ppm:, ', within_ppm.shape
# How many m/z from one dataset could be m/z isomers from
# other dataset?
print ('\n\nMass matches between datasets (isomers and 1:1 matches)',
(within_ppm < 30).sum().sum())
print '\nAlzheimers "isomers" in other dataset that are match >1 feature in other set', ((within_ppm < 30).sum(axis=0)>1).sum()
print 'Alzheimers total', df_alzheimers['rt'].shape
print '\n\nMalaria "isomers in other dataset that match >1 feature in other set', ((within_ppm < 30).sum(axis=1) > 1).sum()
print 'Malaria total', df_malaria['rt'].shape
# Show distribution of # of isomers per feature in both malaria and fever datasets
print (within_ppm < 30).sum(axis=0).hist(bins=30)
plt.title('Alzheimers isomers in malaria dataset')
plt.show()
(within_ppm < 30).sum(axis=1).hist(bins=30)
plt.title('Malaria isomers in alzheimers dataset')
plt.show()
Explanation: <h2> Looks like There aren't too many ppm m/z overlaps </h2>
End of explanation
local = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
xml_file = local + 'revo_healthcare/data/external/toy_database.xml'
xml_file = local + 'revo_healthcare/data/external/serum_metabolites.xml'
#xml_tree = etree.iterparse(xml_file, tag='metabolite')
#
# namespace - at the top of file. fucks with every tag.
# very annoying, so name all tags ns + tag
ns = '{http://www.hmdb.ca}'
nsmap = {None : ns}
# If you're within a metabolite tag
count = 0
seen_mass = 0
d = {}
for event, element in etree.iterparse(xml_file, tag=ns+'metabolite'):
tree = etree.ElementTree(element)
# Aggregate info into a dictionary of
# {HMDB_ID: iso_mass}
accession = []
# Get accession number and masses for each metabolite
# Could be multiple accessions. Grab all of them,
# sort to make unique identifier
for elem in tree.iter():
if elem.tag == ns+'accession':
accession.append(elem.text)
# If you just saw a 'mono_mass' entry,
# get the mass value and reset, saying you
# havent seen 'mono_mass' in the text of next metabolite
if (elem.tag == ns+'value') & (seen_mass == 1):
mass = float(elem.text)
seen_mass = 0
if elem.text == 'mono_mass':
seen_mass = 1
elem.clear()
# sort accession numbers and join with '_'
accession_key = '_'.join(sorted(accession))
# add to dictionary
if mass:
d[accession_key] = mass
# reset mass - only add feature if mass listed
mass = None
# reset accession numbers
accession = []
element.clear()
count += 1
if count % 1000 == 0:
print('Made it through ' + str(count) + ' metabolites')
#pickle.dump(d, open('serumdb_dict.p', 'wb'))
print 'Number of metabolites: %s' % len(d.keys())
serumdb_masses = pd.Series(d, dtype='float32')
serumdb_ppm_matrix = ppm_matrix(serumdb_masses, serumdb_masses)*10**6
#df = pd.DataFrame(serumdb_ppm_matrix, index=serumdb_masses.index,
# columns=serumdb_masses.index)*10**6
# Forget about using a dataframe - uses too much memory
Explanation: <h2> So, about 1/4 of the mass-matches have potential isomers in the other dataset...? </h2>
Notice how there are more matches to the malaria set, which has more peaks. Makes sense - more peaks either means more actual molecules,
or more adducts that could be mistakenly matched as molecules
<h2> Get masses of all HMDB serum metabolites </h2>
Parse the XML file.
End of explanation
top_ppm = 30
pairs = np.full((top_ppm), np.nan)
print(pairs)
for i in range(1,top_ppm):
    # subtract the diagonal (self vs. self matches) first,
    # then div by two, b/c the other half of the matrix is redundant
    num = ((serumdb_ppm_matrix < i).sum() - serumdb_ppm_matrix.shape[0]) / 2
pairs[i] = num
plt.scatter(x=range(1,30), y=pairs[1:])
plt.title('Number of pairs of molecules that could overlap in human serum database\n')
plt.show()
Explanation: <h2> So, we've got 6,315,000 pairs of molecules that could be isomers at 1 ppm </h2>
That's about 10% of possible pairs from 25,000 molecules
End of explanation
# how to plot the number of overlaps per molecule?
num_below_1ppm = (serumdb_ppm_matrix < 1).sum(axis=1) - 1
plt.hist(num_below_1ppm)
plt.title('Pairs of overlapping mz at ppm 1')
plt.show()
num_below_1ppm
Explanation: <h2> Looks like there are more isomers than 1:1 pairings, by a lot </h2>
Less than 6000 of
End of explanation |
13,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built-in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and the residual is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems, but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
sales.head()
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, use the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built-in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
# compute total inputs
total_N = input_feature.size()
# compute the sum of input_feature and output
sum_yi = output.sum()
sum_xi = input_feature.sum()
# compute the product of the output and the input_feature and its sum
product_yi_xi = output * input_feature
sum_product_yi_xi = product_yi_xi.sum()
# compute the squared value of the input_feature and its sum
squared_xi = input_feature * input_feature
sum_squared_xi = squared_xi.sum()
# use the formula for the slope
slope = float(sum_product_yi_xi - (float(sum_yi * sum_xi) / total_N)) / (sum_squared_xi - (float(sum_xi * sum_xi) / total_N))
# use the formula for the intercept
intercept = float(sum_yi - (slope * sum_xi)) / total_N
return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
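For reference, the closed-form estimates implemented in `simple_linear_regression` above follow the standard least-squares algebra (stated here only for clarity, not copied from the course materials):
\begin{align}
\text{slope} &= \frac{\sum_i x_i y_i - \tfrac{1}{N}\big(\sum_i x_i\big)\big(\sum_i y_i\big)}{\sum_i x_i^2 - \tfrac{1}{N}\big(\sum_i x_i\big)^2} \\
\text{intercept} &= \bar{y} - \text{slope}\cdot\bar{x}
\end{align}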
End of explanation
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + slope * input_feature
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
prediction = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residual = output - prediction
# square the residuals and add them up
residual_squared = residual * residual
RSS = residual_squared.sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and the residual is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
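In symbols, with predictions $\hat{y}_i = \text{intercept} + \text{slope}\cdot x_i$, this is the standard definition (added here only for reference): $\text{RSS} = \sum_i (y_i - \hat{y}_i)^2$.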
End of explanation
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = float(output - intercept) / slope
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
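(For reference: solving $y = \text{intercept} + \text{slope}\cdot x$ for the input gives $x = (y - \text{intercept})/\text{slope}$, which is exactly what the function above computes.)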
End of explanation
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Intercept: " + str(bedrooms_intercept)
print "Slope: " + str(bedrooms_slope)
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_prices_on_bedrooms_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'The RSS of predicting Prices based on Bedrooms on TEST Data is : ' + str(rss_prices_on_bedrooms_test)
# Compute RSS when using squarefeet on TEST data:
rss_prices_on_sqft_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet on TEST Data is : ' + str(rss_prices_on_sqft_test)
print min(rss_prices_on_bedrooms_test, rss_prices_on_sqft_test)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
13,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background information on filtering
Here we give some background information on filtering in general,
and how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus [1] and
Ifeachor and Jervis [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. 2015 [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut_artifacts_filter tutorial.
Problem statement
The practical issues with filtering electrophysiological data are covered
well by Widmann et al. in [7]_, in a follow-up to an article where they
conclude with this statement
Step1: Take for example an ideal low-pass filter, which would give a value of 1 in
the pass-band (up to frequency $f_p$) and a value of 0 in the stop-band
(down to frequency $f_s$) such that $f_p=f_s=40$ Hz here
(shown to a lower limit of -60 dB for simplicity)
Step2: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in frequency is actually a sinc_ function
in time, which requires an infinite number of samples, and thus infinite
time, to represent. So although this filter has ideal frequency suppression,
it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 sec, and look
at the filter itself in the time domain and the frequency domain
Step3: This is not so good! Making the filter 10 times longer (1 sec) gets us a
bit better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here
Step4: Let's make the stop-band tighter still with a longer filter (10 sec),
with a resulting larger x-axis
Step5: Now we have very sharp frequency suppression, but our filter rings for the
entire second. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include
Step6: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a
smoother slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 sec filter
Step7: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 sec) and still get okay
stop-band attenuation
Step8: But then if we shorten the filter too much (2 cycles of 10 Hz = 0.2 sec),
our effective stop frequency gets pushed out past 60 Hz
Step9: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 sec = 5 cycles @ 25 Hz)
Step10: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random + line). Note that the original, clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
Step11: Filter it with a shallow cutoff, linear-phase FIR and compensate for
the delay
Step12: This is actually set to become the default type of filter used in MNE-Python
in 0.14 (see tut_filtering_in_python).
Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice
Step13: Finally, Let's also filter it with the
MNE-C default, which is a long-duration steep-slope FIR filter designed
using frequency-domain techniques
Step14: Both the MNE-Python 0.13 and MNE-C filhters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially on signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter
Step15: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from
Step16: The falloff of this filter is not very steep.
<div class="alert alert-danger"><h4>Warning</h4><p>For brevity, we do not show the phase of these filters here.
In the FIR case, we can design linear-phase filters, and
compensate for the delay (making the filter acausal) if
necessary. This cannot be done
with IIR filters, as they have a non-linear phase.
As the filter order increases, the
phase distortion near and in the transition band worsens.
However, if acausal (forward-backward) filtering can be used,
e.g. with
Step17: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for
Step18: And if we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale)
Step19: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before
Step20: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are acausal (zero-phase), can make
activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen 2011 [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet 2012 [5].
Perhaps more revealing, it was noted in Widmann & Schröger 2012 [6] that
the problematic low-pass filters from VanRullen 2011 [3]
Step21: Similarly, in a P300 paradigm reported by Kappenman & Luck 2010 [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving ch_sample_data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. 2015 [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. 2016 [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. [10]_ rebutted that baseline correction can correct for
problems with filtering.
To see what they mean, consider again our old simulated signal x from
before
Step22: In respose, Maess et al. 2016 [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multielectrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period | Python Code:
import numpy as np
from scipy import signal, fftpack
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
import mne
sfreq = 1000.
f_p = 40.
ylim = [-60, 10] # for dB plots
xlim = [2, sfreq / 2.]
blue = '#1f77b4'
Explanation: Background information on filtering
Here we give some background information on filtering in general,
and how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus [1] and
Ifeachor and Jervis [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. 2015 [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut_artifacts_filter tutorial.
Problem statement
The practical issues with filtering electrophysiological data are covered
well by Widmann et al. in [7]_, in a follow-up to an article where they
conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase SNR, but if it is not used carefully,
it can distort data. Here we hope to cover some filtering basics so
users can better understand filtering tradeoffs, and why MNE-Python has
chosen particular defaults.
Filtering basics
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
\begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_M z^{-M}}
{1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_N z^{-N}} \\
&= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}\end{align}
In the time domain, the numerator coefficients $b_k$ and denominator
coefficients $a_k$ can be used to obtain our output data
$y(n)$ in terms of our input data $x(n)$ as:
\begin{align}:label: summations
y(n) &= b_0 x(n) + b_1 x(n-1) + ... + b_M x(n-M)
- a_1 y(n-1) - a_2 y(n - 2) - ... - a_N y(n - N)\\
&= \sum_0^M b_k x(n-k) - \sum_1^N a_k y(n-k)\end{align}
In other words, the output at time $n$ is determined by a sum over:
1. The numerator coefficients $b_k$, which get multiplied by
the previous input $x(n-k)$ values, and
2. The denominator coefficients $a_k$, which get multiplied by
the previous output $y(n-k)$ values.
Note that these summations in :eq:summations correspond nicely to
(1) a weighted moving average and (2) an autoregression.
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients $b_k$ ($\forall k, a_k=0$), and thus each output
value of $y(n)$ depends only on the $M$ previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in [1]_, FIR and IIR have different tradeoffs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
accumulating error (due to its recursive calculations).
In MNE-Python we default to using FIR filtering. As noted in Widmann et al.
2015 [7]_:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
cutoffs are required (Ifeachor and Jervis, 2002[2]_, p. 321),
...FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always tradeoffs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency tradeoff, and it will
show up below.
FIR Filters
First we will focus on FIR filters, which are the default filters used by
MNE-Python.
Designing FIR filters
Here we'll try designing a low-pass filter, and look at trade-offs in terms
of time- and frequency-domain filter characteristics. Later, in
tut_effect_on_signals, we'll look at how such filters can affect
signals when they are used.
First let's import some useful tools for filtering, and set some default
values for our data that are reasonable for M/EEG data.
End of explanation
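# A purely didactic aside (not from the original tutorial): the difference
# equation above can be evaluated directly. With a[0] == 1 this is the same
# recursion that scipy.signal.lfilter(b, a, x) computes, only much slower.
def direct_form_filter(b, a, x):
    y = np.zeros(len(x))
    for n_samp in range(len(x)):
        acc = 0.
        for k in range(len(b)):        # weighted moving average of past inputs
            if n_samp - k >= 0:
                acc += b[k] * x[n_samp - k]
        for k in range(1, len(a)):     # autoregression on past outputs
            if n_samp - k >= 0:
                acc -= a[k] * y[n_samp - k]
        y[n_samp] = acc / a[0]
    return y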
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
def box_off(ax):
ax.grid(zorder=0)
for key in ('top', 'right'):
ax.spines[key].set_visible(False)
def plot_ideal(freq, gain, ax):
freq = np.maximum(freq, xlim[0])
xs, ys = list(), list()
my_freq, my_gain = list(), list()
for ii in range(len(freq)):
xs.append(freq[ii])
ys.append(ylim[0])
if ii < len(freq) - 1 and gain[ii] != gain[ii + 1]:
xs += [freq[ii], freq[ii + 1]]
ys += [ylim[1]] * 2
my_freq += np.linspace(freq[ii], freq[ii + 1], 20,
endpoint=False).tolist()
my_gain += np.linspace(gain[ii], gain[ii + 1], 20,
endpoint=False).tolist()
else:
my_freq.append(freq[ii])
my_gain.append(gain[ii])
my_gain = 10 * np.log10(np.maximum(my_gain, 10 ** (ylim[0] / 10.)))
ax.fill_between(xs, ylim[0], ys, color='r', alpha=0.1)
ax.semilogx(my_freq, my_gain, 'r--', alpha=0.5, linewidth=4, zorder=3)
xticks = [1, 2, 4, 10, 20, 40, 100, 200, 400]
ax.set(xlim=xlim, ylim=ylim, xticks=xticks, xlabel='Frequency (Hz)',
ylabel='Amplitude (dB)')
ax.set(xticklabels=xticks)
box_off(ax)
half_height = np.array(plt.rcParams['figure.figsize']) * [1, 0.5]
ax = plt.subplots(1, figsize=half_height)[1]
plot_ideal(freq, gain, ax)
ax.set(title='Ideal %s Hz lowpass' % f_p)
mne.viz.tight_layout()
plt.show()
Explanation: Take for example an ideal low-pass filter, which would give a value of 1 in
the pass-band (up to frequency $f_p$) and a value of 0 in the stop-band
(down to frequency $f_s$) such that $f_p=f_s=40$ Hz here
(shown to a lower limit of -60 dB for simplicity):
End of explanation
n = int(round(0.1 * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
def plot_filter(h, title, freq, gain, show=True):
if h.ndim == 2: # second-order sections
sos = h
n = mne.filter.estimate_ringing_samples(sos)
h = np.zeros(n)
h[0] = 1
h = signal.sosfilt(sos, h)
H = np.ones(512, np.complex128)
for section in sos:
f, this_H = signal.freqz(section[:3], section[3:])
H *= this_H
else:
f, H = signal.freqz(h)
fig, axs = plt.subplots(2)
t = np.arange(len(h)) / sfreq
axs[0].plot(t, h, color=blue)
axs[0].set(xlim=t[[0, -1]], xlabel='Time (sec)',
ylabel='Amplitude h(n)', title=title)
box_off(axs[0])
f *= sfreq / (2 * np.pi)
axs[1].semilogx(f, 10 * np.log10((H * H.conj()).real), color=blue,
linewidth=2, zorder=4)
plot_ideal(freq, gain, axs[1])
mne.viz.tight_layout()
if show:
plt.show()
plot_filter(h, 'Sinc (0.1 sec)', freq, gain)
Explanation: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in frequency is actually a sinc_ function
in time, which requires an infinite number of samples, and thus infinite
time, to represent. So although this filter has ideal frequency suppression,
it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 sec, and look
at the filter itself in the time domain and the frequency domain:
End of explanation
n = int(round(1. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, 'Sinc (1.0 sec)', freq, gain)
Explanation: This is not so good! Making the filter 10 times longer (1 sec) gets us a
bit better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here:
End of explanation
n = int(round(10. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, 'Sinc (10.0 sec)', freq, gain)
Explanation: Let's make the stop-band tighter still with a longer filter (10 sec),
with a resulting larger x-axis:
End of explanation
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=half_height)[1]
plot_ideal(freq, gain, ax)
ax.set(title='%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth))
mne.viz.tight_layout()
plt.show()
Explanation: Now we have very sharp frequency suppression, but our filter rings for the
entire second. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include:
1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
2. Windowed FIR design (:func:`scipy.signal.firwin2`, `MATLAB fir2`_)
3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
4. Frequency-domain design (construct filter in Fourier
domain and use an :func:`IFFT <scipy.fftpack.ifft>` to invert it)
<div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are
"do not care" regions in our frequency response. However, we want
well controlled responses in all frequency regions.
Frequency-domain construction is good when an arbitrary response
is desired, but generally less clean (due to sampling issues) than
a windowed approach for more straightforward filter applications.
Since our filters (low-pass, high-pass, band-pass, band-stop)
are fairly simple and we require precise control of all frequency
regions, here we will use and explore primarily windowed FIR
design.</p></div>
If we relax our frequency-domain filter requirements a little bit, we can
use these functions to construct a lowpass filter that instead has a
transition band, or a region between the pass frequency $f_p$
and stop frequency $f_s$, e.g.:
End of explanation
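# An aside, not part of the original tutorial: the same 40 Hz lowpass with a
# 10 Hz transition band could also be designed with the Remez (equiripple)
# algorithm. A rough sketch, assuming the older ``Hz`` keyword of
# scipy.signal.remez:
n_remez = int(round(0.5 * sfreq)) + 1
h_remez = signal.remez(n_remez, [0., f_p, f_s, nyq], [1., 0.], Hz=sfreq)
plot_filter(h_remez, 'Remez 10-Hz transition (0.5 sec)', freq, gain)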
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (1.0 sec)', freq, gain)
Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a
smoother slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 sec filter:
End of explanation
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (0.5 sec)', freq, gain)
Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 sec) and still get okay
stop-band attenuation:
End of explanation
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (0.2 sec)', freq, gain)
Explanation: But then if we shorten the filter too much (2 cycles of 10 Hz = 0.2 sec),
our effective stop frequency gets pushed out past 60 Hz:
End of explanation
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 50-Hz transition (0.2 sec)', freq, gain)
Explanation: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 sec = 5 cycles @ 25 Hz):
End of explanation
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur))
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
Explanation: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random + line). Note that the original, clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
End of explanation
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
x_shallow = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, 'MNE-Python 0.14 default', freq, gain)
Explanation: Filter it with a shallow cutoff, linear-phase FIR and compensate for
the delay:
End of explanation
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
x_steep = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
plot_filter(h, 'MNE-Python 0.13 default', freq, gain)
Explanation: This is actually set to become the default type of filter used in MNE-Python
in 0.14 (see tut_filtering_in_python).
Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice:
End of explanation
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, 'MNE-C default', freq, gain)
Explanation: Finally, Let's also filter it with the
MNE-C default, which is a long-duration steep-slope FIR filter designed
using frequency-domain techniques:
End of explanation
axs = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
t = np.arange(len(x)) / sfreq
axs[0].plot(t, x + offset)
axs[0].set(xlabel='Time (sec)', xlim=t[[0, -1]])
box_off(axs[0])
X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axs[1].plot(freqs, 20 * np.log10(np.abs(X)))
axs[1].set(xlim=xlim)
yticks = np.arange(5) / -30.
yticklabels = ['Original', 'Noisy', 'FIR-shallow (0.14)', 'FIR-steep (0.13)',
'FIR-steep (MNE-C)']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
plot_signal(x_mne_c, offset=yticks[4])
axs[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.150, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axs[0].get_yticklabels():
text.set(rotation=45, size=8)
axs[1].set(xlim=flim, ylim=ylim, xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
box_off(axs[0])
box_off(axs[1])
mne.viz.tight_layout()
plt.show()
Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially on signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter:
End of explanation
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(sos, 'Butterworth order=2', freq, gain)
# Eventually this will just be from scipy signal.sosfiltfilt, but 0.18 is
# not widely adopted yet (as of June 2016), so we use our wrapper...
sosfiltfilt = mne.fixes.get_sosfiltfilt()
x_shallow = sosfiltfilt(sos, x)
Explanation: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from :mod:scipy.signal. Specifically, we use the general-purpose
functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,
which provide unified interfaces to IIR filter design.
Designing IIR filters
Let's continue with our design of a 40 Hz low-pass filter, and look at
some trade-offs of different IIR filters.
Often the default IIR filter is a Butterworth filter_, which is designed
to have a maximally flat pass-band. Let's look at a few orders of filter,
i.e., a few different number of coefficients used and therefore steepness
of the filter:
End of explanation
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(sos, 'Butterworth order=8', freq, gain)
x_steep = sosfiltfilt(sos, x)
Explanation: The falloff of this filter is not very steep.
<div class="alert alert-danger"><h4>Warning</h4><p>For brevity, we do not show the phase of these filters here.
In the FIR case, we can design linear-phase filters, and
compensate for the delay (making the filter acausal) if
necessary. This cannot be done
with IIR filters, as they have a non-linear phase.
As the filter order increases, the
phase distortion near and in the transition band worsens.
However, if acausal (forward-backward) filtering can be used,
e.g. with :func:`scipy.signal.filtfilt`, these phase issues
can be mitigated.</p></div>
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using :func:`scipy.signal.sosfilt` and, under the
hood, :func:`scipy.signal.zpk2sos` when passing the
``output='sos'`` keyword argument to
:func:`scipy.signal.iirfilter`. The filter definitions
given in tut_filtering_basics_ use the polynomial
numerator/denominator (sometimes called "tf") form ``(b, a)``,
which are theoretically equivalent to the SOS form used here.
In practice, however, the SOS form can give much better results
due to issues with numerical precision (see
:func:`scipy.signal.sosfilt` for an example), so SOS should be
used when possible to do IIR filtering.</p></div>
Let's increase the order, and note that now we have better attenuation,
with a longer impulse response:
End of explanation
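# Not part of the tutorial: if you do want to inspect the non-linear IIR phase
# mentioned in the warning above, scipy can compute it directly. A quick sketch
# using the (b, a) form of the same order-8 Butterworth (expect a harmless
# "singular group delay" warning at the Nyquist frequency):
b_iir, a_iir = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter')
w_iir, h_iir = signal.freqz(b_iir, a_iir)
w_gd, gd = signal.group_delay((b_iir, a_iir))
fig_iir, axs_iir = plt.subplots(2)
axs_iir[0].plot(w_iir * sfreq / (2 * np.pi), np.unwrap(np.angle(h_iir)))
axs_iir[0].set(ylabel='Phase (rad)')
axs_iir[1].plot(w_gd * sfreq / (2 * np.pi), gd)
axs_iir[1].set(xlabel='Frequency (Hz)', ylabel='Group delay (samples)')
mne.viz.tight_layout()
plt.show()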
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=1) # dB of acceptable pass-band ripple
plot_filter(sos, 'Chebychev-1 order=8, ripple=1 dB', freq, gain)
Explanation: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for :func:scipy.signal.iirdesign. Let's
try a Chebychev (type I) filter, which trades off ripple in the pass-band
to get better attenuation in the stop-band:
End of explanation
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=6)
plot_filter(sos, 'Chebychev-1 order=8, ripple=6 dB', freq, gain)
Explanation: And if we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale):
End of explanation
axs = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axs[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axs[0].get_yticklabels():
text.set(rotation=45, size=8)
axs[1].set(xlim=flim, ylim=ylim, xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
box_off(axs[0])
box_off(axs[1])
mne.viz.tight_layout()
plt.show()
Explanation: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before:
End of explanation
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = 'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axs = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axs, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
                          ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):  # x_hp_p1 is a 0.1 Hz highpass
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
box_off(ax)
mne.viz.tight_layout()
plt.show()
Explanation: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are acausal (zero-phase), can make
activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen 2011 [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet 2012 [5].
Perhaps more revealing, it was noted in Widmann & Schröger 2012 [6] that
the problematic low-pass filters from VanRullen 2011 [3]:
Used a least-squares design (like :func:scipy.signal.firls) that
included "do-not-care" transition regions, which can lead to
uncontrolled behavior.
Had a filter length that was independent of the transition bandwidth,
which can cause excessive ringing and signal distortion.
High-pass problems
When it comes to high-pass filtering, using corner frequencies above 0.1 Hz
were found in Acunzo et al. 2012 [4]_ to:
"...generate a systematic bias easily leading to misinterpretations of
neural activity.”
In a related paper, Widmann et al. 2015 [7] also came to suggest a 0.1 Hz
highpass. And more evidence followed in Tanner et al. 2015 [8] of such
distortions. Using data from language ERP studies of semantic and syntactic
processing (i.e., N400 and P600), using a high-pass above 0.3 Hz caused
significant effects to be introduced implausibly early when compared to the
unfiltered data. From this, the authors suggested the optimal high-pass
value for language processing to be 0.1 Hz.
We can recreate a problematic simulation from Tanner et al. 2015 [8]_:
"The simulated component is a single-cycle cosine wave with an amplitude
of 5µV, onset of 500 ms poststimulus, and duration of 800 ms. The
simulated component was embedded in 20 s of zero values to avoid
filtering edge effects... Distortions [were] caused by 2 Hz low-pass and
high-pass filters... No visible distortion to the original waveform
[occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
(12 dB/octave roll-off)."
<div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the
pass-band, but also within the transition and stop-bands -- perhaps
most easily understood because the signal has a non-zero DC value,
but also because it is a shifted cosine that has been
*windowed* (here multiplied by a rectangular window), which
makes the cosine and DC frequencies spread to other frequencies
(multiplication in time is convolution in frequency, so multiplying
by a rectangular window in the time domain means convolving a sinc
function with the impulses at DC and the cosine frequency in the
frequency domain).</p></div>
End of explanation
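# A quick sanity check (not in the original text): the windowed single-cycle
# cosine defined above really does contain energy at DC and well beyond its
# nominal 1 / 0.8 = 1.25 Hz frequency, which is why both the low-pass and the
# high-pass filters distort it.
X_sim = fftpack.fft(x)
freqs_sim = fftpack.fftfreq(len(x), 1. / sfreq)
mask_sim = freqs_sim >= 0
plt.figure()
plt.plot(freqs_sim[mask_sim], np.abs(X_sim[mask_sim]))
plt.xlim([0, 10])
plt.xlabel('Frequency (Hz)')
plt.ylabel('|X(f)| (a.u.)')
plt.show()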
def baseline_plot(x):
all_axs = plt.subplots(3, 2)[1]
for ri, (axs, freq) in enumerate(zip(all_axs, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axs):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
box_off(ax)
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
Explanation: Similarly, in a P300 paradigm reported by Kappenman & Luck 2010 [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving ch_sample_data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. 2015 [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. 2016 [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. [10]_ rebutted that baseline correction can correct for
problems with filtering.
To see what they mean, consider again our old simulated signal x from
before:
End of explanation
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
Explanation: In response, Maess et al. 2016 [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multielectrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period:
End of explanation |
13,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of object (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so-called kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to cases with a small number of classes. So for reference let us compare how a standard multiclass SVM performs w.r.t. KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training. | Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat, savemat
from numpy import random
from os import path
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
Xtrain = Xall[:, subset[:5000]]
Ytrain = Yall[subset[:5000]]
Xtest = Xall[:, subset[5000:6000]]
Ytest = Yall[subset[5000:6000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
print(Xtest.shape)
Explanation: K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown.
The basics
The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.
In SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K.
First we load and init data split:
End of explanation
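As a quick aside on the bias/variance remark above, here is a small illustrative sketch using scikit-learn on synthetic data. This is an analogy outside the Shogun workflow, and the dataset, classifier and k values below are assumptions chosen purely for illustration:
# Illustrative aside (scikit-learn, not Shogun): how the choice of k affects 2-fold CV accuracy
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X_toy, y_toy = make_classification(n_samples=500, n_features=10, random_state=0)
for k_toy in (1, 5, 25, 100):
    clf = KNeighborsClassifier(n_neighbors=k_toy)
    print("k=%3d  mean 2-fold accuracy: %.3f" % (k_toy, cross_val_score(clf, X_toy, y_toy, cv=2).mean()))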
%matplotlib inline
import pylab as P
def plot_example(dat, lab):
for i in range(5):
ax=P.subplot(1,5,i+1)
P.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtrain, Ytrain)
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtest, Ytest)
Explanation: Let us plot the first five examples of the train data (first row) and test data (second row).
End of explanation
from shogun import MulticlassLabels, features
from shogun import KNN, EuclideanDistance
labels = MulticlassLabels(Ytrain)
feats = features(Xtrain)
k=3
dist = EuclideanDistance()
knn = KNN(k, dist, labels)
labels_test = MulticlassLabels(Ytest)
feats_test = features(Xtest)
knn.train(feats)
pred = knn.apply_multiclass(feats_test)
print("Predictions", pred.get_int_labels()[:5])
print("Ground Truth", Ytest[:5])
from shogun import MulticlassAccuracy
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(pred, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Then we import shogun components and convert the data to shogun objects:
End of explanation
idx=np.where(pred != Ytest)[0]
Xbad=Xtest[:,idx]
Ybad=Ytest[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
Explanation: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to recognize.
End of explanation
knn.put('k', 13)
multiple_k=knn.classify_for_multiple_k()
print(multiple_k.shape)
Explanation: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform cross-validation, we just use a small trick that saves us a lot of computation time: once we have determined the $K$ nearest neighbors (for some $K\geq k$), we already know the nearest neighbors for every $k=1...K$ and can thus get the predictions for multiple k's in one step:
End of explanation
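To make the trick explicit, here is a small numpy sketch of what classify_for_multiple_k exploits; the neighbour-label matrix below is made up purely for illustration. Once the labels of the K nearest neighbours are known (sorted by distance), the prediction for any k <= K is just a majority vote over the first k of them:
import numpy as np

K = 13
# hypothetical labels of the K nearest neighbours for 4 query points (nearest neighbour first)
neighbour_labels = np.random.randint(0, 10, size=(4, K))

def majority_vote(row):
    return np.bincount(row).argmax()

preds_for_all_k = np.column_stack(
    [np.apply_along_axis(majority_vote, 1, neighbour_labels[:, :k]) for k in range(1, K + 1)])
print(preds_for_all_k.shape)  # (4, 13): one prediction per query point and per value of k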
for k in range(13):
print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))
Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies:
End of explanation
from shogun import Time, KNN_COVER_TREE, KNN_BRUTE
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_BRUTE)
pred = knn.apply_multiclass(feats_test)
print("Standard KNN took %2.1fs" % (Time.get_curtime() - start))
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_COVER_TREE)
pred = knn.apply_multiclass(feats_test)
print("Covertree KNN took %2.1fs" % (Time.get_curtime() - start))
Explanation: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly: for each prediction you have to compare the query object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation, it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor search in KNN. Just set the knn_solver parameter of the KNN machine to KNN_COVER_TREE (or back to KNN_BRUTE) to enable or disable this feature. We also show the prediction time comparison with and without Cover Trees in this tutorial. So let's run a comparison on the data above:
End of explanation
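The same speed-up idea can be illustrated independently of Shogun with a scipy kd-tree versus a brute-force distance matrix. The toy data and sizes below are assumptions; like cover trees, space-partitioning trees help most when the dimensionality is low:
import numpy as np
import time
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist

rng = np.random.RandomState(0)
train = rng.rand(20000, 3)
query = rng.rand(1000, 3)

t0 = time.time()
brute_idx = np.argsort(cdist(query, train), axis=1)[:, :3]   # 3 nearest neighbours per query point
print("brute force: %.2fs" % (time.time() - t0))

t0 = time.time()
_, tree_idx = cKDTree(train).query(query, k=3)
print("kd-tree:     %.2fs" % (time.time() - t0))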
def evaluate(labels, feats, use_cover_tree=False):
from shogun import MulticlassAccuracy, CrossValidationSplitting
import time
split = CrossValidationSplitting(labels, Nsplit)
split.build_subsets()
accuracy = np.zeros((Nsplit, len(all_ks)))
acc_train = np.zeros(accuracy.shape)
time_test = np.zeros(accuracy.shape)
for i in range(Nsplit):
idx_train = split.generate_subset_inverse(i)
idx_test = split.generate_subset_indices(i)
for j, k in enumerate(all_ks):
#print "Round %d for k=%d..." % (i, k)
feats.add_subset(idx_train)
labels.add_subset(idx_train)
dist = EuclideanDistance(feats, feats)
knn = KNN(k, dist, labels)
knn.set_store_model_features(True)
if use_cover_tree:
knn.put('knn_solver', KNN_COVER_TREE)
else:
knn.put('knn_solver', KNN_BRUTE)
knn.train()
evaluator = MulticlassAccuracy()
pred = knn.apply_multiclass()
acc_train[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
feats.add_subset(idx_test)
labels.add_subset(idx_test)
            t_start = time.time()  # wall-clock timing; time.clock() was removed in Python 3.8
pred = knn.apply_multiclass(feats)
            time_test[i, j] = (time.time() - t_start) / labels.get_num_labels()
accuracy[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:
End of explanation
labels = MulticlassLabels(Ytest)
feats = features(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
Explanation: Evaluate KNN with and without Cover Tree. This takes a few seconds:
End of explanation
import matplotlib
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
P.legend(["Test Accuracy", "Training Accuracy"])
P.xlabel('K')
P.ylabel('Accuracy')
P.title('KNN Accuracy')
P.tight_layout()
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
P.xlabel("K")
P.ylabel("time")
P.title('KNN time')
P.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
P.tight_layout()
Explanation: Generate plots with the data collected in the evaluation:
End of explanation
from shogun import GaussianKernel, GMNPSVM
width=80
C=1
gk=GaussianKernel()
gk.set_width(width)
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
Explanation: Although simple and elegant, KNN is generally very resource-intensive. Because all the training samples have to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, prediction is slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if, in addition, the data samples are very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice, k-NN used with bagging can produce improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN, multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from the others. They compare examples using similarity measures (so-called kernels) instead of distances the way KNN does. At prediction time they are, in Big-O terms, as computationally expensive as KNN, but they involve an additional (costly) training step. They do not scale very well to a huge number of classes, but usually lead to favorable results when the number of classes is small. So, for reference, let us compare how a standard multiclass SVM performs with respect to KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
End of explanation
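As a brief, hedged illustration of the "k-NN used with bagging" remark above, here is a minimal scikit-learn sketch on synthetic data. It is an analogy rather than a Shogun feature demonstrated in this notebook, and all names and parameters below are assumptions:
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X_toy, y_toy = make_classification(n_samples=1000, n_features=20, random_state=0)
single = KNeighborsClassifier(n_neighbors=3)
bagged = BaggingClassifier(single, n_estimators=10, max_samples=0.5, random_state=0)
print("single k-NN: %.3f" % cross_val_score(single, X_toy, y_toy, cv=3).mean())
print("bagged k-NN: %.3f" % cross_val_score(bagged, X_toy, y_toy, cv=3).mean())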
out=svm.apply(feats_test)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Let's apply the SVM to the same test data set to compare results:
End of explanation
Xrem=Xall[:,subset[6000:]]
Yrem=Yall[subset[6000:]]
feats_rem=features(Xrem)
labels_rem=MulticlassLabels(Yrem)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
Explanation: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
End of explanation |
13,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natasha
Natasha solves basic NLP tasks for Russian language
Step1: Getting started
Doc
Doc aggregates annotators, initially it has just text field defined
Step2: After applying segmenter two new fields appear sents and tokens
Step3: After applying morph_tagger and syntax_parser, tokens get 5 new fields id, pos, feats, head_id, rel — annotation in <a href="https
Step4: After applying ner_tagger doc gets spans field with PER, LOC, ORG annotation
Step5: Visualizations
Natasha wraps <a href="https
Step6: Lemmatization
Tokens have lemmatize method, it uses pos and feats assigned by morph_tagger to get word normal form. morph_vocab is just a wrapper for <a href="https
Step7: Phrase normalization
Consider phrase "Организации украинских националистов", one can not just inflect every word independently to get normal form
Step8: Fact extraction
To split names like "Виктор Ющенко", "Бандера" and "Йоэль Лион" into parts use names_extractor and spans method extract_fact
Step9: Reference
One may use Natasha components independently. It is not mandatory to construct Doc object.
Segmenter
Segmenter is just a wrapper for <a href="https
Step10: MorphVocab
MorphVocab is a wrapper for <a href="pymorphy2.readthedocs.io/en/latest/">Pymorphy2</a>. MorphVocab adds cache and adapts grammems to Universal Dependencies format
Step11: Also MorphVocab adds method lemmatize. Given pos and feats it selects the most suitable morph form and returns its normal field
Step12: Embedding
Embedding is a wrapper for <a href="https
Step13: MorphTagger
MorphTagger wraps <a href="https
Step14: SyntaxParser
SyntaxParser wraps <a href="https
Step15: NERTagger
NERTagger wraps <a href="https
Step16: Extractor
In addition to names_extractor Natasha bundles several other extractors
Step17: MoneyExtractor
Step18: NamesExtractor
names_extractor should be applied only to spans of text. To extract single fact use method find
Step19: AddrExtractor | Python Code:
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
DatesExtractor,
MoneyExtractor,
AddrExtractor,
Doc
)
segmenter = Segmenter()
morph_vocab = MorphVocab()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
ner_tagger = NewsNERTagger(emb)
names_extractor = NamesExtractor(morph_vocab)
dates_extractor = DatesExtractor(morph_vocab)
money_extractor = MoneyExtractor(morph_vocab)
addr_extractor = AddrExtractor(morph_vocab)
Explanation: Natasha
Natasha solves basic NLP tasks for the Russian language: tokenization, sentence segmentation, word embedding, morphology tagging, lemmatization, phrase normalization, syntax parsing, NER tagging, fact extraction.
Library is just a wrapper for lower level tools from <a href="https://github.com/natasha">Natasha project</a>:
<a href="https://github.com/natasha/razdel">Razdel</a> — token, sentence segmentation for Russian
<a href="https://github.com/natasha/navec">Navec</a> — compact Russian embeddings
<a href="https://github.com/natasha/slovnet">Slovnet</a> — modern deep-learning techniques for Russian NLP, compact models for Russian morphology, syntax, NER.
<a href="https://github.com/natasha/yargy">Yargy</a> — rule-based fact extraction similar to Tomita parser.
<a href="https://github.com/natasha/ipymarkup">Ipymarkup</a> — NLP visualizations for NER and syntax markups.
Consider using these lower level tools for real-world tasks. Natasha models are optimized for news articles; on other domains the quality may be worse.
End of explanation
text = 'Посол Израиля на Украине Йоэль Лион признался, что пришел в шок, узнав о решении властей Львовской области объявить 2019 год годом лидера запрещенной в России Организации украинских националистов (ОУН) Степана Бандеры. Свое заявление он разместил в Twitter. «Я не могу понять, как прославление тех, кто непосредственно принимал участие в ужасных антисемитских преступлениях, помогает бороться с антисемитизмом и ксенофобией. Украина не должна забывать о преступлениях, совершенных против украинских евреев, и никоим образом не отмечать их через почитание их исполнителей», — написал дипломат. 11 декабря Львовский областной совет принял решение провозгласить 2019 год в регионе годом Степана Бандеры в связи с празднованием 110-летия со дня рождения лидера ОУН (Бандера родился 1 января 1909 года). В июле аналогичное решение принял Житомирский областной совет. В начале месяца с предложением к президенту страны Петру Порошенко вернуть Бандере звание Героя Украины обратились депутаты Верховной Рады. Парламентарии уверены, что признание Бандеры национальным героем поможет в борьбе с подрывной деятельностью против Украины в информационном поле, а также остановит «распространение мифов, созданных российской пропагандой». Степан Бандера (1909-1959) был одним из лидеров Организации украинских националистов, выступающей за создание независимого государства на территориях с украиноязычным населением. В 2010 году в период президентства Виктора Ющенко Бандера был посмертно признан Героем Украины, однако впоследствии это решение было отменено судом. '
doc = Doc(text)
doc
Explanation: Getting started
Doc
Doc aggregates annotators, initially it has just text field defined:
End of explanation
doc.segment(segmenter)
display(doc)
display(doc.sents[:2])
display(doc.tokens[:5])
Explanation: After applying segmenter two new fields appear sents and tokens:
End of explanation
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
display(doc.tokens[:5])
Explanation: After applying morph_tagger and syntax_parser, tokens get 5 new fields id, pos, feats, head_id, rel — annotation in <a href="https://universaldependencies.org/">Universal Dependencies format</a>:
End of explanation
doc.tag_ner(ner_tagger)
display(doc.spans[:5])
Explanation: After applying ner_tagger doc gets spans field with PER, LOC, ORG annotation:
End of explanation
doc.ner.print()
sent = doc.sents[0]
sent.morph.print()
sent.syntax.print()
Explanation: Visualizations
Natasha wraps <a href="https://github.com/natasha/ipymarkup">Ipymarkup</a> to provide ASCII visualizations for morphology, syntax and NER annotations. doc and sents have 3 methods: morph.print(), syntax.print() and ner.print():
End of explanation
for token in doc.tokens:
token.lemmatize(morph_vocab)
{_.text: _.lemma for _ in doc.tokens[:10]}
Explanation: Lemmatization
Tokens have a lemmatize method; it uses the pos and feats assigned by morph_tagger to get the word's normal form. morph_vocab is just a wrapper for <a href="https://pymorphy2.readthedocs.io/en/latest/">Pymorphy2</a>:
End of explanation
for span in doc.spans:
span.normalize(morph_vocab)
{_.text: _.normal for _ in doc.spans}
Explanation: Phrase normalization
Consider the phrase "Организации украинских националистов": one cannot just inflect every word independently to get the normal form, since that would give "Организация украинский националист". Spans have a normalize method that uses the syntax annotation produced by syntax_parser to inflect whole phrases:
End of explanation
for span in doc.spans:
if span.type == PER:
span.extract_fact(names_extractor)
{_.normal: _.fact.as_dict for _ in doc.spans if _.fact}
Explanation: Fact extraction
To split names like "Виктор Ющенко", "Бандера" and "Йоэль Лион" into parts use names_extractor and spans method extract_fact:
End of explanation
tokens = list(segmenter.tokenize('Кружка-термос на 0.5л (50/64 см³, 516;...)'))
for token in tokens[:5]:
print(token)
text = '''
- "Так в чем же дело?" - "Не ра-ду-ют".
И т. д. и т. п. В общем, вся газета
'''
sents = list(segmenter.sentenize(text))
for sent in sents:
print(sent)
Explanation: Reference
One may use Natasha components independently. It is not mandatory to construct Doc object.
Segmenter
Segmenter is just a wrapper for <a href="https://github.com/natasha/razdel">Razdel</a>; it has two methods, tokenize and sentenize:
End of explanation
forms = morph_vocab('стали')
forms
morph_vocab.__call__.cache_info()
Explanation: MorphVocab
MorphVocab is a wrapper for <a href="https://pymorphy2.readthedocs.io/en/latest/">Pymorphy2</a>. MorphVocab adds a cache and adapts grammemes to the Universal Dependencies format:
End of explanation
morph_vocab.lemmatize('стали', 'VERB', {})
morph_vocab.lemmatize('стали', 'X', {'Case': 'Gen'})
Explanation: Also MorphVocab adds method lemmatize. Given pos and feats it selects the most suitable morph form and returns its normal field:
End of explanation
print('Words in vocab + 2 for pad and unk: %d' % len(emb.vocab.words) )
emb['навек'][:10]
Explanation: Embedding
Embedding is a wrapper for <a href="https://github.com/natasha/navec/">Navec</a> — compact pretrained word embeddings for Russian language:
End of explanation
words = ['Европейский', 'союз', 'добавил', 'в', 'санкционный', 'список', 'девять', 'политических', 'деятелей']
markup = morph_tagger(words)
markup.print()
Explanation: MorphTagger
MorphTagger wraps the <a href="https://github.com/natasha/slovnet">Slovnet morphology tagger</a>. The tagger takes a list of words as input and returns a markup object. The markup has a print method that outputs an ASCII visualization of the morph tags:
End of explanation
words = ['Европейский', 'союз', 'добавил', 'в', 'санкционный', 'список', 'девять', 'политических', 'деятелей']
markup = syntax_parser(words)
markup.print()
Explanation: SyntaxParser
SyntaxParser wraps the <a href="https://github.com/natasha/slovnet">Slovnet syntax parser</a>. The interface is similar to MorphTagger:
End of explanation
text = 'Посол Израиля на Украине Йоэль Лион признался, что пришел в шок, узнав о решении властей Львовской области объявить 2019 год годом лидера запрещенной в России Организации украинских националистов (ОУН) Степана Бандеры. Свое заявление он разместил в Twitter. 11 декабря Львовский областной совет принял решение провозгласить 2019 год в регионе годом Степана Бандеры в связи с празднованием 110-летия со дня рождения лидера ОУН (Бандера родился 1 января 1909 года).'
markup = ner_tagger(text)
markup.print()
Explanation: NERTagger
NERTagger wraps the <a href="https://github.com/natasha/slovnet">Slovnet NER tagger</a>. The interface is similar to MorphTagger, but it takes untokenized text as input:
End of explanation
text = '24.01.2017, 2015 год, 2014 г, 1 апреля, май 2017 г., 9 мая 2017 года'
list(dates_extractor(text))
Explanation: Extractor
In addition to names_extractor, Natasha bundles several other extractors: dates_extractor, money_extractor and addr_extractor. All extractors are based on the <a href="https://github.com/natasha/yargy">Yargy parser</a>, meaning that they work correctly only on a small predefined set of texts. For real-world tasks consider writing your own grammar; see the <a href="https://github.com/natasha/yargy#documentation">Yargy docs</a> for more.
DatesExtractor
End of explanation
text = '1 599 059, 38 Евро, 420 долларов, 20 млн руб, 20 т. р., 881 913 (Восемьсот восемьдесят одна тысяча девятьсот тринадцать) руб. 98 коп.'
list(money_extractor(text))
Explanation: MoneyExtractor
End of explanation
lines = [
'Мустафа Джемилев',
'О. Дерипаска',
'Фёдор Иванович Шаляпин',
'Янукович'
]
for line in lines:
display(names_extractor.find(line))
Explanation: NamesExtractor
names_extractor should be applied only to spans of text. To extract a single fact, use the find method:
End of explanation
lines = [
'Россия, Вологодская обл. г. Череповец, пр.Победы 93 б',
'692909, РФ, Приморский край, г. Находка, ул. Добролюбова, 18',
'ул. Народного Ополчения д. 9к.3'
]
for line in lines:
display(addr_extractor.find(line))
Explanation: AddrExtractor
End of explanation |
13,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis
PCA is a dimensionality reduction technique; it lets you distill multi-dimensional data down to fewer dimensions, selecting new dimensions that preserve variance in the data as best it can.
We're not talking about Star Trek stuff here; let's make it real - a black & white image, for example, contains three dimensions of data
Step1: So, this tells us our data set has 150 samples (individual flowers) in it. It has 4 dimensions - called features here, and three distinct Iris species that each flower is classified into.
While we can visualize 2 or even 3 dimensions of data pretty easily, visualizing 4D data isn't something our brains can do. So let's distill this down to 2 dimensions, and see how well it works
Step2: What we have done is distill our 4D data set down to 2D, by projecting it down to two orthogonal 4D vectors that make up the basis of our new 2D projection. We can see what those 4D vectors are, although it's not something you can really wrap your head around
Step3: Let's see how much information we've managed to preserve
Step4: That's pretty cool. Although we have thrown away two of our four dimensions, PCA has chosen the remaining two dimensions well enough that we've captured 92% of the variance in our data in a single dimension alone! The second dimension just gives us an additional 5%; altogether we've only really lost less than 3% of the variance in our data by projecting it down to two dimensions.
As promised, now that we have a 2D representation of our data, we can plot it | Python Code:
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import pylab as pl
from itertools import cycle
iris = load_iris()
numSamples, numFeatures = iris.data.shape
print(numSamples)
print(numFeatures)
print(list(iris.target_names))
Explanation: Principal Component Analysis
PCA is a dimensionality reduction technique; it lets you distill multi-dimensional data down to fewer dimensions, selecting new dimensions that preserve variance in the data as best it can.
We're not talking about Star Trek stuff here; let's make it real - a black & white image, for example, contains three dimensions of data: X position, Y position, and brightness at each point. Distilling that down to two dimensions can be useful for things like image compression and facial recognition, because it distills out the information that contributes most to the variance in the data set.
Let's do this with a simpler example: the Iris data set that comes with scikit-learn. It's just a small collection of data that has four dimensions of data for three different kinds of Iris flowers: The length and width of both the petals and sepals of many individual flowers from each species. Let's load it up and have a look:
End of explanation
X = iris.data
pca = PCA(n_components=2, whiten=True).fit(X)
X_pca = pca.transform(X)
Explanation: So, this tells us our data set has 150 samples (individual flowers) in it. It has 4 dimensions - called features here, and three distinct Iris species that each flower is classified into.
While we can visualize 2 or even 3 dimensions of data pretty easily, visualizing 4D data isn't something our brains can do. So let's distill this down to 2 dimensions, and see how well it works:
End of explanation
print(pca.components_)
Explanation: What we have done is distill our 4D data set down to 2D, by projecting it down to two orthogonal 4D vectors that make up the basis of our new 2D projection. We can see what those 4D vectors are, although it's not something you can really wrap your head around:
End of explanation
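A quick numerical check of the statement above, using the pca object fitted earlier: the two principal axes are orthogonal, unit-length 4D vectors.
import numpy as np
print(np.dot(pca.components_[0], pca.components_[1]))   # ~0: the two axes are orthogonal
print(np.linalg.norm(pca.components_, axis=1))          # ~[1. 1.]: each axis has unit length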
print(pca.explained_variance_ratio_)
print(sum(pca.explained_variance_ratio_))
Explanation: Let's see how much information we've managed to preserve:
End of explanation
%matplotlib inline
from pylab import *
colors = cycle('rgb')
target_ids = range(len(iris.target_names))
pl.figure()
for i, c, label in zip(target_ids, colors, iris.target_names):
pl.scatter(X_pca[iris.target == i, 0], X_pca[iris.target == i, 1],
c=c, label=label)
pl.legend()
pl.show()
Explanation: That's pretty cool. Although we have thrown away two of our four dimensions, PCA has chosen the remaining two dimensions well enough that we've captured 92% of the variance in our data in a single dimension alone! The second dimension just gives us an additional 5%; altogether we've only really lost less than 3% of the variance in our data by projecting it down to two dimensions.
As promised, now that we have a 2D representation of our data, we can plot it:
End of explanation |
13,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOTES
Step1: Motor
Lin Engineering
http
Step2: ASI Controller
Applied Scientific Instrumentation
http
Step3: Autosipper
Step4: Communication | Python Code:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display
import ipywidgets as widgets
from __future__ import division
%matplotlib notebook
Explanation: NOTES:
Waiting vs blocking
--> blocking holds up everything (could be selective?)
--> waiting for specific resources to reach inactive state (flags?)
Platemap vs positionmap
Axes orientation
TODO:
tip touch
get motor current position
tip touch
calibration
initialization reference
GUI
pyVISA
End of explanation
import serial as s
import time
import yaml
# TODO: get current position for relative move
class Motor:
def __init__(self, config_file, init=True):
self.serial = s.Serial() # placeholder
f = open(config_file, 'r')
self.config = yaml.load(f)
f.close()
if init:
self.initialize()
def initialize(self):
self.serial = s.Serial(**self.config['serial']) # open serial connection
# TODO set moving current
# TODO set holding current
self.set_velocity(self.config['velocity_limit']) # set velocity
self.home() # move motor to home
def cmd(self, cmd_string, block=True):
full_string = self.config['prefix'] + cmd_string + self.config['terminator']
self.serial.write(full_string)
time.sleep(0.15) # TODO: monitor for response?
response = self.serial.read(self.serial.inWaiting()).decode('utf8', 'ignore')
while block and self.is_busy():
pass
return response
def is_busy(self):
cmd_string = 'Q'
time.sleep(0.05)
response = self.cmd(cmd_string, False)
return response.rfind('`') == -1
# velocity: (usteps/sec)
def set_velocity(self, velocity):
if velocity > self.config['velocity_limit']:
velocity = self.config['velocity_limit']
print 'ERR: Desired velocity exceeds velocity_limit; velocity now set to velocity_limit'
cmd_string = 'V{}R'.format(velocity)
return self.cmd(cmd_string)
def halt(self):
cmd_string = 'T'
self.cmd(cmd_string)
def home(self):
cmd_string = 'Z{}R'.format(self.config['ustep_max'])
return self.cmd(cmd_string)
def move(self, mm, block=True):
ustep = int(self.config['conv']*mm)
if ustep > self.config['ustep_max']:
ustep = self.config['ustep_max']
print 'ERR: Desired move to {} mm exceeds max of {} mm; moving to max instead'.format(mm, self.config['ustep_max']/self.config['conv'])
if ustep < self.config['ustep_min']:
ustep = self.config['ustep_min']
print 'ERR: Desired move to {} mm exceeds min of {} mm; moving to min instead'.format(mm, self.config['ustep_min']/self.config['conv'])
cmd_string = 'A{}R'.format(ustep)
return self.cmd(cmd_string, block)
def move_relative(self, mm):
ustep = int(self.config['conv']*mm)
ustep_current = int(self.config['ustep_max']/2) # TODO: limit movement (+ and -)
if mm >= 0:
if (ustep_current + ustep) > self.config['ustep_max']:
ustep = self.config['ustep_max'] - ustep_current
print 'ERR: Desired move of +{} mm exceeds max of {} mm; moving to max instead'.format(mm, self.config['ustep_max']/self.config['conv'])
cmd_string = 'P{}R'.format(ustep)
else:
if (ustep_current + ustep) < self.config['ustep_min']:
ustep = self.config['ustep_min'] - ustep_current
print 'ERR: Desired move of {} mm exceeds min of {} mm; moving to min instead'.format(mm, self.config['ustep_min']/self.config['conv'])
ustep = -1*ustep
cmd_string = 'D{}R'.format(ustep)
return self.cmd(cmd_string)
def where(self):
cmd_string = '?0'
ustep = self.cmd(cmd_string)
        return float(ustep) / self.config['conv']
def exit(self):
self.serial.close()
m = Motor('config/le_motor.yaml')
m.serial.write('/1Q\r')
time.sleep(0.5)
m.serial.read(m.serial.inWaiting())
m.cmd('Z1000R')
print m.move(32)
time.sleep(1)
print m.move(20)
print m.cmd('P100000D100000P100000D100000P100000D100000P100000D100000R')
print m.cmd('/1?0')
m.exit()
Explanation: Motor
Lin Engineering
http://www.linengineering.com/wp-content/uploads/downloads/Silverpak_17C/documentation/Lin_Command_Manual.pdf
Determine appropriate velocity_max = microsteps/sec
Determine motor limits
Determine conv = microsteps/mm
Determine orientation (P+; D-)
End of explanation
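For reference, a hypothetical sketch of what config/le_motor.yaml could contain. The keys are the ones the Motor class reads above ('serial', 'prefix', 'terminator', 'velocity_limit', 'ustep_min', 'ustep_max', 'conv'), but every value below is a placeholder that must be replaced with numbers determined for the actual hardware:
import yaml

example_motor_config = {
    'serial': {'port': 'COM3', 'baudrate': 9600, 'timeout': 0.1},  # pyserial kwargs (placeholders)
    'prefix': '/1',             # command prefix, as seen in the raw '/1...' commands above
    'terminator': '\r',
    'velocity_limit': 200000,   # usteps/sec (placeholder)
    'ustep_min': 0,             # travel limits in usteps (placeholders)
    'ustep_max': 2000000,
    'conv': 50000,              # usteps per mm (placeholder)
}
print(yaml.dump(example_motor_config, default_flow_style=False))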
import serial as s
import time
import yaml
# TODO: Fix serial.read encoding
class ASI_Controller:
def __init__(self, config_file, init=True):
self.serial = s.Serial() # placeholder
f = open(config_file, 'r')
self.config = yaml.load(f)
f.close()
if init:
self.initialize()
def initialize(self):
self.serial = s.Serial(**self.config['serial']) # open serial connection
self.cmd_xy('mc x+ y+') # enable motor control for xy
self.cmd_z('mc z+') # enable motor control for z
print "Initializing stage..."
self.move_xy(2000, -2000) # move to switch limits (bottom right)
self.r_xy(-0.5, 0.5) # move from switch limits 0.5 mm
def cmd(self, cmd_string):
full_string = self.config['prefix'] + cmd_string + self.config['terminator']
self.serial.write(full_string)
time.sleep(0.05)
response = self.serial.read(self.serial.inWaiting())
return response
def halt(self):
self.halt_xy()
self.halt_z()
# XY ----------------------------------------------
def cmd_xy(self, cmd_string, block=True):
full_string = '2h ' + cmd_string
response = self.cmd(full_string)
while block and self.is_busy_xy():
time.sleep(0.05)
pass
return response
def is_busy_xy(self):
status = self.cmd('2h STATUS')[0]
return status == 'B'
def halt_xy(self):
self.cmd_xy('HALT', False)
def where_xy(self):
response = self.cmd_xy('WHERE X Y')
        if 'A' in response:
pos_xy = response.split()[1:3]
pos_x = float(pos_xy[0])
pos_y = float(pos_xy[1])
return pos_x, pos_y
else:
return None, None
def move_xy(self, x_mm, y_mm):
conv = self.config['conv']
xStr = 'x=' + str(float(x_mm) * conv)
yStr = 'y=' + str(float(y_mm) * conv)
return self.cmd_xy(' '.join(['m', xStr, yStr]))
def r_xy(self, x_mm, y_mm):
conv = self.config['conv']
xStr = 'x=' + str(float(x_mm) * conv)
yStr = 'y=' + str(float(y_mm) * conv)
return self.cmd_xy(' '.join(['r', xStr, yStr]))
# Z -----------------------------------------------
def cmd_z(self, cmd_string, block=True):
while block and self.is_busy_z():
time.sleep(0.3)
full_string = '1h ' + cmd_string
return self.cmd(full_string)
def is_busy_z(self):
status = self.cmd('1h STATUS')
return status[0] == 'B'
def halt_z(self):
self.cmd_z('HALT', False)
def where_z(self):
response = self.cmd_z('WHERE Z')
        if 'A' in response:
            pos_z = float(response.split()[1])
return pos_z
else:
return None
def move_z(self, z_mm):
conv = self.config['conv']
zStr = 'z=' + str(float(z_mm) * conv)
return self.cmd_z(' '.join(['m', zStr]))
def r_z(self, z_mm):
conv = self.config['conv']
zStr = 'z=' + str(float(z_mm) * conv)
return self.cmd_z(' '.join(['r', zStr]))
def exit(self):
self.serial.close()
a = ASI_Controller('config/asi_controller.yaml')
a.exit()
Explanation: ASI Controller
Applied Scientific Instrumentation
http://www.asiimaging.com/downloads/manuals/Operations_and_Programming_Manual.pdf
Set hall effect sensors to appropriate limits
Determine orientation (X+-, Y+-)
End of explanation
from utils import lookup, read_delim_pd
import numpy as np
class Autosipper:
def __init__(self, z, xy):
self.Z = z # must be initialized first!
self.XY = xy
while True:
fp = raw_input('Type in plate map file:')
try:
self.load_platemap(fp) # load platemap
break
except IOError:
print 'No file', fp
raw_input('Place dropper above reference (press enter when done)')
self.XY.cmd_xy('here x y') # establish current position as 0,0
def load_platemap(self, filepath):
self.platemap = read_delim_pd(filepath)
def go_to(self, columns, values):
x1,y1,z1 = np.array(lookup(self.platemap, columns, values)[['x','y','z']])[0]
self.Z.home() # move needle to travel height (blocking)
self.XY.move_xy(x1,y1) # move stage (blocking)
self.Z.move(z1) # move needle to bottom of well (blocking)
def where(self):
pos_x, pos_y = XY.where_xy()
pos_z = Z.where()
return pos_x, pos_y, pos_z
def exit(self):
self.XY.exit()
self.Z.exit()
d = Autosipper(Motor('config/le_motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
d.platemap
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(d.platemap['x'], d.platemap['y'], d.platemap['z'], s=5)
plt.show()
d.Z.home()
d.XY.r_xy(0,5)
d.go_to(['name'],'A12')
d.exit()
Explanation: Autosipper
End of explanation
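The plate map file itself is not shown here; from the way load_platemap, lookup and go_to are used above, it is assumed to be a delimited table with at least 'name', 'x', 'y' and 'z' columns (well name plus stage and needle coordinates in mm). A hypothetical sketch with placeholder numbers:
import pandas as pd

example_platemap = pd.DataFrame({
    'name': ['A1', 'A2', 'A12'],   # well names, as used by go_to(['name'], 'A12')
    'x': [0.0, 9.0, 99.0],         # stage coordinates in mm (placeholders)
    'y': [0.0, 0.0, 0.0],
    'z': [30.0, 30.0, 30.0],       # needle depth in mm (placeholder)
})
print(example_platemap)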
import visa
rm = visa.ResourceManager()
rm.list_resources()
rm.list_resources_info()
Explanation: Communication: PyVISA
Install NI-VISA:
https://pyvisa.readthedocs.io/en/stable/getting_nivisa.html
End of explanation |
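Once the resource list above shows your instrument, a minimal hedged sketch of talking to it with PyVISA. The resource name is a placeholder copied from whatever list_resources() returned, and the *IDN? query assumes a SCPI-speaking device:
inst = rm.open_resource('GPIB0::12::INSTR')   # placeholder resource name
print(inst.query('*IDN?'))                    # standard SCPI identification query
inst.close()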
13,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TF-Slim Walkthrough
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
Table of contents
<a href="#Install">Installation and setup</a><br>
<a href='#MLP'>Creating your first neural network with TF-Slim</a><br>
<a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br>
<a href='#CNN'>Training a convolutional neural network (CNN)</a><br>
<a href='#Pretrained'>Using pre-trained models</a><br>
Installation and setup
<a id='Install'></a>
Since the stable release of TF 1.0, the latest version of slim has been available as tf.contrib.slim.
To test that your installation is working, execute the following command; it should run without raising any errors.
python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once"
Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/research/slim before running this notebook, so that these files are in your python path.
To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.
Step2: Creating your first neural network with TF-Slim
<a id='MLP'></a>
Below we give some code to create a simple multilayer perceptron (MLP) which can be used
for regression problems. The model has 2 hidden layers.
The output is a single node.
When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.)
We use variable scope to put all the nodes under a common name,
so that the graph has some hierarchical structure.
This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related
variables.
The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.)
We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time,
we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being
constructed for training or testing, since the computational graph will be different in the two cases
(although the variables, storing the model parameters, will be shared, since they have the same name/scope).
Step3: Let's create the model and examine its structure.
We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified.
Step4: Let's create some 1d regression data.
We will train and test the model on some noisy observations of a nonlinear function.
Step5: Let's fit the model to the data
The user has to specify the loss function and the optimizer, and slim does the rest.
In particular, the slim.learning.train function does the following
Step6: Training with multiple loss functions.
Sometimes we have multiple objectives we want to simultaneously optimize.
In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example,
but we show how to compute it.)
Step7: Let's load the saved model and use it for prediction.
Step8: Let's compute various evaluation metrics on the test set.
In TF-Slim terminology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set.
Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries.
After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is useful for large datasets.) Finally, we print the final value of each metric.
Step9: Reading Data with TF-Slim
<a id='ReadingTFSlimDatasets'></a>
Reading data with TF-Slim has two main components
Step10: Display some of the data.
Step11: Convolutional neural nets (CNNs).
<a id='CNN'></a>
In this section, we show how to train an image classifier using a simple CNN.
Define the model.
Below we define a simple CNN. Note that the output layer is a linear function - we will apply the softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing).
Step12: Apply the model to some randomly generated images.
Step14: Train the model on the Flowers dataset.
Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in
learning.py. First, we'll create a function, load_batch, that loads batches of data from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results.
Step15: Evaluate some metrics.
As we discussed above, we can compute various metrics besides the loss.
Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.)
Step16: Using pre-trained models
<a id='Pretrained'></a>
Neural nets work best when they have many parameters, making them very flexible function approximators.
However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here.
You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.
Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provided has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models
Step17: Apply Pre-trained Inception V1 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this.
Step18: Download the VGG-16 checkpoint
Step19: Apply Pre-trained VGG-16 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001.
Step21: Fine-tune the model on a different set of labels.
We will fine tune the inception model on the Flowers dataset.
Step22: Apply fine tuned model to some images. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import math
import numpy as np
import tensorflow as tf
import time
from datasets import dataset_utils
# Main slim library
from tensorflow.contrib import slim
Explanation: TF-Slim Walkthrough
This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks.
Table of contents
<a href="#Install">Installation and setup</a><br>
<a href='#MLP'>Creating your first neural network with TF-Slim</a><br>
<a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br>
<a href='#CNN'>Training a convolutional neural network (CNN)</a><br>
<a href='#Pretrained'>Using pre-trained models</a><br>
Installation and setup
<a id='Install'></a>
Since the stable release of TF 1.0, the latest version of slim has been available as tf.contrib.slim.
To test that your installation is working, execute the following command; it should run without raising any errors.
python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once"
Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/research/slim before running this notebook, so that these files are in your python path.
To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.
End of explanation
def regression_model(inputs, is_training=True, scope="deep_regression"):
Creates the regression model.
Args:
inputs: A node that yields a `Tensor` of size [batch_size, dimensions].
is_training: Whether or not we're currently training the model.
scope: An optional variable_op scope for the model.
Returns:
predictions: 1-D `Tensor` of shape [batch_size] of responses.
end_points: A dict of end points representing the hidden layers.
with tf.variable_scope(scope, 'deep_regression', [inputs]):
end_points = {}
# Set the default weight _regularizer and acvitation for each fully_connected layer.
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(0.01)):
# Creates a fully connected layer from the inputs with 32 hidden units.
net = slim.fully_connected(inputs, 32, scope='fc1')
end_points['fc1'] = net
# Adds a dropout layer to prevent over-fitting.
net = slim.dropout(net, 0.8, is_training=is_training)
# Adds another fully connected layer with 16 hidden units.
net = slim.fully_connected(net, 16, scope='fc2')
end_points['fc2'] = net
# Creates a fully-connected layer with a single hidden unit. Note that the
# layer is made linear by setting activation_fn=None.
predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction')
end_points['out'] = predictions
return predictions, end_points
Explanation: Creating your first neural network with TF-Slim
<a id='MLP'></a>
Below we give some code to create a simple multilayer perceptron (MLP) which can be used
for regression problems. The model has 2 hidden layers.
The output is a single node.
When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.)
We use variable scope to put all the nodes under a common name,
so that the graph has some hierarchical structure.
This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related
variables.
The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.)
We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time,
we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being
constructed for training or testing, since the computational graph will be different in the two cases
(although the variables, storing the model parameters, will be shared, since they have the same name/scope).
End of explanation
with tf.Graph().as_default():
# Dummy placeholders for arbitrary number of 1d inputs and outputs
inputs = tf.placeholder(tf.float32, shape=(None, 1))
outputs = tf.placeholder(tf.float32, shape=(None, 1))
# Build model
predictions, end_points = regression_model(inputs)
# Print name and shape of each tensor.
print("Layers")
for k, v in end_points.items():
print('name = {}, shape = {}'.format(v.name, v.get_shape()))
# Print name and shape of parameter nodes (values not yet initialized)
print("\n")
print("Parameters")
for v in slim.get_model_variables():
print('name = {}, shape = {}'.format(v.name, v.get_shape()))
Explanation: Let's create the model and examine its structure.
We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified.
End of explanation
def produce_batch(batch_size, noise=0.3):
xs = np.random.random(size=[batch_size, 1]) * 10
ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise)
return [xs.astype(np.float32), ys.astype(np.float32)]
x_train, y_train = produce_batch(200)
x_test, y_test = produce_batch(200)
plt.scatter(x_train, y_train)
Explanation: Let's create some 1d regression data.
We will train and test the model on some noisy observations of a nonlinear function.
End of explanation
def convert_data_to_tensors(x, y):
inputs = tf.constant(x)
inputs.set_shape([None, 1])
outputs = tf.constant(y)
outputs.set_shape([None, 1])
return inputs, outputs
# The following snippet trains the regression model using a mean_squared_error loss.
ckpt_dir = '/tmp/regression_model/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
inputs, targets = convert_data_to_tensors(x_train, y_train)
# Make the model.
predictions, nodes = regression_model(inputs, is_training=True)
# Add the loss function to the graph.
loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
# The total loss is the user's loss plus any regularization losses.
total_loss = slim.losses.get_total_loss()
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training inside a session.
final_loss = slim.learning.train(
train_op,
logdir=ckpt_dir,
number_of_steps=5000,
save_summaries_secs=5,
log_every_n_steps=500)
print("Finished training. Last batch loss:", final_loss)
print("Checkpoint saved in %s" % ckpt_dir)
Explanation: Let's fit the model to the data
The user has to specify the loss function and the optimizer, and slim does the rest.
In particular, the slim.learning.train function does the following:
For each iteration, evaluate the train_op, which updates the parameters using the optimizer applied to the current minibatch. Also, update the global_step.
Occasionally store the model checkpoint in the specified directory. This is useful in case your machine crashes - then you can simply restart from the specified checkpoint.
End of explanation
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_train, y_train)
predictions, end_points = regression_model(inputs, is_training=True)
# Add multiple loss nodes.
mean_squared_error_loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
absolute_difference_loss = slim.losses.absolute_difference(predictions, targets)
# The following two ways to compute the total loss are equivalent
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = mean_squared_error_loss + absolute_difference_loss + regularization_loss
# Regularization Loss is included in the total loss by default.
# This is good for training, but not for testing.
total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op) # Will initialize the parameters with random weights.
total_loss1, total_loss2 = sess.run([total_loss1, total_loss2])
print('Total Loss1: %f' % total_loss1)
print('Total Loss2: %f' % total_loss2)
print('Regularization Losses:')
for loss in slim.losses.get_regularization_losses():
print(loss)
print('Loss Functions:')
for loss in slim.losses.get_losses():
print(loss)
Explanation: Training with multiple loss functions.
Sometimes we have multiple objectives we want to simultaneously optimize.
In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example,
but we show how to compute it.)
End of explanation
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
# Create the model structure. (Parameters will be loaded below.)
predictions, end_points = regression_model(inputs, is_training=False)
# Make a session which restores the old parameters from a checkpoint.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
inputs, predictions, targets = sess.run([inputs, predictions, targets])
plt.scatter(inputs, targets, c='r');
plt.scatter(inputs, predictions, c='b');
plt.title('red=true, blue=predicted')
Explanation: Let's load the saved model and use it for prediction.
End of explanation
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
predictions, end_points = regression_model(inputs, is_training=False)
# Specify metrics to evaluate:
names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({
'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets),
'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets)
})
# Make a session which restores the old graph parameters, and then run eval.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
metric_values = slim.evaluation.evaluation(
sess,
num_evals=1, # Single pass over data
eval_op=names_to_update_nodes.values(),
final_op=names_to_value_nodes.values())
names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values))
for key, value in names_to_values.items():
print('%s: %f' % (key, value))
Explanation: Let's compute various evaluation metrics on the test set.
In TF-Slim terminology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set.
Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries.
After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is useful for large datasets.) Finally, we print the final value of each metric.
End of explanation
import tensorflow as tf
from datasets import dataset_utils
url = "http://download.tensorflow.org/data/flowers.tar.gz"
flowers_data_dir = '/tmp/flowers'
if not tf.gfile.Exists(flowers_data_dir):
tf.gfile.MakeDirs(flowers_data_dir)
dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir)
Explanation: Reading Data with TF-Slim
<a id='ReadingTFSlimDatasets'></a>
Reading data with TF-Slim has two main components: A
Dataset and a
DatasetDataProvider. The former is a descriptor of a dataset, while the latter performs the actions necessary for actually reading the data. Let's look at each one in detail:
Dataset
A TF-Slim
Dataset
contains descriptive information about a dataset necessary for reading it, such as the list of data files and how to decode them. It also contains metadata including class labels, the size of the train/test splits and descriptions of the tensors that the dataset provides. For example, some datasets contain images with labels. Others augment this data with bounding box annotations, etc. The Dataset object allows us to write generic code using the same API, regardless of the data content and encoding type.
TF-Slim's Dataset works especially well when the data is stored as a (possibly sharded)
TFRecords file, where each record contains a tf.train.Example protocol buffer.
TF-Slim uses a consistent convention for naming the keys and values inside each Example record.
DatasetDataProvider
A
DatasetDataProvider is a class which actually reads the data from a dataset. It is highly configurable to read the data in various ways that may make a big impact on the efficiency of your training process. For example, it can be single or multi-threaded. If your data is sharded across many files, it can read each file serially, or read from every file simultaneously.
Demo: The Flowers Dataset
For convenience, we've included scripts to convert several common image datasets into TFRecord format and have provided
the Dataset descriptor files necessary for reading them. We demonstrate how easy it is to use these datasets via the Flowers dataset below.
Download the Flowers Dataset
<a id='DownloadFlowers'></a>
We've made available a tarball of the Flowers dataset which has already been converted to TFRecord format.
End of explanation
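To make the "consistent convention for naming the keys" above concrete, here is a hedged sketch of the kind of tf.train.Example record these TFRecord files hold. The key names follow the convention used by the TF-Slim dataset conversion scripts; treat them, the helper name and the fake bytes as assumptions if you build your own files:
import tensorflow as tf

def image_to_tfexample_sketch(encoded_image_bytes, image_format, class_label):
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_image_bytes])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_format])),
        'image/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[class_label])),
    }))

example = image_to_tfexample_sketch(b'...jpeg bytes...', b'jpg', 3)
print(sorted(example.features.feature.keys()))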
from datasets import flowers
import tensorflow as tf
from tensorflow.contrib import slim
with tf.Graph().as_default():
dataset = flowers.get_split('train', flowers_data_dir)
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32, common_queue_min=1)
image, label = data_provider.get(['image', 'label'])
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
for i in range(4):
np_image, np_label = sess.run([image, label])
height, width, _ = np_image.shape
class_name = name = dataset.labels_to_names[np_label]
plt.figure()
plt.imshow(np_image)
plt.title('%s, %d x %d' % (name, height, width))
plt.axis('off')
plt.show()
Explanation: Display some of the data.
End of explanation
def my_cnn(images, num_classes, is_training): # is_training is not used...
with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2):
net = slim.conv2d(images, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.conv2d(net, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.flatten(net)
net = slim.fully_connected(net, 192)
net = slim.fully_connected(net, num_classes, activation_fn=None)
return net
Explanation: Convolutional neural nets (CNNs).
<a id='CNN'></a>
In this section, we show how to train an image classifier using a simple CNN.
Define the model.
Below we define a simple CNN. Note that the output layer is a linear function - we will apply the softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing).
End of explanation
import numpy as np
import tensorflow as tf
with tf.Graph().as_default():
# The model can handle any input size because the first layer is convolutional.
# The size of the model is determined when image_node is first passed into the my_cnn function.
# Once the variables are initialized, the size of all the weight matrices is fixed.
# Because of the fully connected layers, this means that all subsequent images must have the same
# input size as the first image.
batch_size, height, width, channels = 3, 28, 28, 3
images = tf.random_uniform([batch_size, height, width, channels], maxval=1)
# Create the model.
num_classes = 10
logits = my_cnn(images, num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
# Initialize all the variables (including parameters) randomly.
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
# Run the init_op, evaluate the model outputs and print the results:
sess.run(init_op)
probabilities = sess.run(probabilities)
print('Probabilities Shape:')
print(probabilities.shape) # batch_size x num_classes
print('\nProbabilities:')
print(probabilities)
print('\nSumming across all classes (Should equal 1):')
print(np.sum(probabilities, 1)) # Each row sums to 1
Explanation: Apply the model to some randomly generated images.
End of explanation
from preprocessing import inception_preprocessing
import tensorflow as tf
from tensorflow.contrib import slim
def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False):
"""Loads a single batch of data.
Args:
dataset: The dataset to load.
batch_size: The number of images in the batch.
height: The size of each image after preprocessing.
width: The size of each image after preprocessing.
is_training: Whether or not we're currently training or evaluating.
Returns:
images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed.
images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization.
labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes.
"""
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32,
common_queue_min=8)
image_raw, label = data_provider.get(['image', 'label'])
# Preprocess image for usage by Inception.
image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training)
# Preprocess the image for display purposes.
image_raw = tf.expand_dims(image_raw, 0)
image_raw = tf.image.resize_images(image_raw, [height, width])
image_raw = tf.squeeze(image_raw)
# Batch it up.
images, images_raw, labels = tf.train.batch(
[image, image_raw, label],
batch_size=batch_size,
num_threads=1,
capacity=2 * batch_size)
return images, images_raw, labels
from datasets import flowers
# This might take a few minutes.
train_dir = '/tmp/tfslim_model/'
print('Will save model to %s' % train_dir)
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
# Create the model:
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
number_of_steps=1, # For speed, we just do 1 step
save_summaries_secs=1)
print('Finished training. Final batch loss %f' % final_loss)
Explanation: Train the model on the Flowers dataset.
Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in
learning.py. First, we'll create a function, load_batch, that loads batches of data from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results.
End of explanation
from datasets import flowers
# This might take a few minutes.
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.DEBUG)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False)
predictions = tf.argmax(logits, 1)
# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
})
print('Running evaluation Loop...')
checkpoint_path = tf.train.latest_checkpoint(train_dir)
metric_values = slim.evaluation.evaluate_once(
master='',
checkpoint_path=checkpoint_path,
logdir=train_dir,
eval_op=names_to_updates.values(),
final_op=names_to_values.values())
names_to_values = dict(zip(names_to_values.keys(), metric_values))
for name in names_to_values:
print('%s: %f' % (name, names_to_values[name]))
Explanation: Evaluate some metrics.
As we discussed above, we can compute various metrics besides the loss.
Below we show how to compute the prediction accuracy of the trained model, as well as the top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in TensorBoard.)
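For reference, a hedged sketch of the evaluation_loop variant (argument names as in TF 1.x slim.evaluation; train_dir and names_to_updates are assumed to come from the cell above):
slim.evaluation.evaluation_loop(
    master='',
    checkpoint_dir=train_dir,                  # watched for new checkpoints
    logdir=train_dir,                          # summaries written here for TensorBoard
    num_evals=10,                              # number of batches evaluated per pass
    eval_op=list(names_to_updates.values()),
    eval_interval_secs=60)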
End of explanation
from datasets import dataset_utils
url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
Explanation: Using pre-trained models
<a id='Pretrained'></a>
Neural nets work best when they have many parameters, making them very flexible function approximators.
However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here.
You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.
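A compact sketch of that surgery (my own illustration, not the notebook's code; it assumes images, inception and slim as used in the cells below, and relies on slim.get_variables_to_restore(exclude=...)):
with slim.arg_scope(inception.inception_v1_arg_scope()):
    # New head: 5 flower classes instead of the original 1001 ImageNet classes.
    logits, _ = inception.inception_v1(images, num_classes=5, is_training=True)

# Restore every pre-trained variable except the old classification head.
variables_to_restore = slim.get_variables_to_restore(
    exclude=['InceptionV1/Logits', 'InceptionV1/AuxLogits'])
init_fn = slim.assign_from_checkpoint_fn(
    '/tmp/checkpoints/inception_v1.ckpt', variables_to_restore)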
Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provided has an empty background class which can be used to fine-tune the model to other tasks. The VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models, Inception V1 and VGG-16, to highlight this difference.
Download the Inception V1 checkpoint
End of explanation
import numpy as np
import os
import tensorflow as tf
try:
import urllib2 as urllib
except ImportError:
import urllib.request as urllib
from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
image_string = urllib.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index]))
Explanation: Apply Pre-trained Inception V1 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this.
End of explanation
from datasets import dataset_utils
import tensorflow as tf
url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
Explanation: Download the VGG-16 checkpoint
End of explanation
import numpy as np
import os
import tensorflow as tf
try:
import urllib2 as urllib
except ImportError:
import urllib.request as urllib
from datasets import imagenet
from nets import vgg
from preprocessing import vgg_preprocessing
from tensorflow.contrib import slim
image_size = vgg.vgg_16.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg'
image_string = urllib.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = vgg_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
# 1000 classes instead of 1001.
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
slim.get_model_variables('vgg_16'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
# Shift the index of a class name by one.
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index+1]))
Explanation: Apply Pre-trained VGG-16 model to Images.
We have to convert each image to the size expected by the model checkpoint.
There is no easy way to determine this size from the checkpoint itself.
So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001.
End of explanation
# Note that this may take several minutes.
import os
from datasets import flowers
from nets import inception
from preprocessing import inception_preprocessing
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
def get_init_fn():
"""Returns a function run by the chief worker to warm-start the training."""
checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"]
exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]
variables_to_restore = []
for var in slim.get_model_variables():
for exclusion in exclusions:
if var.op.name.startswith(exclusion):
break
else:
variables_to_restore.append(var)
return slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
variables_to_restore)
train_dir = '/tmp/inception_finetuned/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
init_fn=get_init_fn(),
number_of_steps=2)
print('Finished training. Last batch loss %f' % final_loss)
Explanation: Fine-tune the model on a different set of labels.
We will fine tune the inception model on the Flowers dataset.
End of explanation
import numpy as np
import tensorflow as tf
from datasets import flowers
from nets import inception
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
batch_size = 3
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
checkpoint_path = tf.train.latest_checkpoint(train_dir)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_path,
slim.get_variables_to_restore())
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
sess.run(tf.initialize_local_variables())
init_fn(sess)
np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels])
for i in range(batch_size):
image = np_images_raw[i, :, :, :]
true_label = np_labels[i]
predicted_label = np.argmax(np_probabilities[i, :])
predicted_name = dataset.labels_to_names[predicted_label]
true_name = dataset.labels_to_names[true_label]
plt.figure()
plt.imshow(image.astype(np.uint8))
plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name))
plt.axis('off')
plt.show()
Explanation: Apply fine tuned model to some images.
End of explanation |
13,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: So we have the same set of files in both versions
Step3: Let's make sure the structure hasn't changed
Step4: All files have the same columns as before
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
Step6: Alright, so the only change seems to be 17 new jobs added. Let's take a look (only showing interesting fields)
Step7: They mostly seem to be new jobs
Step8: As anticipated it is a very minor change (hard to see it visually)
Step9: The new ones seem legit to me.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
Step10: So in addition to the added and removed items, there are a few fixes. Let's have a look at them
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '335'
NEW_VERSION = '337'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Pascal, [email protected]
Date: 2018-12-18
ROME update from v335 to v337
In December 2018 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires having the two versions of the ROME in your data/rome/csv folder, which happens only just before we switch to v337. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
rome_data = [VersionedDataset(
basename=path.basename(f),
old=pd.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=pd.read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: So we have the same set of files in both versions: good start.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
Explanation: All files have the same columns as before: still good.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 17 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: They mostly seem to be new jobs: Business Developer, VR-related jobs, global climate jobs.
OK, let's check at the changes in items:
End of explanation
items.new[items.new.code_ogr.isin(new_items)].head()
Explanation: As anticipated it is a very minor change (hard to see it visually): there are no obsolete items but new ones have been created. Let's have a look at them.
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: The new ones seem legit to me.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
Explanation: So in addition to the added and removed items, there are few fixes. Let's have a look at them:
End of explanation |
13,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
In this notebook, you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use DataFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Importing Libraries
Step1: Unzipping files with Amazon Baby Products Reviews
The dataset consists of baby product reviews from Amazon.com.
Step2: Loading the products data
The dataset is loaded into a Pandas DataFrame called products.
Step3: Now, let us see a preview of what the dataset looks like.
Step4: Performing text cleaning
Let us explore a specific example of a baby product.
Step5: Now, we will perform 2 simple data transformations
Step6: Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
Step7: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
Step8: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Below, we create a function that we will apply to the "rating" column of the dataframe to determine if the review is positive or negative.
Step9: Creating a "sentiment" column by applying the sent_func to the "rating" column in the dataframe.
Step10: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set.
Loading the indices for the train and test data and putting them in a list
Step11: Using the indices of the train and test data to create the train and test datasets.
Step12: Build the word count vector for each review
We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows
Step13: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target.
Note
Step14: Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model sentiment_model.
Step15: Putting all the weights from the model into a numpy array.
Step16: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Quiz question
Step17: Making predictions with logistic regression
Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
Step18: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step19: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
Step20: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as
Step21: Predicting sentiment
These scores can be used to make class predictions as follows
Step22: Checkpoint
Step23: Probability predictions
Recall from the lectures that we can also calculate the probability predictions from the scores using
Step24: Checkpoint
Step25: Quiz Question
Step26: To find the 40 most positive and the 40 most negative values, we will create a list of tuples with the entries (probability, index). We will then sort the list and will be able to extract the indices corresponding to each entry.
Step27: Filling the list of tuples with the (probability, index) values
Step28: Sorting the list with the entries (probability, index)
Step29: Extracting the top 40 positive reviews and the top 40 negative reviews
Step30: Getting the indices of the top 40 positive reviews.
Step31: Getting the indices of the top 40 negative reviews.
Step32: Quiz Question
Step33: Quiz Question
Step34: Compute accuracy of the classifier
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
This can be computed as follows
Step35: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step36: Quiz Question
Step37: Quiz Question
Step38: Finding the weights of significant words for the sentiment_model.
In this section, we will find the weights of significant words for the sentiment_model.
Creating a vocab list. The vocab list constains all the words used for the sentiment_model
Step39: Creating a list of the significant words in the utf-8 format
Step40: Creating a list that will store all the indicies where the significant words appear in the vocab list.
Step41: Finding the index where each significant word appears.
Step42: Creating an empty list that will store the weights of the significant words. Then, using the index to find the weight for each signigicant word.
Step43: Creating a series that will store the weights of the significant words and displaying this Series.
Step44: Learn another classifier with fewer words
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subet of words that occur in the reviews. For this assignment, we selected a 20 words to work with. These are
Step45: Compute a new set of word count vectors using only these words. The CountVectorizer class has a parameter that lets you limit the choice of words when building word count vectors
Step46: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
Creating an instance of the LogisticRegression class. Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model simple_model.
Step47: Getting the weights for the 20 significant words from the simple_model
Step48: Putting the weights in a Series with the words corresponding to the weights as the index.
Step49: Quiz Question
Step50: Quiz Question
Step51: Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data
Step52: Now, compute the classification accuracy of the simple_model on the train_data
Step53: Quiz Question
Step54: Now, we will repeat this excercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data
Step55: Next, we will compute the classification accuracy of the simple_model on the test_data
Step56: Quiz Question
Step57: Baseline
Step58: Now compute the accuracy of the majority class classifier on test_data.
Quiz Question
Step59: Quiz Question | Python Code:
import os
import zipfile
import string
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
Explanation: Predicting sentiment from product reviews
In this notebook, you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use DataFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Importing Libraries
End of explanation
# Put files in current direction into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filename of unzipped file
unzipped_file = 'amazon_baby.csv'
# If upzipped file not in files_list, unzip the file
if unzipped_file not in files_list:
zip_file = unzipped_file + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close()
Explanation: Unzipping files with Amazon Baby Products Reviews
The dataset consists of baby product reviews from Amazon.com.
End of explanation
products = pd.read_csv("amazon_baby.csv")
Explanation: Loading the products data
The dataset is loaded into a Pandas DataFrame called products.
End of explanation
products.head()
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
products.ix[1]
Explanation: Performing text cleaning
Let us explore a specific example of a baby product.
End of explanation
products["review"] = products["review"].fillna("")
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Transform the reviews into word-counts.
Aside. In this notebook, we remove all punctuations for the sake of simplicity. A smarter approach to punctuations would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuations.
Before removing the punctuation from the strings in the review column, we will fill all NA values with an empty string.
End of explanation
products["review_clean"] = products["review"].str.translate(None, string.punctuation)
Explanation: Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
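One caveat worth noting: Series.str.translate(None, string.punctuation) relies on Python 2 string semantics. A Python 3 equivalent (an alternative sketch, not what this notebook actually runs) would be:
# products["review_clean"] = products["review"].str.translate(
#     str.maketrans('', '', string.punctuation))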
End of explanation
products = products[products['rating'] != 3]
len(products)
Explanation: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
End of explanation
def sent_func(x):
# If rating is >=4, return a positive sentiment (+1)
if x>=4:
return 1
# Else, return a negative sentiment (-1)
else:
return -1
Explanation: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Below, we create a function that we will apply to the "rating" column of the dataframe to determine if the review is positive or negative.
End of explanation
products['sentiment'] = products['rating'].apply(sent_func)
products.ix[20:22]
Explanation: Creating a "sentiment" column by applying the sent_func to the "rating" column in the dataframe.
End of explanation
with open('module-2-assignment-train-idx.txt', 'r') as train_file:
ind_list_train = map(int,train_file.read().split(','))
with open('module-2-assignment-test-idx.txt', 'r') as test_file:
ind_list_test = map(int,test_file.read().split(','))
Explanation: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set.
Loading the indices for the train and test data and putting them in a list
End of explanation
train_data = products.iloc[ind_list_train,:]
test_data = products.iloc[ind_list_test,:]
print len(train_data)
print len(test_data)
Explanation: Using the indices of the train and test data to create the train and test datasets.
End of explanation
# Use this token pattern to keep single-letter words
vectorizer = CountVectorizer(token_pattern=r'\b\w+\b')
# First, learn vocabulary from the training data and assign columns to words
# Then convert the training data into a sparse matrix
train_matrix = vectorizer.fit_transform(train_data['review_clean'])
# Second, convert the test data into a sparse matrix, using the same word-column mapping
test_matrix = vectorizer.transform(test_data['review_clean'])
Explanation: Build the word count vector for each review
We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows:
Learn a vocabulary (set of all words) from the training data. Only the words that show up in the training data will be considered for feature extraction.
Compute the occurrences of the words in each review and collect them into a row vector.
Build a sparse matrix where each row is the word count vector for the corresponding review. Call this matrix train_matrix.
Using the same mapping between words and columns, convert the test data into a sparse matrix test_matrix.
The following cell uses CountVectorizer in scikit-learn. Notice the token_pattern argument in the constructor.
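As a tiny illustration of what fit_transform produces (my own toy example, separate from the assignment data):
from sklearn.feature_extraction.text import CountVectorizer
toy = CountVectorizer(token_pattern=r'\b\w+\b')   # keeps single-letter words like "a"
m = toy.fit_transform(['a good toy', 'a bad bad toy'])
print toy.get_feature_names()   # ['a', 'bad', 'good', 'toy']
print m.toarray()               # [[1 0 1 1]
                                #  [1 2 0 1]]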
End of explanation
logreg = linear_model.LogisticRegression()
Explanation: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target.
Note: This line may take 1-2 minutes.
Creating an instance of the LogisticRegression class
End of explanation
sentiment_model = logreg.fit(train_matrix, train_data["sentiment"])
Explanation: Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model sentiment_model.
End of explanation
weights_list = list(sentiment_model.intercept_) + list(sentiment_model.coef_.flatten())
weights_sent_model = np.array(weights_list, dtype = np.double)
print len(weights_sent_model)
Explanation: Putting all the weights from the model into a numpy array.
End of explanation
num_positive_weights = len(weights_sent_model[weights_sent_model >= 0.0])
num_negative_weights = len(weights_sent_model[weights_sent_model < 0.0])
print "Number of positive weights: %i" % num_positive_weights
print "Number of negative weights: %i" % num_negative_weights
Explanation: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Quiz question: How many weights are >= 0?
End of explanation
sample_test_data = test_data.ix[[59,71,91]]
print sample_test_data['rating']
sample_test_data
Explanation: Making predictions with logistic regression
Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
End of explanation
sample_test_data['review'].ix[59]
Explanation: Let's dig deeper into the first row of the sample_test_data. Here's the full review:
End of explanation
sample_test_data['review'].ix[71]
Explanation: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
End of explanation
sample_test_matrix = vectorizer.transform(sample_test_data['review_clean'])
scores = sentiment_model.decision_function(sample_test_matrix)
print scores
Explanation: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$
\mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i)
$$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores . For each row, the score (or margin) is a number in the range [-inf, inf].
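As a quick sanity check (this identity holds for scikit-learn's LogisticRegression; sample_test_matrix and sentiment_model are taken from the cell above):
# decision_function is exactly the linear score: X.dot(w) + intercept.
manual_scores = sample_test_matrix.dot(sentiment_model.coef_.flatten()) + sentiment_model.intercept_
print manual_scores   # should match sentiment_model.decision_function(sample_test_matrix)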
End of explanation
pred_sent_test_data = []
for val in scores:
if val>0:
pred_sent_test_data.append(1)
else:
pred_sent_test_data.append(-1)
print pred_sent_test_data
Explanation: Predicting sentiment
These scores can be used to make class predictions as follows:
$$
\hat{y} =
\left\{
\begin{array}{ll}
+1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\
-1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\
\end{array}
\right.
$$
Using scores, write code to calculate $\hat{y}$, the class predictions:
End of explanation
print "Class predictions according to Scikit-Learn:"
print sentiment_model.predict(sample_test_matrix)
Explanation: Checkpoint: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from Scikit-Learn.
End of explanation
prob_pos_score = 1.0/(1.0 + np.exp(-scores))
prob_pos_score
Explanation: Probability predictions
Recall from the lectures that we can also calculate the probability predictions from the scores using:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}.
$$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
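A small practical note: for very negative scores np.exp can overflow and emit warnings. If SciPy is available (an assumption, it is not imported in this notebook), scipy.special.expit is a numerically safer drop-in:
from scipy.special import expit
prob_pos_score_stable = expit(scores)   # same values as 1/(1 + exp(-scores)), without overflow warnings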
End of explanation
print "Class predictions according to Scikit-Learn:"
print sentiment_model.predict_proba(sample_test_matrix)[:,1]
Explanation: Checkpoint: Make sure your probability predictions match the ones obtained from Scikit-Learn.
End of explanation
scores_test_data = sentiment_model.decision_function(test_matrix)
prob_test_data = 1.0/(1.0 + np.exp(-scores_test_data))
Explanation: Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
The 3rd data point has the lowest probability of being positive
Find the most positive (and negative) review
We now turn to examining the full test dataset, test_data.
Using the sentiment_model, find the 40 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-40 reviews, use the following steps:
1. Make probability predictions on test_data using the sentiment_model.
2. Sort the data according to those predictions and pick the top 40.
Computing the scores with the sentiment_model decision function and then calculating the probability that y = +1
End of explanation
# List of indices in the test data
ind_vals_test_data = test_data.index.values
# Empty list that will be filled with the tuples (probability, index)
score_label_lst_test = len(scores_test_data)*[-1]
Explanation: To find the 40 most positive and the 40 most negative values, we will create a list of tuples with the entries (probability, index). We will then sort the list and will be able to extract the indices corresponding to each entry.
End of explanation
for i in range(len(scores_test_data)):
score_label_lst_test[i] = (prob_test_data[i],ind_vals_test_data[i])
Explanation: Filling the list of tuples with the (probability, index) values
End of explanation
score_label_lst_test.sort()
Explanation: Sorting the list with the entries (probability, index)
End of explanation
top_40_pos_test_rev = score_label_lst_test[-40:]
top_40_neg_test_rev = score_label_lst_test[0:40]
Explanation: Extracting the top 40 positive reviews and the top 40 negative reviews
End of explanation
ind_top_40_pos_test = 40*[-1]
for i,val in enumerate(top_40_pos_test_rev):
ind_top_40_pos_test[i] = val[1]
Explanation: Getting the indices of the top 40 positive reviews.
End of explanation
ind_top_40_neg_test = 40*[-1]
for i,val in enumerate(top_40_neg_test_rev):
ind_top_40_neg_test[i] = val[1]
Explanation: Getting the indices of the top 40 negative reviews.
End of explanation
test_data.ix[ind_top_40_pos_test]["name"]
Explanation: Quiz Question: Which of the following products are represented in the 40 most positive reviews? [multiple choice]
End of explanation
test_data.ix[ind_top_40_neg_test]["name"]
Explanation: Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
End of explanation
def get_classification_accuracy(model, data, true_labels):
# Constructing the wordcount vector
data_matrix = vectorizer.transform(data['review_clean'])
# Getting the predictions
preds_data = model.predict(data_matrix)
# Computing the number of correctly classified examples and the total examples
n_correct = float(np.sum(preds_data == true_labels.values))
n_total = float(len(preds_data))
# Computing the accuracy by dividing number of
#correctly classified examples by total number of examples
accuracy = n_correct/n_total
return accuracy
Explanation: Compute accuracy of the classifier
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
This can be computed as follows:
Step 1: Use the trained model to compute class predictions
Step 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).
Step 3: Divide the total number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
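For a cross-check of the hand-written function, scikit-learn ships an equivalent helper (shown only as a sketch; the assignment asks you to implement the computation yourself):
from sklearn.metrics import accuracy_score
sk_accuracy = accuracy_score(test_data['sentiment'], sentiment_model.predict(test_matrix))  # should match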
End of explanation
acc_sent_mod_test = get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
print acc_sent_mod_test
Explanation: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
End of explanation
print "Accuracy on Test Data: %.2f" %(acc_sent_mod_test)
Explanation: Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
End of explanation
acc_sent_mod_train = get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
print acc_sent_mod_train
Explanation: Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
No, you may be overfitting.
Now, computing the accuracy of the sentiment model on the training data for a future quiz question.
End of explanation
vocab = vectorizer.get_feature_names()
print len(vocab)
Explanation: Finding the weights of significant words for the sentiment_model.
In this section, we will find the weights of significant words for the sentiment_model.
Creating a vocab list. The vocab list contains all the words used for the sentiment_model
End of explanation
un_sig_words = [u'love', u'great', u'easy', u'old', u'little', u'perfect', u'loves',
u'well', u'able', u'car', u'broke', u'less', u'even', u'waste', u'disappointed',
u'work', u'product', u'money', u'would', u'return']
Explanation: Creating a list of the significant words in the utf-8 format
End of explanation
ind_vocab_sig_words = []
Explanation: Creating a list that will store all the indices where the significant words appear in the vocab list.
End of explanation
for word in un_sig_words:
ind_vocab_sig_words.append(vocab.index(word))
Explanation: Finding the index where each significant word appears.
End of explanation
ws_sent_mod_sig_words = []
for ind in ind_vocab_sig_words:
ws_sent_mod_sig_words.append(sentiment_model.coef_.flatten()[ind])
Explanation: Creating an empty list that will store the weights of the significant words. Then, using the index to find the weight for each significant word.
End of explanation
ws_sent_mod_ser = pd.Series(data=ws_sent_mod_sig_words, index=un_sig_words)
ws_sent_mod_ser
Explanation: Creating a series that will store the weights of the significant words and displaying this Series.
End of explanation
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
Explanation: Learn another classifier with fewer words
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:
End of explanation
vectorizer_word_subset = CountVectorizer(vocabulary=significant_words) # limit to 20 words
train_matrix_word_subset = vectorizer_word_subset.fit_transform(train_data['review_clean'])
test_matrix_word_subset = vectorizer_word_subset.transform(test_data['review_clean'])
Explanation: Compute a new set of word count vectors using only these words. The CountVectorizer class has a parameter that lets you limit the choice of words when building word count vectors:
End of explanation
log_reg = linear_model.LogisticRegression()
simple_model = log_reg.fit(train_matrix_word_subset, train_data["sentiment"])
Explanation: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
Creating an instance of the LogisticRegression class. Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model simple_model.
End of explanation
ws_simp_model = list(simple_model.coef_.flatten())
Explanation: Getting the weights for the 20 significant words from the simple_model
End of explanation
ws_simp_mod_ser = pd.Series(data=ws_simp_model, index=significant_words)
ws_simp_mod_ser
Explanation: Putting the weights in a Series with the words corresponding to the weights as the index.
End of explanation
print len(simple_model.coef_[simple_model.coef_>0])
Explanation: Quiz Question: Consider the coefficients of simple_model. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
End of explanation
ws_sent_mod_ser
Explanation: Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
Yes, see weights below for the significant words for the sentiment model
End of explanation
acc_sent_mod_train
Explanation: Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
End of explanation
preds_simp_mod_train = simple_model.predict(train_matrix_word_subset)
n_cor_preds_simp_mod_train = float(np.sum(preds_simp_mod_train == train_data['sentiment'].values))
n_tol_preds_simp_mod_train = float(len(preds_simp_mod_train))
acc_simp_mod_train = n_cor_preds_simp_mod_train/n_tol_preds_simp_mod_train
print acc_simp_mod_train
Explanation: Now, compute the classification accuracy of the simple_model on the train_data:
End of explanation
if acc_sent_mod_train>acc_simp_mod_train:
print "sentiment_model"
else:
print "simple_model"
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
End of explanation
acc_sent_mod_test
Explanation: Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
End of explanation
preds_simp_mod_test = simple_model.predict(test_matrix_word_subset)
n_cor_preds_simp_mod_test = float(np.sum(preds_simp_mod_test == test_data['sentiment'].values))
n_tol_preds_simp_mod_test = float(len(preds_simp_mod_test))
acc_simp_mod_test = n_cor_preds_simp_mod_test/n_tol_preds_simp_mod_test
print acc_simp_mod_test
Explanation: Next, we will compute the classification accuracy of the simple_model on the test_data:
End of explanation
if acc_sent_mod_test>acc_simp_mod_test:
print "sentiment_model"
else:
print "simple_model"
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
End of explanation
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
acc_pos_train = float(num_positive)/float(len(train_data['sentiment']))
acc_neg_train = float(num_negative)/float(len(train_data['sentiment']))
if acc_pos_train>acc_neg_train:
print "Positive Sentiment is Majority Classifier for Training Data"
else:
print "Negative Sentiment is Majority Classifier for Training Data"
Explanation: Baseline: Majority class prediction
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
End of explanation
num_pos_test = (test_data['sentiment'] == +1).sum()
acc_pos_test = float(num_pos_test)/float(len(test_data['sentiment']))
print "Accuracy of Majority Class Classifier on Test Data: %.2f" %(acc_pos_test)
Explanation: Now compute the accuracy of the majority class classifier on test_data.
Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
End of explanation
if acc_sent_mod_test>acc_pos_test:
print "Yes, the sentiment_model is better than majority class classifier"
else:
print "No, the majority class classifier is better than sentiment_model"
Explanation: Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?
End of explanation |
13,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: Container (Docker) image
Next, we will set the Docker container images for training.
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available
Step13: Machine Type
Next, set the machine type to use for training.
Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for for training.
machine type
n1-standard
Step14: Tutorial
Now you are ready to start creating your own hyperparameter tuning and training jobs for a custom text binary classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Job Service for hyperparameter tuning.
Step15: Tuning a model - Hello World
There are two ways you can hyperparameter tune and train a custom model using a container image
Step16: Prepare your disk specification
(optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning.
boot_disk_type
Step17: Define the worker pool specification
Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following
Step18: Create a study specification
Let's start with a simple study. You will just use a single parameter -- the learning rate. Since it's just one parameter, it doesn't make much sense to do a random search. Instead, we will do a grid search over a range of values.
metrics
Step19: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification
Step20: Examine the hyperparameter tuning package
Package layout
Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step21: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary
Step22: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step23: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric that you specified as the criterion for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as val_accuracy.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps
Step24: Now get the unique identifier for the hyperparameter tuning job you created.
Step25: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter
Step26: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes time.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
Step27: Review the results of the study
Now review the results of trials.
Step28: Best trial
Now look at which trial was the best
Step29: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at
Step30: Tuning a model - IMDB Movie Reviews
Now that you have seen the overall steps for hyperparameter tuning a custom training job using a Python package that mimics training a model, you will do a new hyperparameter tuning job for a custom training job for a IMDB Movie Reviews model.
For this example, you will change two parts
Step31: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification
Step32: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary
Step33: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step34: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric that you specified as the criterion for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as loss.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps
Step35: Now get the unique identifier for the custom job you created.
Step36: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter
Step37: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes time.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
Step38: Review the results of the study
Now review the results of trials.
Step39: Best trial
Now look at which trial was the best
Step40: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at
Step41: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step42: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fixed input length for your text. For forward feeding batches, the padded_batch() property of the corresponding tf.dataset was set to pad each input sequence into the same shape for a batch.
For the test data, you also need to set the padded_batch() property accordingly.
Step43: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step44: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: Hyperparameter tuning text binary classification model
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_text_binary_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_text_binary_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to do hyperparameter tuning for a custom text binary classification model.
Dataset
The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.
Objective
In this notebook, you learn how to create a hyperparameter tuning job for a custom text binary classification model from a Python script in a docker container using the Vertex client library. You can alternatively hyperparameter tune models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create an Vertex hyperparameter turning job for training a custom model.
Tune the custom model.
Evaluate the study results.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
Explanation: Container (Docker) image
Next, we will set the Docker container images for training.
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available:
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
Explanation: Machine Type
Next, set the machine type to use for training.
Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for for training.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
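# Illustrative arithmetic only -- this alternative configuration is not used below.
# n1-highmem provides 6.5 GB of memory per vCPU, so an 8-vCPU instance would have
# 8 * 6.5 = 52 GB of memory.
ALT_MACHINE_TYPE = "n1-highmem"
ALT_VCPU = "8"
ALT_TRAIN_COMPUTE = ALT_MACHINE_TYPE + "-" + ALT_VCPU  # "n1-highmem-8"
print("Alternative train machine type (not used)", ALT_TRAIN_COMPUTE)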
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own hyperparameter tuning and training jobs for a custom text binary classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Job Service for hyperparameter tuning.
End of explanation
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Explanation: Tuning a model - Hello World
There are two ways you can hyperparameter tune and train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for hyperparameter tuning and training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for hyperparameter tuning and training a custom model.
Prepare your hyperparameter tuning job specification
Now that your clients are ready, your first step is to create a Job Specification for your hyperparameter tuning job. The job specification will consist of the following:
trial_job_spec: The specification for the custom job.
worker_pool_spec : The specification of the type of machine(s) you will use for hyperparameter tuning and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
study_spec: The specification for what to tune.
parameters: This is the specification of the hyperparameters that you will tune for the custom training job. It will contain a list of the
metrics: This is the specification on how to evaluate the result of each tuning trial.
Prepare your machine specification
Now define the machine specification for your custom hyperparameter tuning job. This tells Vertex what type of machine instance to provision for the hyperparameter tuning.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_imdb.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the python package specification:
-executor_image_uri: This is the docker image which is configured for your custom hyperparameter tuning job.
-package_uris: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.
-python_module: The Python module (script) to invoke for running the custom hyperparameter tuning job. In this example, you will be invoking trainer.task -- note that it is not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the hyperparameter tuning script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The hyperparameter tuning distribution strategy to use for single or distributed hyperparameter tuning.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
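# How a training script resolves the model directory under either convention -- a
# minimal sketch that mirrors the pattern used in trainer/task.py later in this
# tutorial; the variable name sketch_model_dir is illustrative only.
import os
# indirect: the service injects the location via the AIP_MODEL_DIR environment variable
sketch_model_dir = os.getenv('AIP_MODEL_DIR')
# direct: the script instead receives "--model-dir=gs://..." from CMDARGS above,
# typically via argparse with default=os.getenv('AIP_MODEL_DIR') as a fallback.
print("Resolved model dir (None outside a training container):", sketch_model_dir)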
study_spec = {
"metrics": [
{
"metric_id": "val_accuracy",
"goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE,
}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
],
"algorithm": aip.StudySpec.Algorithm.GRID_SEARCH,
}
Explanation: Create a study specification
Explanation: Create a study specification
Let's start with a simple study. You will just use a single parameter -- the learning rate. Since it's just one parameter, it doesn't make much sense to do a random search. Instead, we will do a grid search over a range of values.
metrics:
metric_id: In this example, the objective metric to report back is 'val_accuracy'
goal: In this example, the hyperparameter tuning service will evaluate trials to maximize the value of the objective metric.
parameters: The specification for the hyperparameters to tune.
parameter_id: The name of the hyperparameter that will be passed to the Python package as a command line argument.
scale_type: The scale type determines the resolution the hyperparameter tuning service uses when searching over the search space.
UNIT_LINEAR_SCALE: Uses a resolution that is the same everywhere in the search space.
UNIT_LOG_SCALE: Values close to the bottom of the search space are further away.
UNIT_REVERSE_LOG_SCALE: Values close to the top of the search space are further away.
search space: This is where you will specify the search space of values for the hyperparameter to select for tuning.
integer_value_spec: Specifies an integer range of values between a min_value and max_value.
double_value_spec: Specifies a continuous range of values between a min_value and max_value.
discrete_value_spec: Specifies a list of values.
algorithm: The search method for selecting hyperparameter values per trial:
GRID_SEARCH: Combinatorically search -- which is used in this example.
RANDOM_SEARCH: Random search.
End of explanation
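# For reference, the other search-space forms described above would look like this.
# This is a sketch only -- the parameter names "units" and "dropout" are hypothetical
# and this list is not part of the study defined below.
alt_parameters = [
    {
        # integer range search space
        "parameter_id": "units",
        "integer_value_spec": {"min_value": 8, "max_value": 128},
        "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
    },
    {
        # continuous range search space, resolved on a log scale
        "parameter_id": "dropout",
        "double_value_spec": {"min_value": 0.001, "max_value": 0.5},
        "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
    },
]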
hpt_job = {
"display_name": JOB_NAME,
"trial_job_spec": {"worker_pool_specs": worker_pool_spec},
"study_spec": study_spec,
"max_trial_count": 6,
"parallel_trial_count": 1,
}
Explanation: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification:
display_name: The human readable name you assign to this custom hyperparameter tuning job.
trial_job_spec: The specification for the custom hyperparameter tuning job.
study_spec: The specification for what to tune.
max_trial_count: The maximum number of tuning trials.
parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially.
End of explanation
# Make folder for Python hyperparameter tuning script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: IMDB Movie Reviews text binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration hyperparameter tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Examine the hyperparameter tuning package
Package layout
Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# HP Tuning hello world example
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
from hypertune import HyperTune
import argparse
import os
import sys
import time
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--model-dir',
dest='model_dir',
default='/tmp/saved_model',
type=str,
help='Model dir.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Instantiate the HyperTune reporting object
hpt = HyperTune()
for epoch in range(1, args.epochs+1):
# mimic metric result at the end of an epoch
acc = args.lr * epoch
# save the metric result to communicate back to the HPT service
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_accuracy',
metric_value=acc,
global_step=epoch)
print('epoch: {}, accuracy: {}'.format(epoch, acc))
time.sleep(1)
Explanation: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary:
Passes the hyperparameter values for a trial as a command line argument (parser.add_argument('--lr',...))
Mimics a training loop, where on each loop (epoch) the variable accuracy is set to the loop iteration * the learning rate.
Reports the objective metric accuracy back to the hyperparameter tuning service using report_hyperparameter_tuning_metric().
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
Explanation: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
def create_hyperparameter_tuning_job(hpt_job):
response = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hpt_job
)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_hyperparameter_tuning_job(hpt_job)
Explanation: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric that you specified as the criterion for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as val_accuracy.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps:
Import the HyperTune module: from hypertune import HyperTune().
At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are:
hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification.
metric_value: The value of the objective metric to report back to the hyperparameter service.
global_step: The epoch iteration, starting at 0.
Hyperparameter Tune the model
Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter:
-hpt_job: The specification for the hyperparameter tuning job.
The helper function calls job client service's create_hyperparameter_tuning_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-hyperparameter_tuning_job: The specification for the hyperparameter tuning job.
You will display a handful of the fields returned in the response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps.
response.state: The current state of the custom hyperparameter tuning job.
End of explanation
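# The reporting pattern in isolation -- a sketch with made-up metric values, mirroring
# the calls in trainer/task.py above. It assumes the cloudml-hypertune package, which
# is preinstalled in the training containers (install it locally to run this cell).
from hypertune import HyperTune
sketch_hpt = HyperTune()                              # 1. create the reporting object
for sketch_epoch, sketch_val_acc in enumerate([0.71, 0.78, 0.81]):
    sketch_hpt.report_hyperparameter_tuning_metric(   # 2. report once per epoch
        hyperparameter_metric_tag='val_accuracy',     #    must match the study_spec metric_id
        metric_value=sketch_val_acc,
        global_step=sketch_epoch)                     # 3. epoch iteration, starting at 0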
# The full unique ID for the hyperparameter tuning job
hpt_job_id = response.name
# The short numeric ID for the hyperparameter tuning job
hpt_job_short_id = hpt_job_id.split("/")[-1]
print(hpt_job_id)
Explanation: Now get the unique identifier for the hyperparameter tuning job you created.
End of explanation
def get_hyperparameter_tuning_job(name, silent=False):
response = clients["job"].get_hyperparameter_tuning_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_hyperparameter_tuning_job(hpt_job_id)
Explanation: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id.
End of explanation
while True:
job_response = get_hyperparameter_tuning_job(hpt_job_id, True)
if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Study trials have not completed:", job_response.state)
if job_response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
print("Study trials have completed")
break
time.sleep(60)
Explanation: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes time.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
End of explanation
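# Elapsed tuning time -- a sketch; start_time and end_time are fields on the returned
# job resource and are only populated once the job has started and finished.
if job_response.start_time and job_response.end_time:
    print("Tuning took:", job_response.end_time - job_response.start_time)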
best = (None, None, None, 0.0)
for trial in job_response.trials:
print(trial)
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
Explanation: Review the results of the study
Now review the results of trials.
End of explanation
print("ID", best[0])
print("Learning Rate", best[1])
print("Decay", best[2])
print("Validation Accuracy", best[3])
Explanation: Best trial
Now look at which trial was the best:
End of explanation
BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model"
Explanation: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at:
MODEL_DIR/<best_trial_id>/model
End of explanation
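# Optional check (a sketch): list the artifacts the best trial wrote to Cloud Storage.
# This applies to the DIRECT = False convention described above; with DIRECT = True the
# trials all wrote to MODEL_DIR instead, so this path may not exist.
! gsutil ls $BEST_MODEL_DIR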
study_spec = {
"metrics": [
{"metric_id": "loss", "goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
{
"parameter_id": "decay",
"double_value_spec": {"min_value": 1e-6, "max_value": 1e-2},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
],
"algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH,
}
Explanation: Tuning a model - IMDB Movie Reviews
Now that you have seen the overall steps for hyperparameter tuning a custom training job using a Python package that mimics training a model, you will do a new hyperparameter tuning job for a custom training job for a IMDB Movie Reviews model.
For this example, you will change two parts:
Specify the IMDB Movie Reviews custom hyperparameter tuning Python package.
Specify a study specification specific to the hyperparameters used in the IMDB Movie Reviews custom hyperparameter tuning Python package.
Create a study specification
In this study, you will tune for two hyperparameters using the random search algorithm:
learning rate: The search space is a set of discrete values.
learning rate decay: The search space is a continuous range between 1e-6 and 1e-2.
The objective (goal) is to minimize the training loss that the trial script reports back as loss.
You will run a maximum of six trials.
End of explanation
hpt_job = {
"display_name": JOB_NAME,
"trial_job_spec": {"worker_pool_specs": worker_pool_spec},
"study_spec": study_spec,
"max_trial_count": 6,
"parallel_trial_count": 1,
}
Explanation: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification:
display_name: The human readable name you assign to this custom hyperparameter tuning job.
trial_job_spec: The specification for the custom hyperparameter tuning job.
study_spec: The specification for what to tune.
max_trial_count: The maximum number of tuning trials.
parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially.
End of explanation
%%writefile custom/trainer/task.py
# Custom Training for IMDB Movie Reviews
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
from hypertune import HyperTune
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--decay', dest='decay',
default=0.98, type=float,
help='Decay rate')
parser.add_argument('--units', dest='units',
default=64, type=int,
help='Number of units.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Preparing dataset
BUFFER_SIZE = 1000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(args.units)),
tf.keras.layers.Dense(args.units, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=args.lr, decay=args.decay),
metrics=['accuracy'])
return model
model = build_and_compile_rnn_model(encoder)
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='loss',
metric_value=logs['loss'],
global_step=epoch)
model.fit(train_dataset, epochs=args.epochs, callbacks=[HPTCallback()])
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary:
Parse the command line arguments for the hyperparameter settings for the current trial.
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads IMDB Movie Review dataset from TF Datasets (tfds).
Builds a simple RNN model using TF.Keras model API.
The learning rate and decay hyperparameter values are used when compiling the model, and the number of units is used when building the dense and LSTM layers.
Compiles the model (compile()).
A definition of a callback HPTCallback which obtains the training loss at the end of each epoch (on_epoch_end()) and reports it to the hyperparameter tuning service using hpt.report_hyperparameter_tuning_metric().
Train the model with the fit() method and specify a callback which will report the training loss back to the hyperparameter tuning service.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
Explanation: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
def create_hyperparameter_tuning_job(hpt_job):
response = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hpt_job
)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_hyperparameter_tuning_job(hpt_job)
Explanation: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric that you specified as the criterion for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as loss.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps:
Import the HyperTune module: from hypertune import HyperTune().
At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are:
hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification.
metric_value: The value of the objective metric to report back to the hyperparameter service.
global_step: The epoch iteration, starting at 0.
Hyperparameter Tune the model
Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter:
-hpt_job: The specification for the hyperparameter tuning job.
The helper function calls job client service's create_hyperparameter_tuning_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-hyperparameter_tuning_job: The specification for the hyperparameter tuning job.
You will display a handful of the fields returned in the response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps.
response.state: The current state of the custom hyperparameter tuning job.
End of explanation
# The full unique ID for the custom job
hpt_job_id = response.name
# The short numeric ID for the custom job
hpt_job_short_id = hpt_job_id.split("/")[-1]
print(hpt_job_id)
Explanation: Now get the unique identifier for the custom job you created.
End of explanation
def get_hyperparameter_tuning_job(name, silent=False):
response = clients["job"].get_hyperparameter_tuning_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_hyperparameter_tuning_job(hpt_job_id)
Explanation: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id.
End of explanation
while True:
job_response = get_hyperparameter_tuning_job(hpt_job_id, True)
if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Study trials have not completed:", job_response.state)
if job_response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
print("Study trials have completed")
break
time.sleep(60)
Explanation: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes time.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, we will need to know the location of the saved models for each trial, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
End of explanation
best = (None, None, None, float("inf"))
for trial in job_response.trials:
    print(trial)
    # Keep track of the best outcome (lowest loss)
    if float(trial.final_measurement.metrics[0].value) < best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
Explanation: Review the results of the study
Now review the results of trials.
End of explanation
print("ID", best[0])
print("Learning Rate", best[1])
print("Decay", best[2])
print("Validation Accuracy", best[3])
Explanation: Best trial
Now look at which trial was the best:
End of explanation
BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model"
Explanation: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at:
MODEL_DIR/<best_trial_id>/model
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fixed input length for your text. For forward feeding batches, the padded_batch() property of the corresponding tf.dataset was set to pad each input sequence into the same shape for a batch.
For the test data, you also need to set the padded_batch() property accordingly.
End of explanation
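# Quick shape check (illustrative): each batch is padded to a common sequence length.
for example_batch, label_batch in test_dataset.take(1):
    print(example_batch.shape, label_batch.shape)  # e.g. (64, longest_sequence_in_batch) and (64,)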
model.evaluate(test_dataset)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
13,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Explanation
Observations $s$ are scalar values between -100 and 100. Actions $a$ are scalar values between -1 and 1.
The plot above shows $(s, a)$ pairs, with their corresponding targets $y$ as color (dark is low, light is high). The high target regions follow the curve $f$, which gives the mode (argmax action) as a function of $s$.
The goal is to recover $f$ from the given data points.
Step2: What loss functions best recover the curve $f$ from our dataset?
Step3: Test recovery of $f$. | Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow.compat.v2 as tf
import matplotlib.pyplot as plt
tf.enable_v2_behavior()
# Mode as a function of observation
def f(s):
return np.sin(s*2*np.pi/100.)/2.
N = 100
s = np.random.uniform(-100, 100, size=N) # observations between -100 and 100
a = np.random.uniform(-1, 1, size=N) # Actions between -1 and 1
P = 0.2
y = -100*np.abs(a - f(s))**P
y /= np.max(np.abs(y))
print(np.max(y))
print(np.min(y))
plt.scatter(s, a, c=y)
plt.plot(np.sort(s), f(np.sort(s)))
plt.plot()
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
data = (s[:, np.newaxis], a[:, np.newaxis], y[:, np.newaxis])
s_features = tf.constant(np.linspace(-100, 100, 50)[np.newaxis, :], dtype=tf.float32)
hidden_widths = [1000, 500]
model = tf.keras.Sequential(
[tf.keras.layers.Lambda(lambda x: tf.exp(-(x - s_features)**2/2000))]
+ [tf.keras.layers.Dense(w, activation='relu') for w in hidden_widths]
+ [tf.keras.layers.Dense(1, activation=None)]
)
Explanation: Explanation
Observations $s$ are scalar values between -100 and 100. Actions $a$ are scalar values between -1 and 1.
The plot above shows $(s, a)$ pairs, with their corresponding targets $y$ as color (dark is low, light is high). The high target regions follow the curve $f$, which gives the mode (argmax action) as a function of $s$.
The goal is to recover $f$ from the given data points.
End of explanation
# loss A
# ||h(s) - a|^p - R|^q
# This is danabo's mode regression loss
p = 0.1
q = 1/P
# p = q = 2.0
def loss(model, s, a, y):
reg = tf.linalg.global_norm(model.trainable_variables)
return tf.reduce_mean(tf.abs(-tf.abs(model(s)-a)**p - y)**q) + 0.003*reg
# loss B
# |h(s) - a|^p * exp(R/tau)
# This is one of Dale's surrogate losses, specifically the dot-product loss.
p = 1.0
tau = 1/10.
def loss(model, s, a, y):
reg = tf.linalg.global_norm(model.trainable_variables)
target = tf.cast(tf.exp(y/tau), tf.float32)
return tf.reduce_mean(tf.abs(model(s)-a)**p * target) + 0.0005*reg
np.var(s)
# Initialize model
device_string = '/device:GPU:0'
# device_string = '/device:TPU:0'
# device_string = '' # CPU
with tf.device(device_string):
model(data[0])
    print(loss(model, *data).numpy())  # evaluate the initial loss
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
def sample_batch(batch_size, *args):
assert args
idx = np.random.choice(args[0].shape[0], batch_size)
return tuple([arg[idx] for arg in args])
for i in range(10000):
# batch = sample_batch(100, *data)
batch = data
optimizer.minimize(lambda: loss(model, *batch), model.trainable_variables)
if i % 100 == 0:
print(i, '\t', loss(model, *data).numpy())
Explanation: What loss functions best recover the curve $f$ from our dataset?
End of explanation
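For reference, the two candidate losses implemented in the cell above can be written as follows (using $h_\theta$ for the network, with the small global-norm weight regularizer that appears in the code):
$$ \mathcal{L}_A = \frac{1}{N}\sum_{i} \Big| -\big|h_\theta(s_i) - a_i\big|^{p} - y_i \Big|^{q} + 0.003\,\lVert\theta\rVert $$
$$ \mathcal{L}_B = \frac{1}{N}\sum_{i} \big|h_\theta(s_i) - a_i\big|^{p}\, e^{y_i/\tau} + 0.0005\,\lVert\theta\rVert $$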
X = np.linspace(-100, 100, 200)[:, np.newaxis]
Y = model(X).numpy()
plt.plot(X, Y)
plt.show()
Explanation: Test recovery of $f$.
End of explanation |
13,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dictionaries
Dictionaries allow us to store connected bits of information. For example, you might store a person's name and age together.
Previous
Step1: Since the keys and values in dictionaries can be long, we often write just one key-value pair on a line. You might see dictionaries that look more like this
Step2: This is a bit easier to read, especially if the values are long.
Example
A simple example involves modeling an actual dictionary.
Step3: We can get individual items out of the dictionary, by giving the dictionary's name, and the key in square brackets
Step4: This code looks pretty repetitive, and it is. Dictionaries have their own for-loop syntax, but since there are two kinds of information in dictionaries, the structure is a bit more complicated than it is for lists. Here is how to use a for loop with a dictionary
Step5: The output is identical, but we did it in 3 lines instead of 6. If we had 100 terms in our dictionary, we would still be able to print them out with just 3 lines.
The only tricky part about using for loops with dictionaries is figuring out what to call those first two variables. The general syntax for this for loop is
Step6: <a id="Exercises-what"></a>
Exercises
Pet Names
Create a dictionary to hold information about pets. Each key is an animal's name, and each value is the kind of animal.
For example, 'ziggy': 'canary'
Step7: top
Modifying values in a dictionary
At some point you may want to modify one of the values in your dictionary. Modifying a value in a dictionary is pretty similar to modifying an element in a list. You give the name of the dictionary and then the key in square brackets, and set that equal to the new value.
Step8: top
Removing key-value pairs
You may want to remove some key-value pairs from one of your dictionaries at some point. You can do this using the same del command you learned to use with lists. To remove a key-value pair, you give the del command, followed by the name of the dictionary, with the key that you want to delete. This removes the key and the value as a pair.
Step9: If you were going to work with this code, you would certainly want to put the code for displaying the dictionary into a function. Let's see what this looks like
Step10: As long as we have a nice clean function to work with, let's clean up our output a little
Step11: This is much more realistic code.
Modifying keys in a dictionary
Modifying a value in a dictionary was straightforward, because nothing else depends on the value. Modifying a key is a little harder, because each key is used to unlock a value. We can change a key in two steps
Step12: top
<a id="Exercises-operations"></a>
Exercises
Pet Names 2
Make a copy of your program from Pet Names.
Use a for loop to print out a series of statements such as "Willie is a dog."
Modify one of the values in your dictionary. You could clarify one of the values by naming a breed, or you could change an animal from a cat to a dog.
Use a for loop to print out a series of statements such as "Willie is a dog."
Add a new key-value pair to your dictionary.
Use a for loop to print out a series of statements such as "Willie is a dog."
Remove one of the key-value pairs from your dictionary.
Use a for loop to print out a series of statements such as "Willie is a dog."
Bonus
Step13: This works because the method .items() pulls all key-value pairs from a dictionary into a list of tuples
Step14: The syntax for key, value in my_dict.items()
Step15: top
Looping through all keys in a dictionary
Python provides a clear syntax for looping through just the keys in a dictionary
Step16: This is actually the default behavior of looping through the dictionary itself. So you can leave out the .keys() part, and get the exact same behavior
Step17: The only advantage of using the .keys() in the code is a little bit of clarity. But anyone who knows Python reasonably well is going to recognize what the second version does. In the rest of our code, we will leave out the .keys() when we want this behavior.
You can pull out the value of any key that you are interested in within your loop, using the standard notation for accessing a dictionary value from a key
Step18: Let's show how we might use this in our Python words program. This kind of loop provides a straightforward way to show only the words in the dictionary
Step19: We can extend this slightly to make a program that lets you look up words. We first let the user choose a word. When the user has chosen a word, we get the meaning for that word, and display it
Step20: This allows the user to select one word that has been defined. If we enclose the input part of the program in a while loop, the user can see as many definitions as they'd like
Step21: This allows the user to ask for as many meanings as they want, but it takes the word "quit" as a requested word. Let's add an elif clause to clean up this behavior
Step22: top
Looping through all values in a dictionary
Python provides a straightforward syntax for looping through all the values in a dictionary, as well
Step23: We can use this loop syntax to have a little fun with the dictionary example, by making a little quiz program. The program will display a meaning, and ask the user to guess the word that matches that meaning. Let's start out by showing all the meanings in the dictionary
Step24: Now we can add a prompt after each meaning, asking the user to guess the word
Step25: This is starting to work, but we can see from the output that the user does not get the chance to take a second guess if they guess wrong for any meaning. We can use a while loop around the guessing code, to let the user guess until they get it right
Step26: This is better. Now, if the guess is incorrect, the user is caught in a loop that they can only exit by guessing correctly. The final revision to this code is to show the user a list of words to choose from when they are asked to guess
Step27: top
Looping through a dictionary in order
Dictionaries are quite useful because they allow bits of information to be connected. One of the problems with dictionaries, however, is that they are not stored in any particular order. When you retrieve all of the keys or values in your dictionary, you can't be sure what order you will get them back. There is a quick and easy way to do this, however, when you want them in a particular order.
Let's take a look at the order that results from a simple call to dictionary.keys()
Step28: The resulting list is not in order. The list of keys can be put in order by passing the list into the sorted() function, in the line that initiates the for loop
Step29: This approach can be used to work with the keys and values in order. For example, the words and meanings can be printed in alphabetical order by word
Step30: In this example, the keys have been put into alphabetical order in the for loop only; Python has not changed the way the dictionary is stored at all. So the next time the dictionary is accessed, the keys could be returned in any order. There is no way to permanently specify an order for the items in an ordinary dictionary, but if you want to do this you can use the OrderedDict structure.
top
<a id="Exercises-looping"></a>
Exercises
Mountain Heights
Wikipedia has a list of the tallest mountains in the world, with each mountain's elevation. Pick five mountains from this list.
Create a dictionary with the mountain names as keys, and the elevations as values.
Print out just the mountains' names, by looping through the keys of your dictionary.
Print out just the mountains' elevations, by looping through the values of your dictionary.
Print out a series of statements telling how tall each mountain is
Step31: We are really just working our way through each key in the dictionary, so let's use a for loop to go through the keys in the dictionary
Step32: This structure is fairly complex, so don't worry if it takes a while for things to sink in. The dictionary itself probably makes sense; each person is connected to a list of their favorite numbers.
This works, but we'd rather not print raw Python in our output. Let's use a for loop to print the favorite numbers individually, rather than in a Python list.
Step33: Things get a little more complicated inside the for loop. The value is a list of favorite numbers, so the for loop pulls each favorite_number out of the list one at a time. If it makes more sense to you, you are free to store the list in a new variable, and use that to define your for loop
Step34: top
Dictionaries in a dictionary
The most powerful nesting concept we will cover right now is nesting a dictionary inside of a dictionary.
To demonstrate this, let's make a dictionary of pets, with some information about each pet. The keys for this dictionary will consist of the pet's name. The values will include information such as the kind of animal, the owner, and whether the pet has been vaccinated.
Step35: Clearly this is some repetitive code, but it shows exactly how we access information in a nested dictionary. In the first set of print statements, we use the name 'willie' to unlock the 'kind' of animal he is, the 'owner' he has, and whether or not he is 'vaccinated'. We have to wrap the vaccination value in the str function so that Python knows we want the words 'True' and 'False', not the values True and False. We then do the same thing for each animal.
Let's rewrite this program, using a for loop to go through the dictionary's keys
Step36: This code is much shorter and easier to maintain. But even this code will not keep up with our dictionary. If we add more information to the dictionary later, we will have to update our print statements. Let's put a second for loop inside the first loop in order to run through all the information about each pet
Step37: This nested loop can look pretty complicated, so again, don't worry if it doesn't make sense for a while.
The first loop gives us all the keys in the main dictionary, which consist of the name of each pet.
Each of these names can be used to 'unlock' the dictionary of each pet.
The inner loop goes through the dictionary for that individual pet, and pulls out all of the keys in that individual pet's dictionary.
We print the key, which tells us the kind of information we are about to see, and the value for that key.
You can see that we could improve the formatting in the output.
We could capitalize the owner's name.
We could print 'yes' or 'no', instead of True and False.
Let's show one last version that uses some if statements to clean up our data for printing | Python Code:
dictionary_name = {key_1: value_1, key_2: value_2, key_3: value_3}
Explanation: Dictionaries
Dictionaries allow us to store connected bits of information. For example, you might store a person's name and age together.
Previous: Basic Terminal Apps |
Home |
Next: More Functions
Contents
What are dictionaries?
General Syntax
Example
Exercises
Common operations with dictionaries
Adding new key-value pairs
Modifying values in a dictionary
Removing key-value pairs
Modifying keys in a dictionary
Exercises
Looping through a dictionary
Looping through all key-value pairs
Looping through all keys in a dictionary
Looping through all values in a dictionary
Looping through a dictionary in order
Exercises
Nesting
Lists in a dictionary
Dictionaries in a dictionary
An important note about nesting
Exercises
Overall Challenges
top
What are dictionaries?
Dictionaries are a way to store information that is connected in some way. Dictionaries store information in key-value pairs, so that any one piece of information in a dictionary is connected to at least one other piece of information.
Dictionaries do not store their information in any particular order, so you may not get your information back in the same order you entered it.
General Syntax
A general dictionary in Python looks something like this:
End of explanation
dictionary_name = {key_1: value_1,
key_2: value_2,
key_3: value_3,
}
Explanation: Since the keys and values in dictionaries can be long, we often write just one key-value pair on a line. You might see dictionaries that look more like this:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
Explanation: This is a bit easier to read, especially if the values are long.
Example
A simple example involves modeling an actual dictionary.
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
print("\nWord: %s" % 'list')
print("Meaning: %s" % python_words['list'])
print("\nWord: %s" % 'dictionary')
print("Meaning: %s" % python_words['dictionary'])
print("\nWord: %s" % 'function')
print("Meaning: %s" % python_words['function'])
Explanation: We can get individual items out of the dictionary, by giving the dictionary's name, and the key in square brackets:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Print out the items in the dictionary.
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
Explanation: This code looks pretty repetitive, and it is. Dictionaries have their own for-loop syntax, but since there are two kinds of information in dictionaries, the structure is a bit more complicated than it is for lists. Here is how to use a for loop with a dictionary:
End of explanation
for key_name, value_name in dictionary_name.items():
print(key_name) # The key is stored in whatever you called the first variable.
print(value_name) # The value associated with that key is stored in your second variable.
Explanation: The output is identical, but we did it in 3 lines instead of 6. If we had 100 terms in our dictionary, we would still be able to print them out with just 3 lines.
The only tricky part about using for loops with dictionaries is figuring out what to call those first two variables. The general syntax for this for loop is:
End of explanation
# Create an empty dictionary.
python_words = {}
# Fill the dictionary, pair by pair.
python_words['list'] ='A collection of values that are not connected, but have an order.'
python_words['dictionary'] = 'A collection of key-value pairs.'
python_words['function'] = 'A named set of instructions that defines a set of actions in Python.'
# Print out the items in the dictionary.
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
Explanation: <a id="Exercises-what"></a>
Exercises
Pet Names
Create a dictionary to hold information about pets. Each key is an animal's name, and each value is the kind of animal.
For example, 'ziggy': 'canary'
Put at least 3 key-value pairs in your dictionary.
Use a for loop to print out a series of statements such as "Willie is a dog."
Polling Friends
Think of a question you could ask your friends. Create a dictionary where each key is a person's name, and each value is that person's response to your question.
Store at least three responses in your dictionary.
Use a for loop to print out a series of statements listing each person's name, and their response.
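Here is one possible sketch of a solution to the Pet Names exercise; the pets listed below are made up.
# One possible sketch for the Pet Names exercise; the pets below are made up.
pets = {'willie': 'dog',
        'ziggy': 'canary',
        'peso': 'dog',
        }

for pet_name, kind in pets.items():
    print("%s is a %s." % (pet_name.title(), kind))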
top
Common operations with dictionaries
There are a few common things you will want to do with dictionaries. These include adding new key-value pairs, modifying information in the dictionary, and removing items from dictionaries.
Adding new key-value pairs
To add a new key-value pair, you give the dictionary name followed by the new key in square brackets, and set that equal to the new value. We will show this by starting with an empty dictionary, and re-creating the dictionary from the example above.
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
print('dictionary: ' + python_words['dictionary'])
# Clarify one of the meanings.
python_words['dictionary'] = 'A collection of key-value pairs. Each key can be used to access its corresponding value.'
print('\ndictionary: ' + python_words['dictionary'])
Explanation: top
Modifying values in a dictionary
At some point you may want to modify one of the values in your dictionary. Modifying a value in a dictionary is pretty similar to modifying an element in a list. You give the name of the dictionary and then the key in square brackets, and set that equal to the new value.
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Show the current set of words and meanings.
print("\n\nThese are the Python words I know:")
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
# Remove the word 'list' and its meaning.
del python_words['list']
# Show the current set of words and meanings.
print("\n\nThese are the Python words I know:")
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
Explanation: top
Removing key-value pairs
You may want to remove some key-value pairs from one of your dictionaries at some point. You can do this using the same del command you learned to use with lists. To remove a key-value pair, you give the del command, followed by the name of the dictionary, with the key that you want to delete. This removes the key and the value as a pair.
End of explanation
###highlight=[2,3,4,5,6,7,8,16,21]
def show_words_meanings(python_words):
# This function takes in a dictionary of python words and meanings,
# and prints out each word with its meaning.
print("\n\nThese are the Python words I know:")
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
show_words_meanings(python_words)
# Remove the word 'list' and its meaning.
del python_words['list']
show_words_meanings(python_words)
Explanation: If you were going to work with this code, you would certainly want to put the code for displaying the dictionary into a function. Let's see what this looks like:
End of explanation
###highlight=[7]
def show_words_meanings(python_words):
# This function takes in a dictionary of python words and meanings,
# and prints out each word with its meaning.
print("\n\nThese are the Python words I know:")
for word, meaning in python_words.items():
print("\n%s: %s" % (word, meaning))
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
show_words_meanings(python_words)
# Remove the word 'list' and its meaning.
del python_words['list']
show_words_meanings(python_words)
Explanation: As long as we have a nice clean function to work with, let's clean up our output a little:
End of explanation
# We have a spelling mistake!
python_words = {'lisst': 'A collection of values that are not connected, but have an order.'}
# Create a new, correct key, and connect it to the old value.
# Then delete the old key.
python_words['list'] = python_words['lisst']
del python_words['lisst']
# Print the dictionary, to show that the key has changed.
print(python_words)
Explanation: This is much more realistic code.
Modifying keys in a dictionary
Modifying a value in a dictionary was straightforward, because nothing else depends on the value. Modifying a key is a little harder, because each key is used to unlock a value. We can change a key in two steps:
Make a new key, and copy the value to the new key.
Delete the old key, which also deletes the old value.
Here's what this looks like. We will use a dictionary with just one key-value pair, to keep things simple.
End of explanation
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
for key, value in my_dict.items():
print('\nKey: %s' % key)
print('Value: %s' % value)
Explanation: top
<a id="Exercises-operations"></a>
Exercises
Pet Names 2
Make a copy of your program from Pet Names.
Use a for loop to print out a series of statements such as "Willie is a dog."
Modify one of the values in your dictionary. You could clarify one of the values by naming a breed, or you could change an animal from a cat to a dog.
Use a for loop to print out a series of statements such as "Willie is a dog."
Add a new key-value pair to your dictionary.
Use a for loop to print out a series of statements such as "Willie is a dog."
Remove one of the key-value pairs from your dictionary.
Use a for loop to print out a series of statements such as "Willie is a dog."
Bonus: Use a function to do all of the looping and printing in this problem.
Weight Lifting
Make a dictionary where the keys are the names of weight lifting exercises, and the values are the number of times you did that exercise.
Use a for loop to print out a series of statements such as "I did 10 bench presses".
Modify one of the values in your dictionary, to represent doing more of that exercise.
Use a for loop to print out a series of statements such as "I did 10 bench presses".
Add a new key-value pair to your dictionary.
Use a for loop to print out a series of statements such as "I did 10 bench presses".
Remove one of the key-value pairs from your dictionary.
Use a for loop to print out a series of statements such as "I did 10 bench presses".
Bonus: Use a function to do all of the looping and printing in this problem.
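Here is a rough sketch of the Weight Lifting exercise; the exercise names and counts below are made up.
# Rough sketch for the Weight Lifting exercise; names and counts are made up.
def show_exercises(exercises):
    # Print one statement for each exercise.
    for name, reps in exercises.items():
        print("I did %d %s." % (reps, name))

exercises = {'bench presses': 10, 'squats': 12, 'rows': 8}
show_exercises(exercises)

# Modify a value, add a new pair, and remove a pair.
exercises['squats'] = 15
exercises['deadlifts'] = 5
del exercises['rows']
show_exercises(exercises)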
top
Looping through a dictionary
Since dictionaries are really about connecting bits of information, you will often use them in the ways described above, where you add key-value pairs whenever you receive some new information, and then you retrieve the key-value pairs that you care about. Sometimes, however, you will want to loop through the entire dictionary. There are several ways to do this:
You can loop through all key-value pairs;
You can loop through the keys, and pull out the values for any keys that you care about;
You can loop through the values.
Looping through all key-value pairs
This is the kind of loop that was shown in the first example. Here's what this loop looks like, in a general format:
End of explanation
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
print(my_dict.items())
Explanation: This works because the method .items() pulls all key-value pairs from a dictionary into a list of tuples:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
for word, meaning in python_words.items():
print("\nWord: %s" % word)
print("Meaning: %s" % meaning)
Explanation: The syntax for key, value in my_dict.items(): does the work of looping through this list of tuples, and pulling the first and second item from each tuple for us.
There is nothing special about any of these variable names, so Python code that uses this syntax becomes really readable. Rather than create a new example of this loop, let's just look at the original example again to see this in a meaningful context:
End of explanation
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
for key in my_dict.keys():
print('Key: %s' % key)
Explanation: top
Looping through all keys in a dictionary
Python provides a clear syntax for looping through just the keys in a dictionary:
End of explanation
###highlight=[7]
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
for key in my_dict:
print('Key: %s' % key)
Explanation: This is actually the default behavior of looping through the dictionary itself. So you can leave out the .keys() part, and get the exact same behavior:
End of explanation
###highlight=[9,10]
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
for key in my_dict:
print('Key: %s' % key)
if key == 'key_2':
print(" The value for key_2 is %s." % my_dict[key])
Explanation: The only advantage of using the .keys() in the code is a little bit of clarity. But anyone who knows Python reasonably well is going to recognize what the second version does. In the rest of our code, we will leave out the .keys() when we want this behavior.
You can pull out the value of any key that you are interested in within your loop, using the standard notation for accessing a dictionary value from a key:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Show the words that are currently in the dictionary.
print("The following Python words have been defined:")
for word in python_words:
print("- %s" % word)
Explanation: Let's show how we might use this in our Python words program. This kind of loop provides a straightforward way to show only the words in the dictionary:
End of explanation
###highlight=[12,13,14]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Show the words that are currently in the dictionary.
print("The following Python words have been defined:")
for word in python_words:
print("- %s" % word)
# Allow the user to choose a word, and then display the meaning for that word.
requested_word = raw_input("\nWhat word would you like to learn about? ")
print("\n%s: %s" % (requested_word, python_words[requested_word]))
Explanation: We can extend this slightly to make a program that lets you look up words. We first let the user choose a word. When the user has chosen a word, we get the meaning for that word, and display it:
End of explanation
###highlight=[12,13,14,15,16,17,18,19,20]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Show the words that are currently in the dictionary.
print("The following Python words have been defined:")
for word in python_words:
print("- %s" % word)
requested_word = ''
while requested_word != 'quit':
# Allow the user to choose a word, and then display the meaning for that word.
requested_word = raw_input("\nWhat word would you like to learn about? (or 'quit') ")
if requested_word in python_words.keys():
print("\n %s: %s" % (requested_word, python_words[requested_word]))
else:
# Handle misspellings, and words not yet stored.
print("\n Sorry, I don't know that word.")
Explanation: This allows the user to select one word that has been defined. If we enclose the input part of the program in a while loop, the user can see as many definitions as they'd like:
End of explanation
###highlight=[16,17,18,19,20,21,22,23,24]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Show the words that are currently in the dictionary.
print("The following Python words have been defined:")
for word in python_words:
print("- %s" % word)
requested_word = ''
while requested_word != 'quit':
# Allow the user to choose a word, and then display the meaning for that word.
requested_word = raw_input("\nWhat word would you like to learn about? (or 'quit') ")
if requested_word in python_words.keys():
# This is a word we know, so show the meaning.
print("\n %s: %s" % (requested_word, python_words[requested_word]))
elif requested_word != 'quit':
# This is not in python_words, and it's not 'quit'.
print("\n Sorry, I don't know that word.")
else:
# The word is quit.
        print("\n Bye!")
Explanation: This allows the user to ask for as many meanings as they want, but it takes the word "quit" as a requested word. Let's add an elif clause to clean up this behavior:
End of explanation
my_dict = {'key_1': 'value_1',
'key_2': 'value_2',
'key_3': 'value_3',
}
for value in my_dict.values():
print('Value: %s' % value)
Explanation: top
Looping through all values in a dictionary
Python provides a straightforward syntax for looping through all the values in a dictionary, as well:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
for meaning in python_words.values():
print("Meaning: %s" % meaning)
Explanation: We can use this loop syntax to have a little fun with the dictionary example, by making a little quiz program. The program will display a meaning, and ask the user to guess the word that matches that meaning. Let's start out by showing all the meanings in the dictionary:
End of explanation
###highlight=[12,13,14,15,16,17,18]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Print each meaning, one at a time, and ask the user
# what word they think it is.
for meaning in python_words.values():
print("\nMeaning: %s" % meaning)
guessed_word = raw_input("What word do you think this is? ")
# The guess is correct if the guessed word's meaning matches the current meaning.
if python_words[guessed_word] == meaning:
print("You got it!")
else:
print("Sorry, that's just not the right word.")
Explanation: Now we can add a prompt after each meaning, asking the user to guess the word:
End of explanation
###highlight=[12,13,14,15,16,17,18,19,20,21,22]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
# Print each meaning, one at a time, and ask the user
# what word they think it is.
for meaning in python_words.values():
print("\nMeaning: %s" % meaning)
# Assume the guess is not correct; keep guessing until correct.
correct = False
while not correct:
guessed_word = input("\nWhat word do you think this is? ")
# The guess is correct if the guessed word's meaning matches the current meaning.
if python_words[guessed_word] == meaning:
print("You got it!")
correct = True
else:
print("Sorry, that's just not the right word.")
Explanation: This is starting to work, but we can see from the output that the user does not get the chance to take a second guess if they guess wrong for any meaning. We can use a while loop around the guessing code, to let the user guess until they get it right:
End of explanation
###highlight=[7,8,9,10,11,12,23,24,25]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
def show_words(python_words):
# A simple function to show the words in the dictionary.
display_message = ""
for word in python_words.keys():
display_message += word + ' '
    print(display_message)
# Print each meaning, one at a time, and ask the user
# what word they think it is.
for meaning in python_words.values():
print("\n%s" % meaning)
# Assume the guess is not correct; keep guessing until correct.
correct = False
while not correct:
print("\nWhat word do you think this is?")
show_words(python_words)
guessed_word = raw_input("- ")
# The guess is correct if the guessed word's meaning matches the current meaning.
if python_words[guessed_word] == meaning:
print("You got it!")
correct = True
else:
print("Sorry, that's just not the right word.")
Explanation: This is better. Now, if the guess is incorrect, the user is caught in a loop that they can only exit by guessing correctly. The final revision to this code is to show the user a list of words to choose from when they are asked to guess:
End of explanation
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
for word in python_words.keys():
print(word)
Explanation: top
Looping through a dictionary in order
Dictionaries are quite useful because they allow bits of information to be connected. One of the problems with dictionaries, however, is that they are not stored in any particular order. When you retrieve all of the keys or values in your dictionary, you can't be sure what order you will get them back. There is a quick and easy way to do this, however, when you want them in a particular order.
Let's take a look at the order that results from a simple call to dictionary.keys():
End of explanation
###highlight=[7]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
for word in sorted(python_words.keys()):
print(word)
Explanation: The resulting list is not in order. The list of keys can be put in order by passing the list into the sorted() function, in the line that initiates the for loop:
End of explanation
###highlight=[8]
python_words = {'list': 'A collection of values that are not connected, but have an order.',
'dictionary': 'A collection of key-value pairs.',
'function': 'A named set of instructions that defines a set of actions in Python.',
}
for word in sorted(python_words.keys()):
print("%s: %s" % (word.title(), python_words[word]))
Explanation: This approach can be used to work with the keys and values in order. For example, the words and meanings can be printed in alphabetical order by word:
End of explanation
# This program stores people's favorite numbers, and displays them.
favorite_numbers = {'eric': [3, 11, 19, 23, 42],
'ever': [2, 4, 5],
'willie': [5, 35, 120],
}
# Display each person's favorite numbers.
print("Eric's favorite numbers are:")
print(favorite_numbers['eric'])
print("\nEver's favorite numbers are:")
print(favorite_numbers['ever'])
print("\nWillie's favorite numbers are:")
print(favorite_numbers['willie'])
Explanation: In this example, the keys have been put into alphabetical order in the for loop only; Python has not changed the way the dictionary is stored at all. So the next time the dictionary is accessed, the keys could be returned in any order. There is no way to permanently specify an order for the items in an ordinary dictionary, but if you want to do this you can use the OrderedDict structure.
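For reference, here is a brief sketch of OrderedDict, which preserves the order in which key-value pairs were added:
# Brief sketch: an OrderedDict remembers the order in which key-value pairs were added.
from collections import OrderedDict

ordered_words = OrderedDict()
ordered_words['list'] = 'A collection of values that are not connected, but have an order.'
ordered_words['dictionary'] = 'A collection of key-value pairs.'
ordered_words['function'] = 'A named set of instructions that defines a set of actions in Python.'

for word, meaning in ordered_words.items():
    print("%s: %s" % (word, meaning))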
top
<a id="Exercises-looping"></a>
Exercises
Mountain Heights
Wikipedia has a list of the tallest mountains in the world, with each mountain's elevation. Pick five mountains from this list.
Create a dictionary with the mountain names as keys, and the elevations as values.
Print out just the mountains' names, by looping through the keys of your dictionary.
Print out just the mountains' elevations, by looping through the values of your dictionary.
Print out a series of statements telling how tall each mountain is: "Everest is 8848 meters tall."
Revise your output, if necessary.
Make sure there is an introductory sentence describing the output for each loop you write.
Make sure there is a blank line between each group of statements.
Mountain Heights 2
Revise your final output from Mountain Heights, so that the information is listed in alphabetical order by each mountain's name.
That is, print out a series of statements telling how tall each mountain is: "Everest is 8848 meters tall."
Make sure your output is in alphabetical order.
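Here is one possible sketch for the Mountain Heights exercises; the elevations below are approximate values in meters.
# One possible sketch for the Mountain Heights exercises; elevations are approximate.
mountain_heights = {'everest': 8848,
                    'k2': 8611,
                    'kangchenjunga': 8586,
                    'lhotse': 8516,
                    'makalu': 8485,
                    }

print("The mountains in the dictionary are:")
for mountain in mountain_heights:
    print("- %s" % mountain.title())

print("\nTheir elevations are:")
for elevation in mountain_heights.values():
    print("- %d meters" % elevation)

print("\nIn alphabetical order:")
for mountain in sorted(mountain_heights.keys()):
    print("%s is %d meters tall." % (mountain.title(), mountain_heights[mountain]))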
top
Nesting
Nesting is one of the most powerful concepts we have come to so far. Nesting involves putting a list or dictionary inside another list or dictionary. We will look at two examples here, lists inside of a dictionary and dictionaries inside of a dictionary. With nesting, the kind of information we can model in our programs is expanded greatly.
Lists in a dictionary
A dictionary connects two pieces of information. Those two pieces of information can be any kind of data structure in Python. Let's keep using strings for our keys, but let's try giving a list as a value.
The first example will involve storing a number of people's favorite numbers. The keys consist of people's names, and the values are lists of each person's favorite numbers. In this first example, we will access each person's list one at a time.
End of explanation
###highlight=[8,9,10,11]
# This program stores people's favorite numbers, and displays them.
favorite_numbers = {'eric': [3, 11, 19, 23, 42],
'ever': [2, 4, 5],
'willie': [5, 35, 120],
}
# Display each person's favorite numbers.
for name in favorite_numbers:
print("\n%s's favorite numbers are:" % name.title())
print(favorite_numbers[name])
Explanation: We are really just working our way through each key in the dictionary, so let's use a for loop to go through the keys in the dictionary:
End of explanation
###highlight=[11,12,13,14]
# This program stores people's favorite numbers, and displays them.
favorite_numbers = {'eric': [3, 11, 19, 23, 42],
'ever': [2, 4, 5],
'willie': [5, 35, 120],
}
# Display each person's favorite numbers.
for name in favorite_numbers:
print("\n%s's favorite numbers are:" % name.title())
# Each value is itself a list, so we need another for loop
# to work with the list.
for favorite_number in favorite_numbers[name]:
print(favorite_number)
Explanation: This structure is fairly complex, so don't worry if it takes a while for things to sink in. The dictionary itself probably makes sense; each person is connected to a list of their favorite numbers.
This works, but we'd rather not print raw Python in our output. Let's use a for loop to print the favorite numbers individually, rather than in a Python list.
End of explanation
###highlight=[12,13,14,15]
# This program stores people's favorite numbers, and displays them.
favorite_numbers = {'eric': [3, 11, 19, 23, 42],
'ever': [2, 4, 5],
'willie': [5, 35, 120],
}
# Display each person's favorite numbers.
for name in favorite_numbers:
print("\n%s's favorite numbers are:" % name.title())
# Each value is itself a list, so let's put that list in a variable.
current_favorite_numbers = favorite_numbers[name]
for favorite_number in current_favorite_numbers:
print(favorite_number)
Explanation: Things get a little more complicated inside the for loop. The value is a list of favorite numbers, so the for loop pulls each favorite_number out of the list one at a time. If it makes more sense to you, you are free to store the list in a new variable, and use that to define your for loop:
End of explanation
# This program stores information about pets. For each pet,
# we store the kind of animal, the owner's name, and
# whether the pet has been vaccinated.
pets = {'willie': {'kind': 'dog', 'owner': 'eric', 'vaccinated': True},
'walter': {'kind': 'cockroach', 'owner': 'eric', 'vaccinated': False},
'peso': {'kind': 'dog', 'owner': 'chloe', 'vaccinated': True},
}
# Let's show all the information for each pet.
print("Here is what I know about Willie:")
print("kind: " + pets['willie']['kind'])
print("owner: " + pets['willie']['owner'])
print("vaccinated: " + str(pets['willie']['vaccinated']))
print("\nHere is what I know about Walter:")
print("kind: " + pets['walter']['kind'])
print("owner: " + pets['walter']['owner'])
print("vaccinated: " + str(pets['walter']['vaccinated']))
print("\nHere is what I know about Peso:")
print("kind: " + pets['peso']['kind'])
print("owner: " + pets['peso']['owner'])
print("vaccinated: " + str(pets['peso']['vaccinated']))
Explanation: top
Dictionaries in a dictionary
The most powerful nesting concept we will cover right now is nesting a dictionary inside of a dictionary.
To demonstrate this, let's make a dictionary of pets, with some information about each pet. The keys for this dictionary will consist of the pet's name. The values will include information such as the kind of animal, the owner, and whether the pet has been vaccinated.
End of explanation
###highlight=[10,11,12,13,14,15]
# This program stores information about pets. For each pet,
# we store the kind of animal, the owner's name, and
# whether the pet has been vaccinated.
pets = {'willie': {'kind': 'dog', 'owner': 'eric', 'vaccinated': True},
'walter': {'kind': 'cockroach', 'owner': 'eric', 'vaccinated': False},
'peso': {'kind': 'dog', 'owner': 'chloe', 'vaccinated': True},
}
# Let's show all the information for each pet.
for pet_name, pet_information in pets.items():
print("\nHere is what I know about %s:" % pet_name.title())
print("kind: " + pet_information['kind'])
print("owner: " + pet_information['owner'])
print("vaccinated: " + str(pet_information['vaccinated']))
Explanation: Clearly this is some repetitive code, but it shows exactly how we access information in a nested dictionary. In the first set of print statements, we use the name 'willie' to unlock the 'kind' of animal he is, the 'owner' he has, and whether or not he is 'vaccinated'. We have to wrap the vaccination value in the str function so that Python knows we want the words 'True' and 'False', not the values True and False. We then do the same thing for each animal.
Let's rewrite this program, using a for loop to go through the dictionary's keys:
End of explanation
###highlight=[14,15]
# This program stores information about pets. For each pet,
# we store the kind of animal, the owner's name, and
# whether the pet has been vaccinated.
pets = {'willie': {'kind': 'dog', 'owner': 'eric', 'vaccinated': True},
'walter': {'kind': 'cockroach', 'owner': 'eric', 'vaccinated': False},
'peso': {'kind': 'dog', 'owner': 'chloe', 'vaccinated': True},
}
# Let's show all the information for each pet.
for pet_name, pet_information in pets.items():
print("\nHere is what I know about %s:" % pet_name.title())
    # Each animal's dictionary is in pet_information
for key in pet_information:
print(key + ": " + str(pet_information[key]))
Explanation: This code is much shorter and easier to maintain. But even this code will not keep up with our dictionary. If we add more information to the dictionary later, we will have to update our print statements. Let's put a second for loop inside the first loop in order to run through all the information about each pet:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23,24,25,26,27]
# This program stores information about pets. For each pet,
# we store the kind of animal, the owner's name, and
# whether the pet has been vaccinated.
pets = {'willie': {'kind': 'dog', 'owner': 'eric', 'vaccinated': True},
'walter': {'kind': 'cockroach', 'owner': 'eric', 'vaccinated': False},
'peso': {'kind': 'dog', 'owner': 'chloe', 'vaccinated': True},
}
# Let's show all the information for each pet.
for pet_name, pet_information in pets.items():
print("\nHere is what I know about %s:" % pet_name.title())
# Each animal's dictionary is in pet_information
for key in pet_information:
if key == 'owner':
# Capitalize the owner's name.
print(key + ": " + pet_information[key].title())
elif key == 'vaccinated':
# Print 'yes' for True, and 'no' for False.
vaccinated = pet_information['vaccinated']
            if vaccinated:
                print('vaccinated: yes')
            else:
                print('vaccinated: no')
else:
# No special formatting needed for this key.
print(key + ": " + pet_information[key])
Explanation: This nested loop can look pretty complicated, so again, don't worry if it doesn't make sense for a while.
The first loop gives us all the keys in the main dictionary, which consist of the name of each pet.
Each of these names can be used to 'unlock' the dictionary of each pet.
The inner loop goes through the dictionary for that individual pet, and pulls out all of the keys in that individual pet's dictionary.
We print the key, which tells us the kind of information we are about to see, and the value for that key.
You can see that we could improve the formatting in the output.
We could capitalize the owner's name.
We could print 'yes' or 'no', instead of True and False.
Let's show one last version that uses some if statements to clean up our data for printing:
End of explanation |
13,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: Contents
Introducing climlab
Using climlab to implement the zero-dimensional energy balance model
Run the zero-dimensional EBM out to equilibrium
A climate change scenario in the EBM
Further climlab resources
<a id='section1'></a>
1. Introducing climlab
climlab is a python package for process-oriented climate modeling.
It is based on a very general concept of a model as a collection of individual,
interacting processes. climlab defines a base class called Process, which
can contain an arbitrarily complex tree of sub-processes (each also some
sub-class of Process). Every climate process (radiative, dynamical,
physical, turbulent, convective, chemical, etc.) can be simulated as a stand-alone
process model given appropriate input, or as a sub-process of a more complex model.
New classes of model can easily be defined and run interactively by putting together an
appropriate collection of sub-processes.
climlab is an open-source community project. The latest code can always be found on github
Step2: <a id='section2'></a>
2. Using climlab to implement the zero-dimensional energy balance model
Recall that we have worked with a zero-dimensional Energy Balance Model
$$ C \frac{dT_s}{dt} = (1-\alpha) Q - \tau \sigma T_s^4 $$
Here we are going to implement this exact model using climlab.
Yes, we have already written code to implement this model, but we are going to repeat this effort here as a way of learning how to use climlab.
There are tools within climlab to implement much more complicated models, but the basic interface will be the same.
Step3: Here we have created a dictionary called state with a single item called Ts
Step4: This dictionary holds the state variables for our model -- which is this case is a single number! It is a temperature in degrees Celsius.
For convenience, we can access the same data as an attribute (which lets us use tab-autocomplete when doing interactive work)
Step5: It is also possible to see this state dictionary as an xarray.Dataset object
Step6: The object called ebm here is the entire model -- including its current state (the temperature Ts) as well as all the methods needed to integrated forward in time!
The current model state, accessed two ways
Step7: Here is some internal information about the timestep of the model
Step8: This says the timestep is 2592000 seconds (30 days!), and the model has taken 0 steps forward so far.
To take a single step forward
Step9: The model got colder!
To see why, let's look at some useful diagnostics computed by this model
Step10: This is another dictionary, now with two items. They should make sense to you.
Just like the state variables, we can access these diagnostics variables as attributes
Step11: So why did the model get colder in the first timestep?
What do you think will happen next?
<a id='section3'></a>
3. Run the zero-dimensional EBM out to equilibrium
Let's look at how the model adjusts toward its equilibrium temperature.
Exercise
Step12: The parameter tau is a property of the OutgoingLongwave subprocess
Step13: and the parameter albedo is a property of the AbsorbedShortwave subprocess
Step14: Let's make an exact clone of our model and then change these two parameters
Step15: Now our model is out of equilibrium and the climate will change!
To see this without actually taking a step forward
Step16: Should the model warm up or cool down?
Well, we can find out
Step17: Automatic timestepping
Often we want to integrate a model forward in time to equilibrium without needing to store information about the transient state.
climlab offers convenience methods to do this easily
Step18: <a id='section5'></a>
5. Further climlab resources
We will be using climlab extensively throughout this course. Lots of examples of more advanced usage are found here in the course notes. Here are some links to other resources | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 4: Building simple climate models using climlab
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
Explanation: Contents
Introducing climlab
Using climlab to implement the zero-dimensional energy balance model
Run the zero-dimensional EBM out to equilibrium
A climate change scenario in the EBM
Further climlab resources
<a id='section1'></a>
1. Introducing climlab
climlab is a python package for process-oriented climate modeling.
It is based on a very general concept of a model as a collection of individual,
interacting processes. climlab defines a base class called Process, which
can contain an arbitrarily complex tree of sub-processes (each also some
sub-class of Process). Every climate process (radiative, dynamical,
physical, turbulent, convective, chemical, etc.) can be simulated as a stand-alone
process model given appropriate input, or as a sub-process of a more complex model.
New classes of model can easily be defined and run interactively by putting together an
appropriate collection of sub-processes.
climlab is an open-source community project. The latest code can always be found on github:
https://github.com/brian-rose/climlab
You can install climlab by doing
conda install -c conda-forge climlab
End of explanation
# create a zero-dimensional domain with a single surface temperature
state = climlab.surface_state(num_lat=1, # a single point
water_depth = 100., # 100 meters slab of water (sets the heat capacity)
)
state
Explanation: <a id='section2'></a>
2. Using climlab to implement the zero-dimensional energy balance model
Recall that we have worked with a zero-dimensional Energy Balance Model
$$ C \frac{dT_s}{dt} = (1-\alpha) Q - \tau \sigma T_s^4 $$
Here we are going to implement this exact model using climlab.
Yes, we have already written code to implement this model, but we are going to repeat this effort here as a way of learning how to use climlab.
There are tools within climlab to implement much more complicated models, but the basic interface will be the same.
End of explanation
state['Ts']
Explanation: Here we have created a dictionary called state with a single item called Ts:
End of explanation
state.Ts
Explanation: This dictionary holds the state variables for our model -- which is this case is a single number! It is a temperature in degrees Celsius.
For convenience, we can access the same data as an attribute (which lets us use tab-autocomplete when doing interactive work):
End of explanation
climlab.to_xarray(state)
# create the longwave radiation process
olr = climlab.radiation.Boltzmann(name='OutgoingLongwave',
state=state,
tau = 0.612,
eps = 1.,
timestep = 60*60*24*30.)
# Look at what we just created
print(olr)
# create the shortwave radiation process
asr = climlab.radiation.SimpleAbsorbedShortwave(name='AbsorbedShortwave',
state=state,
insolation=341.3,
albedo=0.299,
timestep = 60*60*24*30.)
# Look at what we just created
print(asr)
# couple them together into a single model
ebm = olr + asr
# Give the parent process name
ebm.name = 'EnergyBalanceModel'
# Examine the model object
print(ebm)
Explanation: It is also possible to see this state dictionary as an xarray.Dataset object:
End of explanation
ebm.state
ebm.Ts
Explanation: The object called ebm here is the entire model -- including its current state (the temperature Ts) as well as all the methods needed to integrate it forward in time!
The current model state, accessed two ways:
End of explanation
print(ebm.time['timestep'])
print(ebm.time['steps'])
Explanation: Here is some internal information about the timestep of the model:
End of explanation
ebm.step_forward()
ebm.Ts
Explanation: This says the timestep is 2592000 seconds (30 days!), and the model has taken 0 steps forward so far.
To take a single step forward:
End of explanation
ebm.diagnostics
Explanation: The model got colder!
To see why, let's look at some useful diagnostics computed by this model:
End of explanation
ebm.OLR
ebm.ASR
Explanation: This is another dictionary, now with two items. They should make sense to you.
Just like the state variables, we can access these diagnostics variables as attributes:
End of explanation
for name, process in ebm.subprocess.items():
print(name)
print(process)
Explanation: So why did the model get colder in the first timestep?
What do you think will happen next?
<a id='section3'></a>
3. Run the zero-dimensional EBM out to equilibrium
Let's look at how the model adjusts toward its equilibrium temperature.
Exercise:
Using a for loop, take 500 steps forward with this model
Store the current temperature at each step in an array
Make a graph of the temperature as a function of time
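One possible sketch of this exercise (an outline under our own naming, reusing the 30-day timestep noted above; not the only way to do it):
num_steps = 500
temps = np.zeros(num_steps)
days = np.arange(num_steps) * 30.  # each step above is 30 days
for i in range(num_steps):
    ebm.step_forward()
    temps[i] = float(ebm.Ts)  # Ts is a single-element array; cast to a plain float
plt.plot(days / 365., temps)
plt.xlabel('Years')
plt.ylabel('Surface temperature (deg C)')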
<a id='section4'></a>
4. A climate change scenario
Suppose we want to investigate the effects of a small decrease in the transmissivity of the atmosphere tau.
Previously we used the zero-dimensional model to investigate a hypothetical climate change scenario in which:
- the transmissivity of the atmosphere tau decreases to 0.57
- the planetary albedo increases to 0.32
How would we do that using climlab?
Recall that the model is comprised of two sub-components:
End of explanation
ebm.subprocess['OutgoingLongwave'].tau
Explanation: The parameter tau is a property of the OutgoingLongwave subprocess:
End of explanation
ebm.subprocess['AbsorbedShortwave'].albedo
Explanation: and the parameter albedo is a property of the AbsorbedShortwave subprocess:
End of explanation
ebm2 = climlab.process_like(ebm)
print(ebm2)
ebm2.subprocess['OutgoingLongwave'].tau = 0.57
ebm2.subprocess['AbsorbedShortwave'].albedo = 0.32
Explanation: Let's make an exact clone of our model and then change these two parameters:
End of explanation
# Computes diagnostics based on current state but does not change the state
ebm2.compute_diagnostics()
ebm2.ASR - ebm2.OLR
Explanation: Now our model is out of equilibrium and the climate will change!
To see this without actually taking a step forward:
End of explanation
ebm2.Ts
ebm2.step_forward()
ebm2.Ts
Explanation: Should the model warm up or cool down?
Well, we can find out:
End of explanation
ebm3 = climlab.process_like(ebm2)
ebm3.integrate_years(50)
# What is the current temperature?
ebm3.Ts
# How close are we to energy balance?
ebm3.ASR - ebm3.OLR
# We should be able to accomplish the exact same thing with explicit timestepping
for n in range(608):
ebm2.step_forward()
ebm2.Ts
ebm2.ASR - ebm2.OLR
Explanation: Automatic timestepping
Often we want to integrate a model forward in time to equilibrium without needing to store information about the transient state.
climlab offers convenience methods to do this easily:
End of explanation
%load_ext version_information
%version_information numpy, matplotlib, climlab
Explanation: <a id='section5'></a>
5. Further climlab resources
We will be using climlab extensively throughout this course. Lots of examples of more advanced usage are found here in the course notes. Here are some links to other resources:
The documentation is hosted at https://climlab.readthedocs.io/en/latest/
Source code (for both software and docs) are at https://github.com/brian-rose/climlab
A video of a talk I gave about climlab at the 2018 AMS Python symposium (January 2018)
Slides from a talk and demonstration that I gave in February 2018 (The Apple Keynote version contains some animations that will not show up in the pdf version)
Version information
End of explanation |
13,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic Flux Balance Analysis (dFBA) in COBRApy
The following notebook shows a simple, but slow example of implementing dFBA using COBRApy and scipy.integrate.solve_ivp. This notebook shows a static optimization approach (SOA) implementation and should not be considered production ready.
The model considers only basic Michaelis-Menten limited growth on glucose.
Step1: Create or load a cobrapy model. Here, we use the 'textbook' e-coli core model.
Step5: Set up the dynamic system
Dynamic flux balance analysis couples a dynamic system in external cellular concentrations to a pseudo-steady state metabolic model.
In this notebook, we define the function add_dynamic_bounds(model, y) to convert the external metabolite concentrations into bounds on the boundary fluxes in the metabolic model.
Step6: Run the dynamic FBA simulation
Step7: Because the culture runs out of glucose, the simulation terminates early. The exact time of this 'cell death' is recorded in sol.t_events.
Step8: Plot timelines of biomass and glucose | Python Code:
import numpy as np
from tqdm import tqdm
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Dynamic Flux Balance Analysis (dFBA) in COBRApy
The following notebook shows a simple, but slow example of implementing dFBA using COBRApy and scipy.integrate.solve_ivp. This notebook shows a static optimization approach (SOA) implementation and should not be considered production ready.
The model considers only basic Michaelis-Menten limited growth on glucose.
End of explanation
import cobra
from cobra.io import load_model
model = load_model('textbook')
Explanation: Create or load a cobrapy model. Here, we use the 'textbook' e-coli core model.
End of explanation
def add_dynamic_bounds(model, y):
    """Use external concentrations to bound the uptake flux of glucose."""
biomass, glucose = y # expand the boundary species
glucose_max_import = -10 * glucose / (5 + glucose)
model.reactions.EX_glc__D_e.lower_bound = glucose_max_import
def dynamic_system(t, y):
    """Calculate the time derivative of external species."""
biomass, glucose = y # expand the boundary species
    # Calculate the specific exchange fluxes at the given external concentrations.
with model:
add_dynamic_bounds(model, y)
cobra.util.add_lp_feasibility(model)
feasibility = cobra.util.fix_objective_as_constraint(model)
lex_constraints = cobra.util.add_lexicographic_constraints(
model, ['Biomass_Ecoli_core', 'EX_glc__D_e'], ['max', 'max'])
# Since the calculated fluxes are specific rates, we multiply them by the
# biomass concentration to get the bulk exchange rates.
fluxes = lex_constraints.values
fluxes *= biomass
# This implementation is **not** efficient, so I display the current
# simulation time using a progress bar.
if dynamic_system.pbar is not None:
dynamic_system.pbar.update(1)
dynamic_system.pbar.set_description('t = {:.3f}'.format(t))
return fluxes
dynamic_system.pbar = None
def infeasible_event(t, y):
    """Determine solution feasibility.

    Avoiding infeasible solutions is handled by solve_ivp's built-in event detection.
    This function re-solves the LP to determine whether or not the solution is feasible
    (and if not, how far it is from feasibility). When the sign of this function changes
    from -epsilon to positive, we know the solution is no longer feasible.
    """
with model:
add_dynamic_bounds(model, y)
cobra.util.add_lp_feasibility(model)
feasibility = cobra.util.fix_objective_as_constraint(model)
return feasibility - infeasible_event.epsilon
infeasible_event.epsilon = 1E-6
infeasible_event.direction = 1
infeasible_event.terminal = True
Explanation: Set up the dynamic system
Dynamic flux balance analysis couples a dynamic system in external cellular concentrations to a pseudo-steady state metabolic model.
In this notebook, we define the function add_dynamic_bounds(model, y) to convert the external metabolite concentrations into bounds on the boundary fluxes in the metabolic model.
End of explanation
ts = np.linspace(0, 15, 100) # Desired integration resolution and interval
y0 = [0.1, 10]
with tqdm() as pbar:
dynamic_system.pbar = pbar
sol = solve_ivp(
fun=dynamic_system,
events=[infeasible_event],
t_span=(ts.min(), ts.max()),
y0=y0,
t_eval=ts,
rtol=1e-6,
atol=1e-8,
method='BDF'
)
Explanation: Run the dynamic FBA simulation
End of explanation
sol
Explanation: Because the culture runs out of glucose, the simulation terminates early. The exact time of this 'cell death' is recorded in sol.t_events.
End of explanation
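As a quick check (a minimal sketch, output not shown), the recorded event time can be read straight off the solution object:
# solve_ivp returns one array of event times per event function; here there is just one.
print(sol.t_events[0])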
ax = plt.subplot(111)
ax.plot(sol.t, sol.y.T[:, 0])
ax2 = plt.twinx(ax)
ax2.plot(sol.t, sol.y.T[:, 1], color='r')
ax.set_ylabel('Biomass', color='b')
ax2.set_ylabel('Glucose', color='r')
Explanation: Plot timelines of biomass and glucose
End of explanation |
13,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step7: GRU class and functions
Step8: Placeholder and initializers
Step9: Models
Step10: Dataset Preparation | Python Code:
import numpy as np
import tensorflow as tf
from sklearn import datasets
from sklearn.cross_validation import train_test_split
import pylab as pl
from IPython import display
import sys
%matplotlib inline
Explanation: <span style="color:green"> GRU ON 8*8 MNIST DATASET TO PREDICT TEN CLASS
<span style="color:blue">It's a dynamic sequence-and-batch GRU RNN, created with TensorFlow's scan and map higher-order ops.
<span style="color:blue">This is a base RNN which can be used to create an LSTM, Neural Stack Machine, Neural Turing Machine, RNN-EM, and so on!
Importing Libraries
End of explanation
class RNN_cell(object):
    """RNN cell object which takes 3 arguments for initialization.

    input_size = Input Vector size
    hidden_layer_size = Hidden layer size
    target_size = Output vector size
    """
def __init__(self, input_size, hidden_layer_size, target_size):
#Initialization of given values
self.input_size = input_size
self.hidden_layer_size = hidden_layer_size
self.target_size = target_size
# Weights for input and hidden tensor
self.Wx = tf.Variable(tf.zeros([self.input_size,self.hidden_layer_size]))
self.Wr = tf.Variable(tf.zeros([self.input_size,self.hidden_layer_size]))
self.Wz = tf.Variable(tf.zeros([self.input_size,self.hidden_layer_size]))
self.br = tf.Variable(tf.truncated_normal([self.hidden_layer_size],mean=1))
self.bz = tf.Variable(tf.truncated_normal([self.hidden_layer_size],mean=1))
self.Wh = tf.Variable(tf.zeros([self.hidden_layer_size,self.hidden_layer_size]))
#Weights for output layer
self.Wo = tf.Variable(tf.truncated_normal([self.hidden_layer_size,self.target_size],mean=1,stddev=.01))
self.bo = tf.Variable(tf.truncated_normal([self.target_size],mean=1,stddev=.01))
# Placeholder for input vector with shape[batch, seq, embeddings]
self._inputs = tf.placeholder(tf.float32,
shape=[None, None, self.input_size],
name='inputs')
# Processing inputs to work with scan function
self.processed_input = process_batch_input_for_RNN(self._inputs)
'''
Initial hidden state's shape is [1,self.hidden_layer_size]
        In the first time step, we are doing a dot product with weights to
get the shape of [batch_size, self.hidden_layer_size].
        For this dot product TensorFlow uses broadcasting. But during
Back propagation a low level error occurs.
So to solve the problem it was needed to initialize initial
        hidden state of size [batch_size, self.hidden_layer_size].
So here is a little hack !!!! Getting the same shaped
initial hidden state of zeros.
'''
self.initial_hidden = self._inputs[:, 0, :]
self.initial_hidden = tf.matmul(
self.initial_hidden, tf.zeros([input_size, hidden_layer_size]))
#Function for GRU cell
def Gru(self, previous_hidden_state, x):
        """GRU Equations."""
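        # z is the update gate and r the reset gate; h_ is the candidate hidden state,
        # and the new state blends the previous state with h_ according to z.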
z= tf.sigmoid(tf.matmul(x,self.Wz)+ self.bz)
r= tf.sigmoid(tf.matmul(x,self.Wr)+ self.br)
h_= tf.tanh(tf.matmul(x,self.Wx) + tf.matmul(previous_hidden_state,self.Wh)*r)
current_hidden_state = tf.multiply((1-z),h_) + tf.multiply(previous_hidden_state,z)
return current_hidden_state
# Function for getting all hidden state.
def get_states(self):
        """Iterates through time/sequence to get all hidden states."""
        # Getting all hidden states through time
all_hidden_states = tf.scan(self.Gru,
self.processed_input,
initializer=self.initial_hidden,
name='states')
return all_hidden_states
# Function to get output from a hidden layer
def get_output(self, hidden_state):
        """This function takes a hidden state and returns the output."""
output = tf.nn.relu(tf.matmul(hidden_state, self.Wo) + self.bo)
return output
# Function for getting all output layers
def get_outputs(self):
        """Iterating through hidden states to get outputs for all timestamps."""
all_hidden_states = self.get_states()
all_outputs = tf.map_fn(self.get_output, all_hidden_states)
return all_outputs
# Function to convert batch input data to use scan ops of tensorflow.
def process_batch_input_for_RNN(batch_input):
    """Process tensor of size [5,3,2] to [3,5,2]."""
batch_input_ = tf.transpose(batch_input, perm=[2, 0, 1])
X = tf.transpose(batch_input_)
return X
Explanation: GRU class and functions
End of explanation
hidden_layer_size = 30
input_size = 8
target_size = 10
y = tf.placeholder(tf.float32, shape=[None, target_size],name='inputs')
Explanation: Placeholder and initializers
End of explanation
#Initializing rnn object
rnn=RNN_cell( input_size, hidden_layer_size, target_size)
#Getting all outputs from rnn
outputs = rnn.get_outputs()
#Getting the final output by indexing the last time step
last_output = outputs[-1]
#As the RNN model outputs the final layer through a ReLU activation, softmax is used for the final output.
output=tf.nn.softmax(last_output)
#Computing the Cross Entropy loss
cross_entropy = -tf.reduce_sum(y * tf.log(output))
# Training with the Adam optimizer
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)
#Calculation of correct predictions and accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(output,1))
accuracy = (tf.reduce_mean(tf.cast(correct_prediction, tf.float32)))*100
Explanation: Models
End of explanation
#Function to get a one-hot encoding
def get_on_hot(number):
on_hot=[0]*10
on_hot[number]=1
return on_hot
#Using sklearn's 8x8 digits dataset.
digits = datasets.load_digits()
X=digits.images
Y_=digits.target
Y=list(map(get_on_hot,Y_))  # list() so the labels can be indexed and split under Python 3
#Getting Train and test Dataset
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.22, random_state=42)
#Cutting for simple iteration
X_train=X_train[:1400]
y_train=y_train[:1400]
sess=tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
#Iterations to do training
for epoch in range(200):
start=0
end=100
for i in range(14):
X=X_train[start:end]
Y=y_train[start:end]
start=end
end=start+100
sess.run(train_step,feed_dict={rnn._inputs:X, y:Y})
Loss=str(sess.run(cross_entropy,feed_dict={rnn._inputs:X, y:Y}))
Train_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_train, y:y_train}))
Test_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_test, y:y_test}))
pl.plot([epoch],Loss,'b.',)
pl.plot([epoch],Train_accuracy,'r*',)
pl.plot([epoch],Test_accuracy,'g+')
display.clear_output(wait=True)
display.display(pl.gcf())
sys.stdout.flush()
print("\rIteration: %s Loss: %s Train Accuracy: %s Test Accuracy: %s"%(epoch,Loss,Train_accuracy,Test_accuracy)),
sys.stdout.flush()
Explanation: Dataset Preparation
End of explanation |
13,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Array views and slicing
A NumPy array is an object of numpy.ndarray type
Step1: All ndarrays have a .base attribute.
If this attribute is not None, then the array is a view of some other object's memory, typically another ndarray.
This is a very powerful tool, because allocating memory and copying memory contents are expensive operations, but updating metadata on how to interpret some already allocated memory is cheap!
The simplest way of creating an array's view is by slicing it
Step2: Let's look more closely at what an array's metadata looks like. NumPy provides the np.info function, which can list for us some low level attributes of an array
Step5: By the end of the workshop you will understand what most of these mean.
But rather than listen through a lesson, you get to try and figure what they mean yourself.
To help you with that, here's a function that prints the information from two arrays side by side
Step6: Exercise 1.
Create a one dimensional NumPy array with a few items (consider using np.arange).
Compare the printout of np.info on your array and on slices of it (use the [start
Step7: Exercise 1 debrief
Every array has an underlying block of memory assigned to it.
When we slice an array, rather than making a copy of it, NumPy makes a view, reusing the memory block, but interpreting it differently.
Let's take a look at what NumPy did for us in the above examples, and make sense of some of the changes to info.
shape
Step8: A look at data types
Similarly to how we can change the shape, strides and data pointer of an array through slicing, we can change how its items are interpreted by changing its data type.
This is done by calling the array's .view() method, and passing it the new data type.
But before we go there, let's look a little closer at dtypes. You are hopefully familiar with the basic NumPy numerical data types
Step9: The Constructor They Don't Want You To Know About.
You typically construct your NumPy arrays using one of the many factory functions provided, np.array() being the most popular.
But it is also possible to call the np.ndarray object constructor directly.
You will typically not want to do this, because there are probably simpler alternatives.
But it is a great way of putting your understanding of views of arrays to the test!
You can check the full documentation, but the np.ndarray constructor takes the following arguments that we care about
Step10: Reshaping Into Higher Dimensions
So far we have stuck to one dimensional arrays. Things get substantially more interesting when we move into higher dimensions.
One way of getting views with a different number of dimensions is by using the .reshape() method of NumPy arrays, or the equivalent np.reshape() function.
The first argument to any of the reshape functions is the new shape of the array. When providing it, keep in mind
Step11: Exercise 5 debrief
As the examples show, an n-dimensional array will have an n item tuple .shape and .strides. The number of dimensions can be directly queried from the .ndim attribute.
The shape tells us how large the array is along each dimension, the strides tell us how many bytes to skip in memory to get to the next item along each dimension.
When we reshape an array using C order, a.k.a. row major order, items along higher dimensions are closer in memory. When we use Fortran order, a.k.a. column major order, it is items along smaller dimensions that are closer.
Reshaping with a purpose
One typical use of reshaping is to apply some aggregation function to equal subdivision of an array.
Say you have, e.g. a 12 item 1D array, and would like to compute the sum of every three items. This is how this is typically accomplished
Step12: You can apply fancier functions than .sum(), e.g. let's compute the variance of each group
Step13: Exercise 6
Your turn to do a fancier reshaping
Step14: Rearranging dimensions
Once we have a multidimensional array, rearranging the order of its dimensions is as simple as rearranging its .shape and .strides attributes. You could do this with np.ndarray, but it would be a pain. NumPy has a bunch of functions for doing that, but they are all watered down versions of np.transpose, which takes a tuple with the desired permutation of the array dimensions.
Exercise 7
Write a function roll_axis_to_end that takes an array and an axis, and makes that axis the last dimension of the array.
For extra credit, rewrite your function using np.ndarray.
Step15: Playing with strides
For the rest of the workshop we are going to do some fancy tricks with strides, to create interesting views of an existing array.
Exercise 8
Create a function to extract the diagonal of a 2-D array, using the np.ndarray constructor.
Step16: Exercise 9
Something very interesting happens when we set a stride to zero. Give that idea some thought and then
Step17: Exercise 10
In the last exercise we used zero strides to reuse an item more than once in the resulting view. Let's try to build on that idea
Step18: Parting pro tip
NumPy's worst kept secret is the existence of a mostly undocumented, mostly hidden, as_strided function, that makes creating views with funny strides much easier (and also much more dangerous!) than using np.ndarray. Here's the available documentation | Python Code:
a = np.arange(3)
type(a)
Explanation: Array views and slicing
A NumPy array is an object of numpy.ndarray type:
End of explanation
a = np.arange(3)
a.base is None
a[:].base is None
Explanation: All ndarrays have a .base attribute.
If this attribute is not None, then the array is a view of some other object's memory, typically another ndarray.
This is a very powerful tool, because allocating memory and copying memory contents are expensive operations, but updating metadata on how to interpret some already allocated memory is cheap!
The simplest way of creating an array's view is by slicing it:
End of explanation
np.info(a)
Explanation: Let's look more closely at what an array's metadata looks like. NumPy provides the np.info function, which can list for us some low level attributes of an array:
End of explanation
def info_for_two(one_array, another_array):
    """Prints side-by-side results of running np.info on its inputs."""
def info_as_ordered_dict(array):
        """Converts the return of np.info into an ordered dict."""
import collections
import io
buffer = io.StringIO()
np.info(array, output=buffer)
data = (
item.split(':') for item in buffer.getvalue().strip().split('\n'))
return collections.OrderedDict(
((key, value.strip()) for key, value in data))
one_dict = info_as_ordered_dict(one_array)
another_dict = info_as_ordered_dict(another_array)
name_w = max(len(name) for name in one_dict.keys())
one_w = max(len(name) for name in one_dict.values())
another_w = max(len(name) for name in another_dict.values())
output = (
f'{name:<{name_w}} : {one:>{one_w}} : {another:>{another_w}}'
for name, one, another in zip(
one_dict.keys(), one_dict.values(), another_dict.values()))
print('\n'.join(output))
Explanation: By the end of the workshop you will understand what most of these mean.
But rather than listen through a lesson, you get to try and figure what they mean yourself.
To help you with that, here's a function that prints the information from two arrays side by side:
End of explanation
# Your code goes here
Explanation: Exercise 1.
Create a one dimensional NumPy array with a few items (consider using np.arange).
Compare the printout of np.info on your array and on slices of it (use the [start:stop:step] indexing syntax, and make sure to try steps other than one).
Do you see any patterns?
End of explanation
# Your code goes here
Explanation: Exercise 1 debrief
Every array has an underlying block of memory assigned to it.
When we slice an array, rather than making a copy of it, NumPy makes a view, reusing the memory block, but interpreting it differently.
Let's take a look at what NumPy did for us in the above examples, and make sense of some of the changes to info.
shape: for a one dimensional array shape is a single item tuple, equal to the total number of items in the array. You can get the shape of an array as its .shape attribute.
strides: is also a single item tuple for one-dimensional arrays, its value being the number of bytes to skip in memory to get to the next item. And yes, strides can be negative. You can get this as the .strides attribute of any array.
data pointer: this is the address in memory of the first byte of the first item of the array. Note that this doesn't have to be the same as the first byte of the underlying memory block! You rarely need to know the exact address of the data pointer, but it's part of the string representation of the arrays .data attribute.
itemsize: this isn't properly an attribute of the array, but of its data type. It is the number of bytes that an array item takes up in memory. You can get this value from an array as the .itemsize attribute of its .dtype attribute, i.e. array.dtype.itemsize.
type: this lets us know how each array item should be interpreted e.g. for calculations. We'll talk more about this later, but you can get an array's type object through its .dtype attribute.
contiguous: this is one of several boolean flags of an array. Its meaning is a little more specific, but for now let's say it tells us whether the array items use the memory block efficiently, without leaving unused spaces between items. Its value can be checked as the .contiguous attribute of the array's .flags attribute
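For reference, all of these can be read straight off an array; a tiny illustration (the exact values depend on your machine):
a = np.arange(6, dtype=np.int64)
a.shape, a.strides, a.dtype.itemsize, a.flags.contiguous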
Exercise 2
Take a couple of minutes to familiarize yourself with the NumPy array's attributes discussed above:
Create a small one dimensional array of your choosing.
Look at its .shape, .strides, .dtype, .flags and .data attributes.
For .dtype and .flags, store them into a separate variable, and use tab completion on those to explore their subattributes.
End of explanation
# Your code goes here
Explanation: A look at data types
Similarly to how we can change the shape, strides and data pointer of an array through slicing, we can change how its items are interpreted by changing its data type.
This is done by calling the array's .view() method, and passing it the new data type.
But before we go there, let's look a little closer at dtypes. You are hopefully familiar with the basic NumPy numerical data types:
| Type Family | NumPy Defined Types | Character Codes |
| :---: |
| boolean | np.bool | '?' |
| unsigned integers | np.uint8 - np.uint64 | 'u1', 'u2', 'u4', 'u8' |
| signed integers | np.int8 - np.int64 | 'i1', 'i2', 'i4', 'i8' |
| floating point | np.float16 - np.float128 | 'f2', 'f4', 'f8', 'f16' |
| complex | np.complex64, np.complex128 | 'c8', 'c16' |
You can create a new data type by calling its constructor, np.dtype(), with either a NumPy defined type, or the character code.
Character codes can have '<' or '>' prepended, to indicate whether the type is little or big endian. If unspecified, native encoding is used, which for all practical purposes is going to be little endian.
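A couple of quick examples (purely illustrative):
np.dtype('u2')    # native-order 16-bit unsigned integer
np.dtype('<f4')   # little endian 32-bit float
np.dtype('>f4')   # big endian 32-bit float
np.dtype('<f4') == np.dtype(np.float32)  # True on a little endian machine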
Exercise 3
Let's play a little with dtype views:
Create a simple array of a type you feel comfortable you understand, e.g. np.arange(4, dtype=np.uint16).
Take a view of type np.uint8 of your array. This will give you the raw byte contents of your array. Is this what you were expecting?
Take a few views of your array, with dtypes of larger itemsize, or changing the endianness of the data type. Try to predict what the output will be before running the examples.
Take a look at the wikipedia page on single precision floating point numbers, more specifically its examples of encodings. Create arrays of four np.uint8 values which, when viewed as a np.float32 give the values 1, -2, and 1/3.
End of explanation
# Your code goes here
Explanation: The Constructor They Don't Want You To Know About.
You typically construct your NumPy arrays using one of the many factory functions provided, np.array() being the most popular.
But it is also possible to call the np.ndarray object constructor directly.
You will typically not want to do this, because there are probably simpler alternatives.
But it is a great way of putting your understanding of views of arrays to the test!
You can check the full documentation, but the np.ndarray constructor takes the following arguments that we care about:
shape: the shape of the returned array,
dtype: the data type of the returned array,
buffer: an object to reuse the underlying memory from, e.g. an existing array or its .data attribute,
offset: by how many bytes to move the starting data pointer of the returned array relative to the passed buffer,
strides: the strides of the returned array.
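As a small taste (not the solution to the exercise below), here is a view of every other item of an array, built directly with the constructor; the variable names are ours:
a = np.arange(6)
every_other = np.ndarray(shape=(3,), dtype=a.dtype, buffer=a,
                         offset=0, strides=(2 * a.itemsize,))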
Exercise 4
Write a function, using the np.ndarray constructor, that takes a one dimensional array and returns a reversed view of it.
End of explanation
# Your code goes here
Explanation: Reshaping Into Higher Dimensions
So far we have stuck to one dimensional arrays. Things get substantially more interesting when we move into higher dimensions.
One way of getting views with a different number of dimensions is by using the .reshape() method of NumPy arrays, or the equivalent np.reshape() function.
The first argument to any of the reshape functions is the new shape of the array. When providing it, keep in mind:
the total size of the array must stay unchanged, i.e. the product of the values of the new shape tuple must be equal to the product of the values of the old shape tuple.
by entering -1 for one of the new dimensions, you can have NumPy compute its value for you, but the other dimensions must be compatible with the calculated one being an integer.
.reshape() can also take an order= kwarg, which can be set to 'C' (as the programming language) or 'F' (for the Fortran programming language). These correspond to row and column major orders, respectively.
Exercise 5
Let's look at how multidimensional arrays are represented in NumPy with an exercise.
Create a small linear array with a total length that is a multiple of two different small primes, e.g. 6 = 2 * 3.
Reshape the array into a two dimensional one, starting with the default order='C'. Try both possible combinations of rows and columns, e.g. (2, 3) and (3, 2). Look at the resulting arrays, and compare their metadata. Do you understand what's going on?
Try the same reshaping with order='F'. Can you see what the differences are?
If you feel confident with these, give a higher dimensional array a try.
End of explanation
a = np.arange(12, dtype=float)
a
a.reshape(4, 3).sum(axis=-1)
Explanation: Exercise 5 debrief
As the examples show, an n-dimensional array will have an n item tuple .shape and .strides. The number of dimensions can be directly queried from the .ndim attribute.
The shape tells us how large the array is along each dimension, the strides tell us how many bytes to skip in memory to get to the next item along each dimension.
When we reshape an array using C order, a.k.a. row major order, items along higher dimensions are closer in memory. When we use Fortran order, a.k.a. column major order, it is items along smaller dimensions that are closer.
Reshaping with a purpose
One typical use of reshaping is to apply some aggregation function to equal subdivision of an array.
Say you have, e.g. a 12 item 1D array, and would like to compute the sum of every three items. This is how this is typically accomplished:
End of explanation
a.reshape(4, 3).var(axis=-1)
Explanation: You can apply fancier functions than .sum(), e.g. let's compute the variance of each group:
End of explanation
# Your code goes here
Explanation: Exercise 6
Your turn to do a fancier reshaping: we will compute the average of a 2D array over non-overlapping rectangular patches:
Choose two small numbers m and n, e.g. 3 and 4.
Create a 2D array, with number of rows a multiple of one of those numbers, and number of columns a multiple of the other, e.g. 15 x 24.
Reshape and aggregate to create a 2D array holding the sums over non overlapping m x n tiles, e.g. a 5 x 6 array.
Hint: .sum() can take a tuple of integers as axis=, so you can do the whole thing in a single reshape from 2D to 4D, then aggregate back to 2D. If you find this confusing, doing two aggregations will also work.
End of explanation
# Your code goes here
Explanation: Rearranging dimensions
Once we have a multidimensional array, rearranging the order of its dimensions is as simple as rearranging its .shape and .strides attributes. You could do this with np.ndarray, but it would be a pain. NumPy has a bunch of functions for doing that, but they are all watered down versions of np.transpose, which takes a tuple with the desired permutation of the array dimensions.
Exercise 7
Write a function roll_axis_to_end that takes an array and an axis, and makes that axis the last dimension of the array.
For extra credit, rewrite your function using np.ndarray.
End of explanation
# Your code goes here
Explanation: Playing with strides
For the rest of the workshop we are going to do some fancy tricks with strides, to create interesting views of an existing array.
Exercise 8
Create a function to extract the diagonal of a 2-D array, using the np.ndarray constructor.
End of explanation
# Your code goes here
Explanation: Exercise 9
Something very interesting happens when we set a stride to zero. Give that idea some thought and then:
Create two functions, stacked_column_vector and stacked_row_vector, that take a 1D array (the vector), and an integer n, and create a 2D view of the array that stacks n copies of the vector, either as columns or rows of the view.
Use these functions to create an outer_product function that takes two 1D vectors and computes their outer product.
End of explanation
# Your code goes here
Explanation: Exercise 10
In the last exercise we used zero strides to reuse an item more than once in the resulting view. Let's try to build on that idea:
Write a function that takes a 1D array and a window integer value, and creates a 2D view of the array, each row a view through a sliding window of size window into the original array.
Hint: There are len(array) - window + 1 such "views through a window".
Another hint: Here's a small example expected run:
>>> sliding_window(np.arange(4), 2)
[[0, 1],
[1, 2],
[2, 3]]
End of explanation
from numpy.lib.stride_tricks import as_strided
np.info(as_strided)
Explanation: Parting pro tip
NumPy's worst kept secret is the existence of a mostly undocumented, mostly hidden, as_strided function, that makes creating views with funny strides much easier (and also much more dangerous!) than using np.ndarray. Here's the available documentation:
End of explanation |
13,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists and Tuples
In this notebook, you will learn to store more than one value in a single variable. This by itself is one of the most powerful ideas in programming, and it introduces a number of other central concepts such as loops. If this section ends up making sense to you, you will be able to start writing some interesting programs, and you can be more confident that you will be able to develop overall competence as a programmer.
Previous
Step1: Naming and defining a list
Since lists are collections of objects, it is good practice to give them a plural name. If each item in your list is a car, call the list 'cars'. If each item is a dog, call your list 'dogs'. This gives you a straightforward way to refer to the entire list ('dogs'), and to a single item in the list ('dog').
In Python, square brackets designate a list. To define a list, you give the name of the list, the equals sign, and the values you want to include in your list within square brackets.
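For example (the names here are purely illustrative):
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']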
Step2: Accessing one item in a list
Items in a list are identified by their position in the list, starting with zero. This will almost certainly trip you up at some point. Programmers even joke about how often we all make "off-by-one" errors, so don't feel bad when you make this kind of error.
To access the first element in a list, you give the name of the list, followed by a zero in square brackets.
Step3: The number in square brackets is called the index of the item. Because lists start at zero, the index of an item is always one less than its position in the list. So to get the second item in the list, we need to use an index of 1.
Step4: Accessing the last items in a list
You can probably see that to get the last item in this list, we would use an index of 2. This works, but it would only work because our list has exactly three items. To get the last item in a list, no matter how long the list is, you can use an index of -1.
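A quick illustration with the example list above:
print(dogs[0])    # border collie -- the first item
print(dogs[-1])   # labrador retriever -- the last item, however long the list grows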
Step5: This syntax also works for the second to last item, the third to last, and so forth.
Step6: You can't use a negative number larger than the length of the list, however.
Step7: top
<a id="Exercises-lists"></a>
Exercises
First List
Store the values 'python', 'c', and 'java' in a list. Print each of these values out, using their position in the list.
First Neat List
Store the values 'python', 'c', and 'java' in a list. Print a statement about each of these values, using their position in the list.
Your statement could simply be, 'A nice programming language is value.'
Your First List
Think of something you can store in a list. Make a list with three or four items, and then print a message that includes at least one item from your list. Your sentence could be as simple as, "One item in my list is a ____."
top
Lists and Looping
Accessing all elements in a list
This is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.
We use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.
Let's take a look at how we access all the items in a list, and then try to understand how it works.
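In sketch form, the loop looks like this (the notebook's own cell follows in the next step):
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
    print(dog)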
Step8: We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening
Step9: Visualize this on <a href="http
Step10: Notice that the last line only runs once, after the loop is completed. Also notice the use of newlines ("\n") to make the output easier to read. Run this code on <a href="http
Step11: To enumerate a list, you need to add an index variable to hold the current index. So instead of
for dog in dogs
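you write something like the following sketch, which yields both the index and the value:
for index, dog in enumerate(dogs):
    print(str(index) + ': ' + dog)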
Step12: A common looping error
One common looping error occurs when instead of using the single variable dog inside the loop, we accidentally use the variable that holds the entire list
Step13: In this example, instead of printing each dog in the list, we print the entire list every time we go through the loop. Python puts each individual item in the list into the variable dog, but we never use that variable. Sometimes you will just get an error if you try to do this
Step14: <a id="Exercises-loops"></a>
Exercises
First List - Loop
Repeat First List, but this time use a loop to print out each value in the list.
First Neat List - Loop
Repeat First Neat List, but this time use a loop to print out your statements. Make sure you are writing the same sentence for all values in your list. Loops are not effective when you are trying to generate different output for each value in your list.
Your First List - Loop
Repeat Your First List, but this time use a loop to print out your message for each item in your list. Again, if you came up with different messages for each value in your list, decide on one message to repeat for each value in your list.
top
Common List Operations
Modifying elements in a list
You can change the value of any element in a list if you know the position of that item.
Step15: Finding an element in a list
If you want to find out the position of an element in a list, you can use the index() function.
Step16: This method returns a ValueError if the requested item is not in the list.
Step17: Testing whether an item is in a list
You can test whether an item is in a list using the "in" keyword. This will become more useful after learning how to use if-else statements.
Step18: Adding items to a list
Appending items to the end of a list
We can add an item to a list using the append() method. This method adds the new item to the end of the list.
Step19: Inserting items into a list
We can also insert items anywhere we want in a list, using the insert() function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. In other words, the index of every item after the new item is increased by one.
Step20: Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error.
Creating an empty list
Now that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.
A common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.
Here is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.
Step21: If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.
Step22: Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows.
Sorting a List
We can sort a list alphabetically, in either order.
Step23: sorted() vs. sort()
Whenever you consider sorting a list, keep in mind that you can not recover the original order. If you want to display a list in sorted order, but preserve the original order, you can use the sorted() function. The sorted() function also accepts the optional reverse=True argument.
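For illustration (the names are made up):
students = ['bernice', 'aaron', 'cody']
print(sorted(students))   # a sorted copy; students itself keeps its order
students.sort()           # sorts in place; the original order is gone
print(students)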
Step24: Reversing a list
We have seen three possible orders for a list
Step25: Note that reverse is permanent, although you could follow up with another call to reverse() and get back the original order of the list.
Sorting a numerical list
All of the sorting functions work for numerical lists as well.
Step26: Finding the length of a list
You can find the length of a list using the len() function.
Step27: There are many situations where you might want to know how many items are in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.
Step28: On a technical note, the len() function returns an integer, which can't be printed directly with strings. We use the str() function to turn the integer into a string so that it prints nicely
Step29: <a id="Exercises-operations"></a>
Exercises
Working List
Make a list that includes four careers, such as 'programmer' and 'truck driver'.
Use the list.index() function to find the index of one career in your list.
Use the in function to show that this career is in your list.
Use the append() function to add a new career to your list.
Use the insert() function to add a new career at the beginning of the list.
Use a loop to show all the careers in your list.
Starting From Empty
Create the list you ended up with in Working List, but this time start your file with an empty list and fill it up using append() statements.
Print a statement that tells us what the first career you thought of was.
Print a statement that tells us what the last career you thought of was.
Ordered Working List
Start with the list you created in Working List.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the list in its original order.
Print the list in alphabetical order.
Print the list in its original order.
Print the list in reverse alphabetical order.
Print the list in its original order.
Print the list in the reverse order from what it started.
Print the list in its original order
Permanently sort the list in alphabetical order, and then print it out.
Permanently sort the list in reverse alphabetical order, and then print it out.
Ordered Numbers
Make a list of 5 numbers, in a random order.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the numbers in the original order.
Print the numbers in increasing order.
Print the numbers in the original order.
Print the numbers in decreasing order.
Print the numbers in their original order.
Print the numbers in the reverse order from how they started.
Print the numbers in the original order.
Permanently sort the numbers in increasing order, and then print them out.
Permanently sort the numbers in decreasing order, and then print them out.
List Lengths
Copy two or three of the lists you made from the previous exercises, or make up two or three new lists.
Print out a series of statements that tell us how long each list is.
top
Removing Items from a List
Hopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. You can remove items from a list through their position, or through their value.
Removing items by position
If you know the position of an item in a list, you can remove that item using the del command. To use this approach, give the command del and the name of your list, with the index of the item you want to remove in square brackets
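A quick illustration:
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
del dogs[0]
print(dogs)   # ['australian cattle dog', 'labrador retriever']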
Step30: Removing items by value
You can also remove an item from a list if you know its value. To do this, we use the remove() function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.
Step31: Be careful to note, however, that only the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.
Step32: Popping items from a list
There is a cool concept in programming called "popping" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as queues, and there are various ways of processing the items in a queue.
One simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The pop() function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. This is easier to show with an example
Step33: This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about while loops.
You can actually pop any item you want from a list, by giving the index of the item you want to pop. So we could do a first-in, first-out approach by popping the first item in the list
Step34: <a id="Exercises-removing"></a>
Exercises
Famous People
Make a list that includes the names of four famous people.
Remove each person from the list, one at a time, using each of the four methods we have just seen
Step35: If you want to grab everything up to a certain position in the list, you can also leave the first index blank
Step36: When we grab a slice from a list, the original list is not affected
Step37: We can get any segment of a list we want, using the slice method
Step38: To get all items from one position in the list to the end of the list, we can leave off the second index
Step39: Copying a list
You can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.
Step40: <a id="Exercises-slicing"></a>
Exercises
Alphabet Slices
Store the first ten letters of the alphabet in a list.
Use a slice to print out the first three letters of the alphabet.
Use a slice to print out any three letters from the middle of your list.
Use a slice to print out the letters from any point in the middle of your list, to the end.
Protected List
Your goal in this exercise is to prove that copying a list protects the original list.
Make a list with three people's names in it.
Use a slice to make a copy of the entire list.
Add at least two new names to the new copy of the list.
Make a loop that prints out all of the names in the original list, along with a message that this is the original list.
Make a loop that prints out all of the names in the copied list, along with a message that this is the copied list.
top
Numerical Lists
There is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.
Step41: The range() function
This works, but it is not very efficient if we want to work with a large set of numbers. The range() function helps us generate long lists of numbers. Here are two ways to do the same thing, using the range function.
Step42: The range function takes in a starting number, and an end number. You get all integers, up to but not including the end number. You can also add a step value, which tells the range function how big of a step to take between numbers
Step43: If we want to store these numbers in a list, we can use the list() function. This function takes in a range, and turns it into a list
Step44: This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.
Step45: There are two things here that might be a little unclear. The expression
str(len(numbers))
takes the length of the numbers list, and turns it into a string that can be printed.
The expression
numbers[-10
Step46: <a id="Exercises-numerical"></a>
Exercises
First Twenty
Use the range() function to store the first twenty numbers (1-20) in a list, and print them out.
Larger Sets
Take the first_twenty.py program you just wrote. Change your end number to a much larger number. How long does it take your computer to print out the first million numbers? (Most people will never see a million numbers scroll before their eyes. You can now see this!)
Five Wallets
Imagine five wallets with different amounts of cash in them. Store these five values in a list, and print out the following sentences
Step47: This should make sense at this point. If it doesn't, go over the code with these thoughts in mind
Step48: List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like
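In sketch form, a comprehension that builds the first ten squares looks like this (the notebook's own cell follows):
squares = [number**2 for number in range(1, 11)]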
Step49: It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line
Step50: Here's how we might think of doing the same thing, using a list comprehension
Step51: Non-numerical comprehensions
We can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. Here is a simple example, without using comprehensions
Step52: To use a comprehension in this code, we want to write something like this
Step53: <a id="Exercises-comprehensions"></a>
Exercises
If these examples are making sense, go ahead and try to do the following exercises using comprehensions. If not, try the exercises without comprehensions. You may figure out how to use comprehensions after you have solved each exercise the longer way.
Multiples of Ten
Make a list of the first ten multiples of ten (10, 20, 30... 90, 100). There are a number of ways to do this, but try to do it using a list comprehension. Print out your list.
Cubes
We saw how to make a list of the first ten squares. Make a list of the first ten cubes (1, 8, 27... 1000) using a list comprehension, and print them out.
Awesomeness
Store five names in a list. Make a second list that adds the phrase "is awesome!" to each name, using a list comprehension. Print out the awesome version of the names.
Working Backwards
Write out the following code without using a list comprehension
Step54: We can create a list from a string. The list will have one element for each character in the string
Step55: Slicing strings
We can access any character in a string by its position, just as we access individual items in a list
Step56: We can extend this to take slices of a string
Step57: Finding substrings
Now that you have seen what indexes mean for strings, we can search for substrings. A substring is a series of characters that appears in a string.
You can use the in keyword to find out whether a particular substring appears in a string
Step58: If you want to know where a substring appears in a string, you can use the find() method. The find() method tells you the index at which the substring begins.
Step59: Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the other substrings.
Step60: If you want to find the last appearance of a substring, you can use the rfind() function
Step61: Replacing substrings
You can use the replace() function to replace any substring with another substring. To use the replace() function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.
Step62: Counting substrings
If you want to know how many times a substring appears within a string, you can use the count() method.
Step63: Splitting strings
Strings can be split into a set of substrings when they are separated by a repeated character. If a string consists of a simple sentence, the string can be split based on spaces. The split() function returns a list of substrings. The split() function takes one argument, the character that separates the parts of the string.
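A sketch of what that looks like:
sentence = "Python is fun, and lists are powerful."
words = sentence.split(' ')
print(words)   # ['Python', 'is', 'fun,', 'and', 'lists', 'are', 'powerful.']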
Step64: Notice that the punctuation is left in the substrings.
It is more common to split strings that are really lists, separated by something like a comma. The split() function gives you an easy way to turn comma-separated strings, which you can't do much with in Python, into lists. Once you have your data in a list, you can work with it in much more powerful ways.
Step65: Notice that in this case, the spaces are also ignored. It is a good idea to test the output of the split() function and make sure it is doing what you want with the data you are interested in.
One use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. You can then process your spreadsheet data using a for loop.
Other string methods
There are a number of other string methods that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.
<a id="Exercises-strings-as-lists"></a>
Exercises
Listing a Sentence
Store a single sentence in a variable. Use a for loop to print each character from your sentence on a separate line.
Sentence List
Store a single sentence in a variable. Create a list from your sentence. Print your raw list (don't use a loop, just print the list).
Sentence Slices
Store a sentence in a variable. Using slices, print out the first five characters, any five consecutive characters from the middle of the sentence, and the last five characters of the sentence.
Finding Python
Store a sentence in a variable, making sure you use the word Python at least twice in the sentence.
Use the in keyword to prove that the word Python is actually in the sentence.
Use the find() function to show where the word Python first appears in the sentence.
Use the rfind() function to show the last place Python appears in the sentence.
Use the count() function to show how many times the word Python appears in your sentence.
Use the split() function to break your sentence into a list of words. Print the raw list, and use a loop to print each word on its own line.
Use the replace() function to change Python to Ruby in your sentence.
<a id="Challenges-strings-as-lists"></a>
Challenges
Counting DNA Nucleotides
Project Rosalind is a problem set based on biotechnology concepts. It is meant to show how programming skills can help solve problems in genetics and biology.
If you have understood this section on strings, you have enough information to solve the first problem in Project Rosalind, Counting DNA Nucleotides. Give the sample problem a try.
If you get the sample problem correct, log in and try the full version of the problem!
Transcribing DNA into RNA
You also have enough information to try the second problem, Transcribing DNA into RNA. Solve the sample problem.
If you solved the sample problem, log in and try the full version!
Complementing a Strand of DNA
You guessed it, you can now try the third problem as well
Step66: If you try to add something to a tuple, you will get an error
Step67: The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements. Once you define a tuple, you can be confident that its values will not change.
Using tuples to make strings
We have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following
Step68: This was especially useful when we had a series of similar statements to make
Step69: I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using placeholders.
Python ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as "\t" and "\n". Python also pays attention to "%s" and "%d". These are placeholders. When Python sees the "%s" placeholder, it looks ahead and pulls in the first argument after the % sign
Step70: This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.
This is called string formatting, and it looks the same when you use a list
Step71: If you have more than one value to put into the string you are composing, you have to pack the values into a tuple
Step72: String formatting with numbers
If you recall, adding a number directly to a string causes an error
Step73: Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by casting the number into a string using the str() function
Step74: The format string "%d" takes care of this for us. Watch how clean this code is
Step75: If you want to use a series of numbers, you pack them into a tuple just like we saw with strings
Step76: Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting
Step77: You can mix string and numerical placeholders in any order you want. | Python Code:
students = ['bernice', 'aaron', 'cody']
for student in students:
print("Hello, " + student.title() + "!")
Explanation: Lists and Tuples
In this notebook, you will learn to store more than one value in a single variable. This by itself is one of the most powerful ideas in programming, and it introduces a number of other central concepts such as loops. If this section ends up making sense to you, you will be able to start writing some interesting programs, and you can be more confident that you will be able to develop overall competence as a programmer.
Previous: Variables, Strings, and Numbers |
Home |
Next: Introducing Functions
Contents
Lists
Introducing Lists
Example
Naming and defining a list
Accessing one item in a list
Exercises
Lists and Looping
Accessing all elements in a list
Enumerating a list
Exercises
Common List Operations
Modifying elements in a list
Finding an element in a list
Testing whether an element is in a list
Adding items to a list
Creating an empty list
Sorting a list
Finding the length of a list
Exercises
Removing Items from a List
Removing items by position
Removing items by value
Popping items
Exercises
Want to see what functions are?
Slicing a List
Copying a list
Exercises
Numerical Lists
The range() function
The min(), max(), sum() functions
Exercises
List Comprehensions
Numerical comprehensions
Non-numerical comprehensions
Exercises
Strings as Lists
Strings as a list of characters
Slicing strings
Finding substrings
Replacing substrings
Counting substrings
Splitting strings
Other string methods
Exercises
Challenges
Tuples
Defining tuples, and accessing elements
Using tuples to make strings
Exercises
Coding Style: PEP 8
Why have style conventions?
What is a PEP?
Basic Python style guidelines
Exercises
Overall Challenges
Lists
Introducing Lists
Example
A list is a collection of items, that is stored in a variable. The items should be related in some way, but there are no restrictions on what can be stored in a list. Here is a simple example of a list, and how we can quickly access each item in the list.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
Explanation: Naming and defining a list
Since lists are collections of objects, it is good practice to give them a plural name. If each item in your list is a car, call the list 'cars'. If each item is a dog, call your list 'dogs'. This gives you a straightforward way to refer to the entire list ('dogs'), and to a single item in the list ('dog').
In Python, square brackets designate a list. To define a list, you give the name of the list, the equals sign, and the values you want to include in your list within square brackets.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[0]
print(dog.title())
Explanation: Accessing one item in a list
Items in a list are identified by their position in the list, starting with zero. This will almost certainly trip you up at some point. Programmers even joke about how often we all make "off-by-one" errors, so don't feel bad when you make this kind of error.
To access the first element in a list, you give the name of the list, followed by a zero in square brackets.
End of explanation
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[1]
print(dog.title())
Explanation: The number in square brackets is called the index of the item. Because lists start at zero, the index of an item is always one less than its position in the list. So to get the second item in the list, we need to use an index of 1.
End of explanation
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-1]
print(dog.title())
Explanation: Accessing the last items in a list
You can probably see that to get the last item in this list, we would use an index of 2. This works, but it would only work because our list has exactly three items. To get the last item in a list, no matter how long the list is, you can use an index of -1.
End of explanation
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-2]
print(dog.title())
Explanation: This syntax also works for the second to last item, the third to last, and so forth.
End of explanation
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-4]
print(dog.title())
Explanation: You can't use a negative number larger than the length of the list, however.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print(dog)
Explanation: top
<a id="Exercises-lists"></a>
Exercises
First List
Store the values 'python', 'c', and 'java' in a list. Print each of these values out, using their position in the list.
First Neat List
Store the values 'python', 'c', and 'java' in a list. Print a statement about each of these values, using their position in the list.
Your statement could simply be, 'A nice programming language is value.'
Your First List
Think of something you can store in a list. Make a list with three or four items, and then print a message that includes at least one item from your list. Your sentence could be as simple as, "One item in my list is a ____."
top
Lists and Looping
Accessing all elements in a list
This is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.
We use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.
Let's take a look at how we access all the items in a list, and then try to understand how it works.
End of explanation
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dog + 's.')
Explanation: We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening:
for dog in dogs:
The keyword "for" tells Python to get ready to use a loop.
The variable "dog", with no "s" on it, is a temporary placeholder variable. This is the variable that Python will place each item in the list into, one at a time.
The first time through the loop, the value of "dog" will be 'border collie'.
The second time through the loop, the value of "dog" will be 'australian cattle dog'.
The third time through, "dog" will be 'labrador retriever'.
After this, there are no more items in the list, and the loop will end.
The site <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print(dog)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor.com</a> allows you to run Python code one line at a time. As you run the code, there is also a visualization on the screen that shows you how the variable "dog" holds different values as the loop progresses. There is also an arrow that moves around your code, showing you how some lines are run just once, while other lines are run multiple times. If you would like to see this in action, click the Forward button and watch the visualization, and the output as it is printed to the screen. Tools like this are incredibly valuable for seeing what Python is doing with your code.
Doing more with each item
We can do whatever we want with the value of "dog" inside the loop. In this case, we just print the name of the dog.
print(dog)
We are not limited to just printing the word dog. We can do whatever we want with this value, and this action will be carried out for every item in the list. Let's say something about each dog in our list.
End of explanation
###highlight=[6,7,8]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dog + 's.')
print('No, I really really like ' + dog +'s!\n')
print("\nThat's just how I feel about dogs.")
Explanation: Visualize this on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
Inside and outside the loop
Python uses indentation to decide what is inside the loop and what is outside the loop. Code that is inside the loop will be run for every item in the list. Code that is not indented, which comes after the loop, will be run once just like regular code.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index)
print("Place: " + place + " Dog: " + dog.title())
Explanation: Notice that the last line only runs once, after the loop is completed. Also notice the use of newlines ("\n") to make the output easier to read. Run this code on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')%0A++++print('No,+I+really+really+like+'+%2B+dog+%2B's!%5Cn')%0A++++%0Aprint(%22%5CnThat's+just+how+I+feel+about+dogs.%22)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
top
Enumerating a list
When you are looping through a list, you may want to know the index of the current item. You could always use the list.index(value) syntax, but there is a simpler way. The enumerate() function tracks the index of each item for you, as it loops through the list:
End of explanation
###highlight=[6]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index + 1)
print("Place: " + place + " Dog: " + dog.title())
Explanation: To enumerate a list, you need to add an index variable to hold the current index. So instead of
for dog in dogs:
You have
for index, dog in enumerate(dogs)
The value in the variable index is always an integer. If you want to print it in a string, you have to turn the integer into a string:
str(index)
The index always starts at 0, so in this example the value of place should actually be the current index, plus one:
End of explanation
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print(dogs)
Explanation: A common looping error
One common looping error occurs when instead of using the single variable dog inside the loop, we accidentally use the variable that holds the entire list:
End of explanation
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dogs + 's.')
Explanation: In this example, instead of printing each dog in the list, we print the entire list every time we go through the loop. Python puts each individual item in the list into the variable dog, but we never use that variable. Sometimes you will just get an error if you try to do this:
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs[0] = 'australian shepherd'
print(dogs)
Explanation: <a id="Exercises-loops"></a>
Exercises
First List - Loop
Repeat First List, but this time use a loop to print out each value in the list.
First Neat List - Loop
Repeat First Neat List, but this time use a loop to print out your statements. Make sure you are writing the same sentence for all values in your list. Loops are not effective when you are trying to generate different output for each value in your list.
Your First List - Loop
Repeat Your First List, but this time use a loop to print out your message for each item in your list. Again, if you came up with different messages for each value in your list, decide on one message to repeat for each value in your list.
top
Common List Operations
Modifying elements in a list
You can change the value of any element in a list if you know the position of that item.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('australian cattle dog'))
Explanation: Finding an element in a list
If you want to find out the position of an element in a list, you can use the index() function.
End of explanation
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('poodle'))
Explanation: This method raises a ValueError if the requested item is not in the list.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print('australian cattle dog' in dogs)
print('poodle' in dogs)
Explanation: Testing whether an item is in a list
You can test whether an item is in a list using the "in" keyword. This will become more useful after learning how to use if-else statements.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.append('poodle')
for dog in dogs:
print(dog.title() + "s are cool.")
Explanation: Adding items to a list
Appending items to the end of a list
We can add an item to a list using the append() method. This method adds the new item to the end of the list.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.insert(1, 'poodle')
print(dogs)
Explanation: Inserting items into a list
We can also insert items anywhere we want in a list, using the insert() function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. In other words, the index of every item after the new item is increased by one.
End of explanation
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
Explanation: Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error.
Creating an empty list
Now that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.
A common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.
Here is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.
End of explanation
###highlight=[10,11,12]
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
# Recognize our first user, and welcome our newest user.
print("\nThank you for being our very first user, " + usernames[0].title() + '!')
print("And a warm welcome to our newest user, " + usernames[-1].title() + '!')
Explanation: If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.
End of explanation
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
print(student.title())
#Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
print(student.title())
Explanation: Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows.
Sorting a List
We can sort a list alphabetically, in either order.
End of explanation
students = ['bernice', 'aaron', 'cody']
# Display students in alphabetical order, but keep the original order.
print("Here is the list in alphabetical order:")
for student in sorted(students):
print(student.title())
# Display students in reverse alphabetical order, but keep the original order.
print("\nHere is the list in reverse alphabetical order:")
for student in sorted(students, reverse=True):
print(student.title())
print("\nHere is the list in its original order:")
# Show that the list is still in its original order.
for student in students:
print(student.title())
Explanation: sorted() vs. sort()
Whenever you consider sorting a list, keep in mind that you can not recover the original order. If you want to display a list in sorted order, but preserve the original order, you can use the sorted() function. The sorted() function also accepts the optional reverse=True argument.
End of explanation
students = ['bernice', 'aaron', 'cody']
students.reverse()
print(students)
Explanation: Reversing a list
We have seen three possible orders for a list:
- The original order in which the list was created
- Alphabetical order
- Reverse alphabetical order
There is one more order we can use, and that is the reverse of the original order of the list. The reverse() function gives us this order.
End of explanation
numbers = [1, 3, 4, 2]
# sort() puts numbers in increasing order.
numbers.sort()
print(numbers)
# sort(reverse=True) puts numbers in decreasing order.
numbers.sort(reverse=True)
print(numbers)
numbers = [1, 3, 4, 2]
# sorted() preserves the original order of the list:
print(sorted(numbers))
print(numbers)
numbers = [1, 3, 4, 2]
# The reverse() function also works for numerical lists.
numbers.reverse()
print(numbers)
Explanation: Note that reverse is permanent, although you could follow up with another call to reverse() and get back the original order of the list.
Sorting a numerical list
All of the sorting functions work for numerical lists as well.
End of explanation
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print(user_count)
Explanation: Finding the length of a list
You can find the length of a list using the len() function.
End of explanation
# Create an empty list to hold our users.
usernames = []
# Add some users, and report on how many users we have.
usernames.append('bernice')
user_count = len(usernames)
print("We have " + str(user_count) + " user!")
usernames.append('cody')
usernames.append('aaron')
user_count = len(usernames)
print("We have " + str(user_count) + " users!")
Explanation: There are many situations where you might want to know how many items are in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.
End of explanation
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will cause an error: " + user_count)
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will work: " + str(user_count))
Explanation: On a technical note, the len() function returns an integer, which can't be combined directly with strings using the plus sign. We use the str() function to turn the integer into a string so that it prints nicely:
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove the first dog from the list.
del dogs[0]
print(dogs)
Explanation: <a id="Exercises-operations"></a>
Exercises
Working List
Make a list that includes four careers, such as 'programmer' and 'truck driver'.
Use the list.index() function to find the index of one career in your list.
Use the in function to show that this career is in your list.
Use the append() function to add a new career to your list.
Use the insert() function to add a new career at the beginning of the list.
Use a loop to show all the careers in your list.
Starting From Empty
Create the list you ended up with in Working List, but this time start your file with an empty list and fill it up using append() statements.
Print a statement that tells us what the first career you thought of was.
Print a statement that tells us what the last career you thought of was.
Ordered Working List
Start with the list you created in Working List.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the list in its original order.
Print the list in alphabetical order.
Print the list in its original order.
Print the list in reverse alphabetical order.
Print the list in its original order.
Print the list in the reverse order from what it started.
Print the list in its original order
Permanently sort the list in alphabetical order, and then print it out.
Permanently sort the list in reverse alphabetical order, and then print it out.
Ordered Numbers
Make a list of 5 numbers, in a random order.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the numbers in the original order.
Print the numbers in increasing order.
Print the numbers in the original order.
Print the numbers in decreasing order.
Print the numbers in their original order.
Print the numbers in the reverse order from how they started.
Print the numbers in the original order.
Permanently sort the numbers in increasing order, and then print them out.
Permanently sort the numbers in decreasing order, and then print them out.
List Lengths
Copy two or three of the lists you made from the previous exercises, or make up two or three new lists.
Print out a series of statements that tell us how long each list is.
top
Removing Items from a List
Hopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. You can remove items from a list through their position, or through their value.
Removing items by position
If you know the position of an item in a list, you can remove that item using the del command. To use this approach, give the command del and the name of your list, with the index of the item you want to remove in square brackets:
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove australian cattle dog from the list.
dogs.remove('australian cattle dog')
print(dogs)
Explanation: Removing items by value
You can also remove an item from a list if you know its value. To do this, we use the remove() function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.
End of explanation
letters = ['a', 'b', 'c', 'a', 'b', 'c']
# Remove the letter a from the list.
letters.remove('a')
print(letters)
Explanation: Be careful to note, however, that only the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.
End of explanation
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
last_dog = dogs.pop()
print(last_dog)
print(dogs)
Explanation: Popping items from a list
There is a cool concept in programming called "popping" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as queues or stacks, and there are various ways of processing the items they hold.
One simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The pop() function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. This is easier to show with an example:
End of explanation
###highlight=[3]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
first_dog = dogs.pop(0)
print(first_dog)
print(dogs)
Explanation: This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about while loops.
You can actually pop any item you want from a list, by giving the index of the item you want to pop. So we could do a first-in, first-out approach by popping the first item in the list:
End of explanation
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
for user in first_batch:
print(user.title())
Explanation: <a id="Exercises-removing"></a>
Exercises
Famous People
Make a list that includes the names of four famous people.
Remove each person from the list, one at a time, using each of the four methods we have just seen:
Pop the last item from the list, and pop any item except the last item.
Remove one item by its position, and one item by its value.
Print out a message that there are no famous people left in your list, and print your list to prove that it is empty.
top
Want to see what functions are?
At this point, you might have noticed we have a fair bit of repetitive code in some of our examples. This repetition will disappear once we learn how to use functions. If this repetition is bothering you already, you might want to go look at Introducing Functions before you do any more exercises in this section.
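As a quick preview (just a sketch; the function name greet_students is made up here, and functions are explained properly in Introducing Functions), a function lets you write the greeting loop once and then reuse it for any list of names:
def greet_students(students):
    # Print a greeting for every student in the list.
    for student in students:
        print("Hello, " + student.title() + "!")

greet_students(['bernice', 'aaron', 'cody'])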
Slicing a List
Since a list is a collection of items, we should be able to get any subset of those items. For example, if we want to get just the first three items from the list, we should be able to do so easily. The same should be true for any three items in the middle of the list, or the last three items, or any x items from anywhere in the list. These subsets of a list are called slices.
To get a subset of a list, we give the position of the first item we want, and the position of the first item we do not want to include in the subset. So the slice list[0:3] will return a list containing items 0, 1, and 2, but not item 3. Here is how you get a batch containing the first three items.
End of explanation
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[:3]
for user in first_batch:
print(user.title())
Explanation: If you want to grab everything up to a certain position in the list, you can also leave the first index blank:
End of explanation
###highlight=[7,8,9]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
# The original list is unaffected.
for user in usernames:
print(user.title())
Explanation: When we grab a slice from a list, the original list is not affected:
End of explanation
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab a batch from the middle of the list.
middle_batch = usernames[1:4]
for user in middle_batch:
print(user.title())
Explanation: We can get any segment of a list we want, using the slice method:
End of explanation
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab all users from the third to the end.
end_batch = usernames[2:]
for user in end_batch:
print(user.title())
Explanation: To get all items from one position in the list to the end of the list, we can leave off the second index:
End of explanation
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Make a copy of the list.
copied_usernames = usernames[:]
print("The full copied list:\n\t", copied_usernames)
# Remove the first two users from the copied list.
del copied_usernames[0]
del copied_usernames[0]
print("\nTwo users removed from copied list:\n\t", copied_usernames)
# The original list is unaffected.
print("\nThe original list:\n\t", usernames)
Explanation: Copying a list
You can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.
End of explanation
# Print out the first ten numbers.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for number in numbers:
print(number)
Explanation: <a id="Exercises-slicing"></a>
Exercises
Alphabet Slices
Store the first ten letters of the alphabet in a list.
Use a slice to print out the first three letters of the alphabet.
Use a slice to print out any three letters from the middle of your list.
Use a slice to print out the letters from any point in the middle of your list, to the end.
Protected List
Your goal in this exercise is to prove that copying a list protects the original list.
Make a list with three people's names in it.
Use a slice to make a copy of the entire list.
Add at least two new names to the new copy of the list.
Make a loop that prints out all of the names in the original list, along with a message that this is the original list.
Make a loop that prints out all of the names in the copied list, along with a message that this is the copied list.
top
Numerical Lists
There is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.
End of explanation
# Print the first ten numbers.
for number in range(1,11):
print(number)
Explanation: The range() function
This works, but it is not very efficient if we want to work with a large set of numbers. The range() function helps us generate long lists of numbers. Here are two ways to do the same thing, using the range function.
End of explanation
# Print the first ten odd numbers.
for number in range(1,21,2):
print(number)
Explanation: The range function takes in a starting number, and an end number. You get all integers, up to but not including the end number. You can also add a step value, which tells the range function how big of a step to take between numbers:
End of explanation
# Create a list of the first ten numbers.
numbers = list(range(1,11))
print(numbers)
Explanation: If we want to store these numbers in a list, we can use the list() function. This function takes in a range, and turns it into a list:
End of explanation
# Store the first million numbers in a list.
numbers = list(range(1,1000001))
# Show the length of the list:
print("The list 'numbers' has " + str(len(numbers)) + " numbers in it.")
# Show the last ten numbers:
print("\nThe last ten numbers in the list are:")
for number in numbers[-10:]:
print(number)
Explanation: This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.
End of explanation
ages = [23, 16, 14, 28, 19, 11, 38]
youngest = min(ages)
oldest = max(ages)
total_years = sum(ages)
print("Our youngest reader is " + str(youngest) + " years old.")
print("Our oldest reader is " + str(oldest) + " years old.")
print("Together, we have " + str(total_years) + " years worth of life experience.")
Explanation: There are two things here that might be a little unclear. The expression
str(len(numbers))
takes the length of the numbers list, and turns it into a string that can be printed.
The expression
numbers[-10:]
gives us a slice of the list. The index -1 is the last item in the list, and the index -10 is the item ten places from the end of the list. So the slice numbers[-10:] gives us everything from that item to the end of the list.
The min(), max(), and sum() functions
There are three functions you can easily use with numerical lists. As you might expect, the min() function returns the smallest number in the list, the max() function returns the largest number in the list, and the sum() function returns the total of all numbers in the list.
End of explanation
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
new_square = number**2
squares.append(new_square)
# Show that our list is correct.
for square in squares:
print(square)
Explanation: <a id="Exercises-numerical"></a>
Exercises
First Twenty
Use the range() function to store the first twenty numbers (1-20) in a list, and print them out.
Larger Sets
Take the first_twenty.py program you just wrote. Change your end number to a much larger number. How long does it take your computer to print out the first million numbers? (Most people will never see a million numbers scroll before their eyes. You can now see this!)
Five Wallets
Imagine five wallets with different amounts of cash in them. Store these five values in a list, and print out the following sentences:
"The fattest wallet has $ value in it."
"The skinniest wallet has $ value in it."
"All together, these wallets have $ value in them."
top
List Comprehensions
I thought carefully before including this section. If you are brand new to programming, list comprehensions may look confusing at first. They are a shorthand way of creating and working with lists. It is good to be aware of list comprehensions, because you will see them in other people's code, and they are really useful when you understand how to use them. That said, if they don't make sense to you yet, don't worry about using them right away. When you have worked with enough lists, you will want to use comprehensions. For now, it is good enough to know they exist, and to recognize them when you see them. If you like them, go ahead and start trying to use them now.
Numerical Comprehensions
Let's consider how we might make a list of the first ten square numbers. We could do it like this:
End of explanation
###highlight=[8]
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
squares.append(number**2)
# Show that our list is correct.
for square in squares:
print(square)
Explanation: This should make sense at this point. If it doesn't, go over the code with these thoughts in mind:
- We make an empty list called squares that will hold the values we are interested in.
- Using the range() function, we start a loop that will go through the numbers 1-10.
- Each time we pass through the loop, we find the square of the current number by raising it to the second power.
- We add this new value to our list squares.
- We go through our newly-defined list and print out each square.
Now let's make this code more efficient. We don't really need to store the new square in its own variable new_square; we can just add it directly to the list of squares. The line
new_square = number**2
is taken out, and the next line takes care of the squaring:
End of explanation
###highlight=[2,3]
# Store the first ten square numbers in a list.
squares = [number**2 for number in range(1,11)]
# Show that our list is correct.
for square in squares:
print(square)
Explanation: List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like:
End of explanation
# Make an empty list that will hold the even numbers.
evens = []
# Loop through the numbers 1-10, double each one, and add it to our list.
for number in range(1,11):
evens.append(number*2)
# Show that our list is correct:
for even in evens:
print(even)
Explanation: It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line:
We define a list called squares.
Look at the second part of what's in square brackets:
for number in range(1,11)
This sets up a loop that goes through the numbers 1-10, storing each value in the variable number. Now we can see what happens to each number in the loop:
number**2
Each number is raised to the second power, and this is the value that is stored in the list we defined. We might read this line in the following way:
squares = [raise number to the second power, for each number in the range 1-10]
Another example
It is probably helpful to see a few more examples of how comprehensions can be used. Let's try to make the first ten even numbers, the longer way:
End of explanation
###highlight=[2,3]
# Make a list of the first ten even numbers.
evens = [number*2 for number in range(1,11)]
for even in evens:
print(even)
Explanation: Here's how we might think of doing the same thing, using a list comprehension:
evens = [multiply each number by 2, for each number in the range 1-10]
Here is the same line in code:
End of explanation
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = []
for student in students:
great_students.append(student.title() + " the great!")
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
Explanation: Non-numerical comprehensions
We can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. Here is a simple example, without using comprehensions:
End of explanation
###highlight=[5,6]
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = [student.title() + " the great!" for student in students]
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
Explanation: To use a comprehension in this code, we want to write something like this:
great_students = [add 'the great' to each student, for each student in the list of students]
Here's what it looks like:
End of explanation
message = "Hello!"
for letter in message:
print(letter)
Explanation: <a id="Exercises-comprehensions"></a>
Exercises
If these examples are making sense, go ahead and try to do the following exercises using comprehensions. If not, try the exercises without comprehensions. You may figure out how to use comprehensions after you have solved each exercise the longer way.
Multiples of Ten
Make a list of the first ten multiples of ten (10, 20, 30... 90, 100). There are a number of ways to do this, but try to do it using a list comprehension. Print out your list.
Cubes
We saw how to make a list of the first ten squares. Make a list of the first ten cubes (1, 8, 27... 1000) using a list comprehension, and print them out.
Awesomeness
Store five names in a list. Make a second list that adds the phrase "is awesome!" to each name, using a list comprehension. Print out the awesome version of the names.
Working Backwards
Write out the following code without using a list comprehension:
plus_thirteen = [number + 13 for number in range(1,11)]
top
Strings as Lists
Now that you have some familiarity with lists, we can take a second look at strings. A string is really a list of characters, so many of the concepts from working with lists behave the same with strings.
Strings as a list of characters
We can loop through a string using a for loop, just like we loop through a list:
End of explanation
message = "Hello world!"
message_list = list(message)
print(message_list)
Explanation: We can create a list from a string. The list will have one element for each character in the string:
End of explanation
message = "Hello World!"
first_char = message[0]
last_char = message[-1]
print(first_char, last_char)
Explanation: Slicing strings
We can access any character in a string by its position, just as we access individual items in a list:
End of explanation
message = "Hello World!"
first_three = message[:3]
last_three = message[-3:]
print(first_three, last_three)
Explanation: We can extend this to take slices of a string:
End of explanation
message = "I like cats and dogs."
dog_present = 'dog' in message
print(dog_present)
Explanation: Finding substrings
Now that you have seen what indexes mean for strings, we can search for substrings. A substring is a series of characters that appears in a string.
You can use the in keyword to find out whether a particular substring appears in a string:
End of explanation
message = "I like cats and dogs."
dog_index = message.find('dog')
print(dog_index)
Explanation: If you want to know where a substring appears in a string, you can use the find() method. The find() method tells you the index at which the substring begins.
End of explanation
###highlight=[2]
message = "I like cats and dogs, but I'd much rather own a dog."
dog_index = message.find('dog')
print(dog_index)
Explanation: Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the other substrings.
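If you do need every occurrence, one possible approach (a small sketch that uses a while loop, which is covered later) is to keep calling find() with a start index until it returns -1:
message = "I like cats and dogs, but I'd much rather own a dog."
index = message.find('dog')
while index != -1:
    print(index)
    # Keep searching, starting just after the match we found.
    index = message.find('dog', index + 1)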
End of explanation
###highlight=[3,4]
message = "I like cats and dogs, but I'd much rather own a dog."
last_dog_index = message.rfind('dog')
print(last_dog_index)
Explanation: If you want to find the last appearance of a substring, you can use the rfind() function:
End of explanation
message = "I like cats and dogs, but I'd much rather own a dog."
message = message.replace('dog', 'snake')
print(message)
Explanation: Replacing substrings
You can use the replace() function to replace any substring with another substring. To use the replace() function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.
End of explanation
message = "I like cats and dogs, but I'd much rather own a dog."
number_dogs = message.count('dog')
print(number_dogs)
Explanation: Counting substrings
If you want to know how many times a substring appears within a string, you can use the count() method.
End of explanation
message = "I like cats and dogs, but I'd much rather own a dog."
words = message.split(' ')
print(words)
Explanation: Splitting strings
Strings can be split into a set of substrings when they are separated by a repeated character. If a string consists of a simple sentence, the string can be split based on spaces. The split() function returns a list of substrings. The split() function takes one argument, the character that separates the parts of the string.
End of explanation
animals = "dog, cat, tiger, mouse, liger, bear"
# Rewrite the string as a list, and store it in the same variable
animals = animals.split(',')
print(animals)
Explanation: Notice that the punctuation is left in the substrings.
It is more common to split strings that are really lists, separated by something like a comma. The split() function gives you an easy way to turn comma-separated strings, which you can't do much with in Python, into lists. Once you have your data in a list, you can work with it in much more powerful ways.
End of explanation
colors = ('red', 'green', 'blue')
print("The first color is: " + colors[0])
print("\nThe available colors are:")
for color in colors:
print("- " + color)
Explanation: Notice that in this case, the space after each comma is kept at the beginning of the following item, such as ' cat'. You can strip these extra spaces off, or split on ', ' instead. It is a good idea to test the output of the split() function and make sure it is doing what you want with the data you are interested in.
One use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. You can then process your spreadsheet data using a for loop.
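Here is a small sketch of that idea, using a made-up row of comma-separated data pasted in as a string:
# One row of spreadsheet data, pasted in as a comma-separated string.
row = "bernice,82,91,78"
values = row.split(',')
# The first value is a name; the rest are scores.
name = values[0]
scores = values[1:]
print(name.title() + "'s scores:")
for score in scores:
    print("- " + score)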
Other string methods
There are a number of other string methods that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.
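To give you a taste, here are a few of those methods in action (just a small sampler, not a complete list):
greeting = "   Hello, Python World!   "
# strip() removes whitespace from both ends of a string.
print(greeting.strip())
# lower() and upper() change the case of every character.
print(greeting.lower())
print(greeting.upper())
# startswith() tests whether a string begins with a given substring.
print(greeting.strip().startswith('Hello'))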
<a id="Exercises-strings-as-lists"></a>
Exercises
Listing a Sentence
Store a single sentence in a variable. Use a for loop to print each character from your sentence on a separate line.
Sentence List
Store a single sentence in a variable. Create a list from your sentence. Print your raw list (don't use a loop, just print the list).
Sentence Slices
Store a sentence in a variable. Using slices, print out the first five characters, any five consecutive characters from the middle of the sentence, and the last five characters of the sentence.
Finding Python
Store a sentence in a variable, making sure you use the word Python at least twice in the sentence.
Use the in keyword to prove that the word Python is actually in the sentence.
Use the find() function to show where the word Python first appears in the sentence.
Use the rfind() function to show the last place Python appears in the sentence.
Use the count() function to show how many times the word Python appears in your sentence.
Use the split() function to break your sentence into a list of words. Print the raw list, and use a loop to print each word on its own line.
Use the replace() function to change Python to Ruby in your sentence.
<a id="Challenges-strings-as-lists"></a>
Challenges
Counting DNA Nucleotides
Project Rosalind is a problem set based on biotechnology concepts. It is meant to show how programming skills can help solve problems in genetics and biology.
If you have understood this section on strings, you have enough information to solve the first problem in Project Rosalind, Counting DNA Nucleotides. Give the sample problem a try.
If you get the sample problem correct, log in and try the full version of the problem!
Transcribing DNA into RNA
You also have enough information to try the second problem, Transcribing DNA into RNA. Solve the sample problem.
If you solved the sample problem, log in and try the full version!
Complementing a Strand of DNA
You guessed it, you can now try the third problem as well: Complementing a Strand of DNA. Try the sample problem, and then try the full version if you are successful.
top
Tuples
Tuples are basically lists that can never be changed. Lists are quite dynamic; they can grow as you append and insert items, and they can shrink as you remove items. You can modify any element you want to in a list. Sometimes we like this behavior, but other times we may want to ensure that no user or no part of a program can change a list. That's what tuples are for.
Technically, lists are mutable objects and tuples are immutable objects. Mutable objects can change (think of mutations), and immutable objects can not change.
Defining tuples, and accessing elements
You define a tuple just like you define a list, except you use parentheses instead of square brackets. Once you have a tuple, you can access individual elements just like you can with a list, and you can loop through the tuple with a for loop:
End of explanation
colors = ('red', 'green', 'blue')
colors.append('purple')
Explanation: If you try to add something to a tuple, you will get an error:
End of explanation
animal = 'dog'
print("I have a " + animal + ".")
Explanation: The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements. Once you define a tuple, you can be confident that its values will not change.
Using tuples to make strings
We have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following:
End of explanation
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a " + animal + ".")
Explanation: This was especially useful when we had a series of similar statements to make:
End of explanation
animal = 'dog'
print("I have a %s." % animal)
Explanation: I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using placeholders.
Python ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as "\t" and "\n". Python also pays attention to "%s" and "%d". These are placeholders. When Python sees the "%s" placeholder, it looks ahead and pulls in the first argument after the % sign:
End of explanation
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a %s." % animal)
Explanation: This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.
This is called string formatting, and it looks the same when you use a list:
End of explanation
animals = ['dog', 'cat', 'bear']
print("I have a %s, a %s, and a %s." % (animals[0], animals[1], animals[2]))
Explanation: If you have more than one value to put into the string you are composing, you have to pack the values into a tuple:
End of explanation
number = 23
print("My favorite number is " + number + ".")
Explanation: String formatting with numbers
If you recall, adding a number directly to a string causes an error:
End of explanation
###highlight=[3]
number = 23
print("My favorite number is " + str(number) + ".")
Explanation: Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by casting the number into a string using the str() function:
End of explanation
###highlight=[3]
number = 23
print("My favorite number is %d." % number)
Explanation: The format string "%d" takes care of this for us. Watch how clean this code is:
End of explanation
numbers = [7, 23, 42]
print("My favorite numbers are %d, %d, and %d." % (numbers[0], numbers[1], numbers[2]))
Explanation: If you want to use a series of numbers, you pack them into a tuple just like we saw with strings:
End of explanation
###highlight=[3]
numbers = [7, 23, 42]
print("My favorite numbers are " + str(numbers[0]) + ", " + str(numbers[1]) + ", and " + str(numbers[2]) + ".")
Explanation: Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting:
End of explanation
names = ['eric', 'ever']
numbers = [23, 2]
print("%s's favorite number is %d, and %s's favorite number is %d." % (names[0].title(), numbers[0], names[1].title(), numbers[1]))
Explanation: You can mix string and numerical placeholders in any order you want.
End of explanation |
13,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This tutorial is generated from a Jupyter notebook that can be found here.
BOLFI
In practice inference problems often have a complicated and computationally heavy simulator, and one simply cannot run it for millions of times. The Bayesian Optimization for Likelihood-Free Inference BOLFI framework is likely to prove useful in such situation
Step1: Although BOLFI is best used with complicated simulators, for demonstration purposes we will use the familiar MA2 model introduced in the basic tutorial, and load it from ready-made examples
Step2: Fitting the surrogate model
Now we can immediately proceed with the inference. However, when dealing with a Gaussian process, it may be beneficial to take a logarithm of the discrepancies in order to reduce the effect that high discrepancies have on the GP. (Sometimes you may want to add a small constant to avoid very negative or even -Inf distances occurring especially if it is likely that there can be exact matches between simulated and observed data.) In ELFI such transformed node can be created easily
Step3: As BOLFI is a more advanced inference method, its interface is also a bit more involved as compared to for example rejection sampling. But not much
Step4: Sometimes you may have some samples readily available. You could then initialize the GP model with a dictionary of previous results by giving initial_evidence=result.outputs.
The BOLFI class can now try to fit the surrogate model (the GP) to the relationship between parameter values and the resulting discrepancies. We'll request only 200 evidence points (including the initial_evidence defined above).
Step5: (More on the returned BolfiPosterior object below.)
Note that in spite of the very few simulator runs, fitting the model took longer than any of the previous methods. Indeed, BOLFI is intended for scenarios where the simulator takes a lot of time to run.
The fitted target_model uses the GPy library, and can be investigated further
Step6: It may be useful to see the acquired parameter values and the resulting discrepancies
Step7: There could be an unnecessarily high number of points at parameter bounds. These could probably be decreased by lowering the covariance of the noise added to acquired points, defined by the optional acq_noise_var argument for the BOLFI constructor. Another possibility could be to add virtual derivative observations at the borders, though not yet implemented in ELFI.
BOLFI Posterior
Above, the fit method returned a BolfiPosterior object representing a BOLFI posterior (please see the paper for details). The fit method accepts a threshold parameter; if none is given, ELFI will use the minimum value of discrepancy estimate mean. Afterwards, one may request for a posterior with a different threshold
Step8: One can visualize a posterior directly (remember that the priors form a triangle)
Step9: Sampling
Finally, samples from the posterior can be acquired with an MCMC sampler. By default it runs 4 chains, and half of the requested samples are spent in adaptation/warmup. Note that depending on the smoothness of the GP approximation, the number of priors, their gradients etc., this may be slow.
Step10: The sampling algorithms may be fine-tuned with some parameters. The default No-U-Turn-Sampler is a sophisticated algorithm, and in some cases one may get warnings about diverged proposals, which are signs that something may be wrong and should be investigated. It is good to understand the cause of these warnings although they don't automatically mean that the results are unreliable. You could try rerunning the sample method with a higher target probability target_prob during adaptation, as its default 0.6 may be inadequate for a non-smooth posteriors, but this will slow down the sampling.
Note also that since MCMC proposals outside the region allowed by either the model priors or GP bounds are rejected, a tight domain may lead to suboptimal overall acceptance ratio. In our MA2 case the prior defines a triangle-shaped uniform support for the posterior, making it a good example of a difficult model for the NUTS algorithm.
Now we finally have a Sample object again, which has several convenience methods
Step11: The black vertical lines indicate the end of warmup, which by default is half of the number of iterations. | Python Code:
import numpy as np
import scipy.stats
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%precision 2
import logging
logging.basicConfig(level=logging.INFO)
# Set an arbitrary global seed to keep the randomly generated quantities the same
seed = 1
np.random.seed(seed)
import elfi
Explanation: This tutorial is generated from a Jupyter notebook that can be found here.
BOLFI
In practice inference problems often have a complicated and computationally heavy simulator, and one simply cannot run it millions of times. The Bayesian Optimization for Likelihood-Free Inference (BOLFI) framework is likely to prove useful in such a situation: a statistical model (usually a Gaussian process, GP) is created for the discrepancy, and its minimum is inferred with Bayesian optimization. This approach typically reduces the number of required simulator calls by several orders of magnitude.
This tutorial demonstrates how to use BOLFI to do LFI in ELFI.
End of explanation
from elfi.examples import ma2
model = ma2.get_model(seed_obs=seed)
elfi.draw(model)
Explanation: Although BOLFI is best used with complicated simulators, for demonstration purposes we will use the familiar MA2 model introduced in the basic tutorial, and load it from ready-made examples:
End of explanation
log_d = elfi.Operation(np.log, model['d'])
Explanation: Fitting the surrogate model
Now we can immediately proceed with the inference. However, when dealing with a Gaussian process, it may be beneficial to take a logarithm of the discrepancies in order to reduce the effect that high discrepancies have on the GP. (Sometimes you may want to add a small constant to avoid very negative or even -Inf distances occurring, especially if it is likely that there can be exact matches between simulated and observed data.) In ELFI such a transformed node can be created easily:
End of explanation
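As a minimal sketch of the small-constant variant mentioned above (the offset 1e-10 and the node name log_d_eps are arbitrary illustrative choices, and this assumes elfi.Operation accepts an arbitrary callable just as it accepts np.log above):
log_d_eps = elfi.Operation(lambda d: np.log(d + 1e-10), model['d'])  # guards against log(0) -> -inf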
bolfi = elfi.BOLFI(log_d, batch_size=1, initial_evidence=20, update_interval=10,
bounds={'t1':(-2, 2), 't2':(-1, 1)}, acq_noise_var=[0.1, 0.1], seed=seed)
Explanation: As BOLFI is a more advanced inference method, its interface is also a bit more involved as compared to for example rejection sampling. But not much: Using the same graphical model as earlier, the inference could begin by defining a Gaussian process (GP) model, for which ELFI uses the GPy library. This could be given as an elfi.GPyRegression object via the keyword argument target_model. In this case, we are happy with the default that ELFI creates for us when we just give it each parameter some bounds as a dictionary.
Other notable arguments include the initial_evidence, which gives the number of initialization points sampled straight from the priors before starting to optimize the acquisition of points, update_interval which defines how often the GP hyperparameters are optimized, and acq_noise_var which defines the diagonal covariance of noise added to the acquired points. Note that in general BOLFI does not benefit from a batch_size higher than one, since the acquisition surface is updated after each batch (especially so if the noise is 0!).
End of explanation
%time post = bolfi.fit(n_evidence=200)
Explanation: Sometimes you may have some samples readily available. You could then initialize the GP model with a dictionary of previous results by giving initial_evidence=result.outputs.
The BOLFI class can now try to fit the surrogate model (the GP) to the relationship between parameter values and the resulting discrepancies. We'll request only 200 evidence points (including the initial_evidence defined above).
End of explanation
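If such previous results were at hand, the constructor call above might look roughly like this (a sketch only; result stands for a hypothetical earlier inference result on the same model, e.g. from a rejection sampler):
bolfi_warm = elfi.BOLFI(log_d, batch_size=1, initial_evidence=result.outputs, update_interval=10,
                        bounds={'t1': (-2, 2), 't2': (-1, 1)}, acq_noise_var=[0.1, 0.1], seed=seed)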
bolfi.target_model
bolfi.plot_state();
Explanation: (More on the returned BolfiPosterior object below.)
Note that in spite of the very few simulator runs, fitting the model took longer than any of the previous methods. Indeed, BOLFI is intended for scenarios where the simulator takes a lot of time to run.
The fitted target_model uses the GPy library, and can be investigated further:
End of explanation
bolfi.plot_discrepancy();
Explanation: It may be useful to see the acquired parameter values and the resulting discrepancies:
End of explanation
post2 = bolfi.extract_posterior(-1.)
Explanation: There could be an unnecessarily high number of points at parameter bounds. These could probably be decreased by lowering the covariance of the noise added to acquired points, defined by the optional acq_noise_var argument for the BOLFI constructor. Another possibility could be to add virtual derivative observations at the borders, though not yet implemented in ELFI.
BOLFI Posterior
Above, the fit method returned a BolfiPosterior object representing a BOLFI posterior (please see the paper for details). The fit method accepts a threshold parameter; if none is given, ELFI will use the minimum value of discrepancy estimate mean. Afterwards, one may request a posterior with a different threshold:
End of explanation
post.plot(logpdf=True)
Explanation: One can visualize a posterior directly (remember that the priors form a triangle):
End of explanation
%time result_BOLFI = bolfi.sample(1000, info_freq=1000)
Explanation: Sampling
Finally, samples from the posterior can be acquired with an MCMC sampler. By default it runs 4 chains, and half of the requested samples are spent in adaptation/warmup. Note that depending on the smoothness of the GP approximation, the number of priors, their gradients etc., this may be slow.
End of explanation
result_BOLFI
result_BOLFI.plot_traces();
Explanation: The sampling algorithms may be fine-tuned with some parameters. The default No-U-Turn-Sampler is a sophisticated algorithm, and in some cases one may get warnings about diverged proposals, which are signs that something may be wrong and should be investigated. It is good to understand the cause of these warnings, although they don't automatically mean that the results are unreliable. You could try rerunning the sample method with a higher target probability target_prob during adaptation, as its default 0.6 may be inadequate for non-smooth posteriors, but this will slow down the sampling.
Note also that since MCMC proposals outside the region allowed by either the model priors or GP bounds are rejected, a tight domain may lead to suboptimal overall acceptance ratio. In our MA2 case the prior defines a triangle-shaped uniform support for the posterior, making it a good example of a difficult model for the NUTS algorithm.
Now we finally have a Sample object again, which has several convenience methods:
End of explanation
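For instance, a rerun with a stricter adaptation target might look like this (a sketch; 0.9 is an arbitrary illustrative value for the target_prob parameter mentioned above, and the run will be slower):
result_BOLFI_strict = bolfi.sample(1000, target_prob=0.9)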
result_BOLFI.plot_marginals();
Explanation: The black vertical lines indicate the end of warmup, which by default is half of the number of iterations.
End of explanation |
13,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test pyIAST for match with competitive Langmuir model
In the case that the pure-component isotherms $N_{i,pure}(P)$ follow the Langmuir model with the same saturation loading $M$
Step1: Generate synthetic pure-component isotherm data, fit Langmuir models to them.
Model parameters ($M$, ${K_i}$)
Step2: Generate data according to Langmuir model, store in list of Pandas DataFrames
Step3: Use pyIAST to fit Langmuir models to the data, then plot fits
Step4: Plot synthetic data all in one plot for paper
Step5: Compare pyIAST predicted component loadings to that of competitive Langmuir
Let us consider a tertiary mixture of components 0, 1, and 2 above at a total pressure of total_pressure bar.
Step6: We will explore gas phase composition space (${y_i}$) by generating random compositions and checking that they are within the triangle. We do not want to get too close to a pure phase boundary because of numerical instability, so we keep a distance dx away from pure phases. We will perform num_tests tests.
Step7: Generate the compositions and store in list compositions
Step9: Next, we assert that pyIAST gives the same result as the competitive Langmuir isotherm for each of these compositions.
Function to compute loading according to competitive Langmuir
Step11: Function to compute loading according to pyIAST
Step12: Loop over compositions, assert pyIAST agrees with competitive Langmuir for each component. If this runs, then there is agreement!
Step13: This is using a custom library to plot the phase diagrams for the paper.
Use ternary to plot phase diagram
https | Python Code:
import numpy as np
import pyiast
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%config InlineBackend.rc = {'font.size': 13, 'lines.linewidth':3,\
'axes.facecolor':'w', 'legend.numpoints':1,\
'figure.figsize': (6.0, 4.0)}
%matplotlib inline
colors = ['b', 'g', 'r'] # for representing each component
component_names = {0: 'A', 1: 'B', 2:'C'}
Explanation: Test pyIAST for match with competitive Langmuir model
In the case that the pure-component isotherms $N_{i,pure}(P)$ follow the Langmuir model with the same saturation loading $M$:
$N_{i,pure} = M \frac{K_iP}{1+K_iP},$
The mixed gas adsorption isotherm follows the competitive Langmuir isotherm:
$N_i = M \frac{K_i p_i}{1 + \sum_j K_jp_j},$
where $p_i$ is the partial pressure of component $i$. Here, we generate synthetic pure-component adsorption isotherm data and confirm that pyIAST agrees with the competitive Langmuir isotherm for 3 components.
End of explanation
M = 1.0
langmuirKs = [2.0, 10.0, 20.0] # K_i
Explanation: Generate synthetic pure-component isotherm data, fit Langmuir models to them.
Model parameters ($M$, ${K_i}$)
End of explanation
pressure = np.logspace(-3, np.log10(10), 20)
dfs = [pd.DataFrame({'P': pressure,
'L': M * langmuirKs[i] * pressure / (
1.0 + langmuirKs[i] * pressure)})
for i in range(3)]
Explanation: Generate data according to Langmuir model, store in list of Pandas DataFrames
End of explanation
isotherms = [pyiast.ModelIsotherm(dfs[i], pressure_key='P',
loading_key='L', model='Langmuir')
for i in range(3)]
for i in range(len(isotherms)):
isotherms[i].print_params()
pyiast.plot_isotherm(isotherms[i])
Explanation: Use pyIAST to fit Langmuir models to the data, then plot fits
End of explanation
p_plot = np.logspace(-3, np.log10(11)) # for plotting
fig = plt.figure(facecolor='w')
for i in range(len(isotherms)):
plt.scatter(dfs[i]['P'], dfs[i]['L'], color=colors[i],
s=50, label=None)
plt.plot(p_plot, M * langmuirKs[i] * p_plot / (1.0 + langmuirKs[i] * p_plot),
color=colors[i], linewidth=2, label=r'$N_%s(P) = \frac{%d P}{1+%dP}$' % (
component_names[i], langmuirKs[i], langmuirKs[i]))
plt.xlim([-.05 * 10, 1.05 * 10])
plt.ylim([-.05 * M, M * 1.05])
plt.xlabel('Pressure (bar)')
plt.ylabel('Gas uptake (mmol/g)')
plt.legend(loc='lower right')
plt.tight_layout()
plt.savefig('pure_component_Langmuir.pdf', format='pdf',
facecolor=fig.get_facecolor())
plt.show()
Explanation: Plot synthetic data all in one plot for paper
End of explanation
total_pressure = 1.0
Explanation: Compare pyIAST predicted component loadings to that of competitive Langmuir
Let us consider a tertiary mixture of components 0, 1, and 2 above at a total pressure of total_pressure bar.
End of explanation
dx = 0.0001
num_tests = 100
Explanation: We will explore gas phase composition space (${y_i}$) by generating random compositions and checking that they are within the triangle. We do not want to get too close to a pure phase boundary because of numerical instability, so we keep a distance dx away from pure phases. We will perform num_tests tests.
End of explanation
compositions = []
test_no = 0
while test_no < num_tests:
# generate random compositions
y1 = np.random.uniform(dx, 1.0 - dx)
y2 = np.random.uniform(dx, 1.0 - dx)
y3 = 1.0 - y2 - y1
# check that composition is within the triangle
if y3 < dx:
continue
# viable composition
compositions.append([y1, y2, y3])
# keep generating until we have num_tests
test_no += 1
Explanation: Generate the compositions and store in list compositions
End of explanation
def competitive_langmuir_loading(partial_pressures, i):
    """Calculate loading of component i according to competitive Langmuir."""
    return M * langmuirKs[i] * partial_pressures[i] / (
        1.0 + np.dot(langmuirKs, partial_pressures))
Explanation: Next, we assert that pyIAST gives the same result as the competitive Langmuir isotherm for each of these compositions.
Function to compute loading according to competitive Langmuir
End of explanation
def iast_loading(partial_pressures, i):
    """Calculate loading of component i according to IAST.

    partial_pressures: Array, partial pressures of each component
    i: component in the mixture
    """
    component_loadings = pyiast.iast(partial_pressures, isotherms)
    return component_loadings[i]
Explanation: Function to compute loading according to pyIAST
End of explanation
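Before the full loop below, a single spot check makes the comparison concrete (an illustrative equimolar gas-phase composition; any point inside the triangle would do):
p_spot = np.array([1.0/3, 1.0/3, 1.0/3]) * total_pressure
print([competitive_langmuir_loading(p_spot, c) for c in range(3)])
print([iast_loading(p_spot, c) for c in range(3)])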
for i in range(num_tests):
partial_pressure = np.array(compositions[i]) * total_pressure
# for each component...
for c in range(len(langmuirKs)):
np.testing.assert_almost_equal(
competitive_langmuir_loading(partial_pressure, c),
iast_loading(partial_pressure, c), decimal=4)
Explanation: Loop over compositions, assert pyIAST agrees with competitive Langmuir for each component. If this runs, then there is agreement!
End of explanation
import ternary
scale = 10 # resolution in triangle
axis_colors = {'l':colors[1], 'r':colors[0], 'b':colors[2]}
cmaps = ["Blues", "Greens", "Reds"]
iast_or_lang = 'iast' # plot results for IAST or for Langmuir isotherm?
for c in range(3):
if iast_or_lang == 'lang':
f = lambda p: competitive_langmuir_loading(p, c)
else:
f = lambda p: iast_loading(p, c)
# loop over component
fig, ax = plt.subplots(facecolor='w')
ax.axis("off")
figure, tax = ternary.figure(ax=ax, scale=scale)
tax.heatmapf(f, boundary=False,
style="hexagonal", cmap=plt.cm.get_cmap(cmaps[c]),
vmax=M, vmin=0.0,
cbarlabel="%s uptake (mmol/g)" % component_names[c])
tax.boundary(linewidth=2.0, axes_colors=axis_colors)
tax.left_axis_label("$p_B$ (bar)", color=axis_colors['l'], offset=0.16)
tax.right_axis_label("$p_A$ (bar)", color=axis_colors['r'], offset=0.16)
tax.bottom_axis_label("$p_C$ (bar)", color=axis_colors['b'], offset=-0.06)
tax.gridlines(color="blue", multiple=1, linewidth=2,
horizontal_kwargs={'color':axis_colors['b']},
left_kwargs={'color':axis_colors['l']},
right_kwargs={'color':axis_colors['r']},
alpha=0.7) # Every 5th gridline, can be a float
tax.ticks(axis='rlb', linewidth=1, locations=np.arange(scale+1), clockwise=True,
axes_colors=axis_colors,
ticks=["%.1f" % (1.0 * i / scale) for i in range(scale+1)], offset=0.03)
tax.clear_matplotlib_ticks()
tax._redraw_labels()
# if iast_or_lang == 'iast':
# tax.set_title("IAST uptake, component %d" % c, y=1.08, fontsize=14)
# if iast_or_lang == 'lang':
# tax.set_title("Competitive Langmuir uptake, component %d" % c, y=1.08, fontsize=14)
plt.tight_layout()
if iast_or_lang == 'iast':
plt.savefig("Tertiary_diagram_IAST_component_%d.pdf" % c, format='pdf',
facecolor=fig.get_facecolor())
if iast_or_lang == 'lang':
plt.savefig("Tertiary_diagram_Langmuir_component_%d.pdf" % c, format='pdf',
facecolor=fig.get_facecolor())
tax.show()
Explanation: This is using a custom library to plot the phase diagrams for the paper.
Use ternary to plot phase diagram
https://github.com/marcharper/python-ternary
End of explanation |
13,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FLUORESCENCE BINDING ASSAY ANALYSIS
Experiment date
Step1: Calculating Molar Fluorescence (MF) of Free Ligand
1. Maximum likelihood curve-fitting
Find the maximum likelihood estimate, $\theta^*$, i.e. the curve that minimizes the squared error $\theta^* = \text{argmin}_\theta \sum_i |y_i - f_\theta(x_i)|^2$ (assuming i.i.d. Gaussian noise)
Y = MF*L + BKG
Y
Step2: Curve-fitting to binding saturation curve
Fluorescence intensity vs added ligand
LR = ((X+Rtot+KD) - SQRT((X+Rtot+KD)^2 - 4*X*Rtot))/2
L = X - LR
Y = BKG + MF*L + FR*MF*LR
Constants
Rtot | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from lxml import etree
import pandas as pd
import os
import matplotlib.cm as cm
import seaborn as sns
%pylab inline
# Get read and position data of each fluorescence reading section
def get_wells_from_section(path):
reads = path.xpath("*/Well")
wellIDs = [read.attrib['Pos'] for read in reads]
data = [(float(s.text), r.attrib['Pos'])
for r in reads
for s in r]
datalist = {
well : value
for (value, well) in data
}
welllist = [
[
datalist[chr(64 + row) + str(col)]
if chr(64 + row) + str(col) in datalist else None
for row in range(1,9)
]
for col in range(1,13)
]
return welllist
file_lig="MI_FLU_hsa_lig2_20150922_164254.xml"
file_name = os.path.splitext(file_lig)[0]
label = file_name[0:25]
print label
root = etree.parse(file_lig)
#find data sections
Sections = root.xpath("/*/Section")
much = len(Sections)
print "****The xml file " + file_lig + " has %s data sections:****" % much
for sect in Sections:
print sect.attrib['Name']
#Work with topread
TopRead = root.xpath("/*/Section")[0]
welllist = get_wells_from_section(TopRead)
df_topread = pd.DataFrame(welllist, columns = ['A - HSA','B - Buffer','C - HSA','D - Buffer', 'E - HSA','F - Buffer','G - HSA','H - Buffer'])
df_topread.transpose()
# To generate cvs file
# df_topread.transpose().to_csv(label + Sections[0].attrib['Name']+ ".csv")
Explanation: FLUORESCENCE BINDING ASSAY ANALYSIS
Experiment date: 2015/09/22
Protein: HSA
Fluorescent ligand : dansyl glycine(lig2)
Xml parsing parts adopted from Sonya's assaytools/examples/fluorescence-binding-assay/Src-gefitinib fluorescence simple.ipynb
End of explanation
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
%matplotlib inline
def model(x,slope,intercept):
''' 1D linear model in the format scipy.optimize.curve_fit expects: '''
return x*slope + intercept
# generate some data
#X = np.random.rand(1000)
#true_slope=1.0
#true_intercept=0.0
#noise = np.random.randn(len(X))*0.1
#Y = model(X,slope=true_slope,intercept=true_intercept) + noise
#ligand titration
lig2=np.array([200.0000,86.6000,37.5000,16.2000,7.0200, 3.0400, 1.3200, 0.5700, 0.2470, 0.1070, 0.0462, 0.0200])
lig2
# Since I have 4 replicates
L=np.concatenate((lig2, lig2, lig2, lig2))
len(L)
# Fluorescence read
df_topread.loc[:,("B - Buffer", "D - Buffer", "F - Buffer", "H - Buffer")]
B=df_topread.loc[:,("B - Buffer")]
D=df_topread.loc[:,("D - Buffer")]
F=df_topread.loc[:,("F - Buffer")]
H=df_topread.loc[:,("H - Buffer")]
Y = np.concatenate((B.as_matrix(),D.as_matrix(),F.as_matrix(),H.as_matrix()))
(MF,BKG),_ = optimize.curve_fit(model,L,Y)
print('MF: {0:.3f}, BKG: {1:.3f}'.format(MF,BKG))
print('y = {0:.3f} * L + {1:.3f}'.format(MF, BKG))
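# Optional visual check of the linear fit (illustrative only; reuses the arrays defined above)
plt.scatter(L, Y, alpha=0.5, label='buffer wells')
plt.plot(lig2, model(lig2, MF, BKG), 'r-', label='maximum likelihood fit')
plt.xlabel('total ligand concentration (uM)')
plt.ylabel('fluorescence')
plt.legend(loc='upper left');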
Explanation: Calculating Molar Fluorescence (MF) of Free Ligand
1. Maximum likelihood curve-fitting
Find the maximum likelihood estimate, $\theta^*$, i.e. the curve that minimizes the squared error $\theta^* = \text{argmin}_\theta \sum_i |y_i - f_\theta(x_i)|^2$ (assuming i.i.d. Gaussian noise)
Y = MF*L + BKG
Y: Fluorescence read (Flu unit)
L: Total ligand concentration (uM)
BKG: background fluorescence without ligand (Flu unit)
MF: molar fluorescence of free ligand (Flu unit/ uM)
End of explanation
def model2(x, kd, fr):
    '''Single-site binding saturation model in the format scipy.optimize.curve_fit expects.'''
    # lr = ((x+rtot+kd) - ((x+rtot+kd)**2 - 4*x*rtot)**0.5) / 2
    # y = bkg + mf*(x - lr) + fr*mf*lr
    bkg = 86.2
    mf = 2.517
    rtot = 0.5
    return bkg + mf*(x - ((x+rtot+kd)-((x+rtot+kd)**2-4*x*rtot)**0.5)/2) + fr*mf*(((x+rtot+kd)-((x+rtot+kd)**2-4*x*rtot)**0.5)/2)
# Total HSA concentration (uM)
Rtot = 0.5
#Total ligand titration
X = L
len(X)
# Fluorescence read
df_topread.loc[:,("A - HSA", "C - HSA", "E - HSA", "G - HSA")]
A=df_topread.loc[:,("A - HSA")]
C=df_topread.loc[:,("C - HSA")]
E=df_topread.loc[:,("E - HSA")]
G=df_topread.loc[:,("G - HSA")]
Y = np.concatenate((A.as_matrix(),C.as_matrix(),E.as_matrix(),G.as_matrix()))
len(Y)
(Kd,FR),_ = optimize.curve_fit(model2, X, Y, p0=(5,1))
print('Kd: {0:.3f}, Fr: {1:.3f}'.format(Kd,FR))
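# Optional visual check of the saturation fit (illustrative only; reuses X, Y, Kd, FR from above)
xfine = np.logspace(-2, np.log10(200), 200)
plt.semilogx(X, Y, 'o', alpha=0.4, label='HSA wells')
plt.semilogx(xfine, model2(xfine, Kd, FR), 'r-', label='fit, Kd = %.2f uM' % Kd)
plt.xlabel('total ligand concentration (uM)')
plt.ylabel('fluorescence')
plt.legend(loc='lower right');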
Explanation: Curve-fitting to binding saturation curve
Fluorescence intensity vs added ligand
LR = ((X+Rtot+KD) - SQRT((X+Rtot+KD)^2 - 4*X*Rtot))/2
L = X - LR
Y = BKG + MF*L + FR*MF*LR
Constants
Rtot: receptor concentration (uM)
BKG: background fluorescence without ligand (Flu unit)
MF: molar fluorescence of free ligand (Flu unit/ uM)
Parameters to fit
Kd: dissociation constant (uM)
FR: Molar fluorescence ratio of complex to free ligand (unitless)
complex fluorescence = FR*MF*LR
Experimental data
Y: fluorescence measurement
X: total ligand concentration
L: free ligand concentration
End of explanation |
13,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka
back to the matplotlib-gallery at https
Step1: <font size="1.5em">More info about the %watermark extension</font>
Step2: <br>
<br>
Formatting I
Step3: <br>
<br>
m x n subplots
[back to top]
Step4: <br>
<br>
Labeling a subplot grid like a matrix
[back to top]
Step5: <br>
<br>
Shared X- and Y-axes
[back to top]
Step6: <br>
<br>
Setting title and labels
[back to top]
Step7: <br>
<br>
Hiding redundant subplots
[back to top]
Sometimes we create more subplots for a rectangular layout (here
Step8: <br>
<br>
Defining colors
[back to top]
<br>
<br>
3 ways to define colors
[back to top]
Matplotlib supports 3 different ways to encode colors, e.g, if we want to use the color blue, we can define colors as
RGB color values (range 0.0 to 1.0) -> (0.0, 0.0, 1.0)
matplotlib supported names -> 'blue' or 'b'
HTML hex values -> '#0000FF'
Step9: <br>
<br>
matplotlib color names
[back to top]
The color names that are supported by matplotlib are
b
Step10: <br>
<br>
Colormaps
[back to top]
More color maps are available at http
Step11: <br>
<br>
Gray-levels
[back to top]
Step12: <br>
<br>
Edgecolors for scatter plots
[back to top]
Step13: <br>
<br>
Color gradients
[back to top]
Step14: <br>
<br>
Marker styles
[back to top]
Step15: <br>
<br>
Line styles
[back to top]
Step16: <br>
<br>
Fancy and transparent legends
[back to top]
Step17: <br>
<br>
Hiding axes
[back to top]
<br>
<br>
Hiding axis ticks and labels
[back to top]
Step18: <br>
<br>
Removing frame and ticks
[back to top]
Step19: <br>
<br>
Aesthetic axis layout
[back to top]
Step20: <br>
<br>
Custom tick labels
[back to top]
<br>
<br>
Text and rotation
[back to top]
Step21: <br>
<br>
Adding a constant value to axis labels
[back to top]
Step22: <br>
<br>
<br>
<br>
Applying customization and settings globally
[back to top]
Everyone has a different perception of "style", and usually, we would make some little adjustments to matplotlib's default visuals here and there. After customization, it would be tedious to repeat the same code over and over again every time we produce a new plot.
However we have multiple options to apply the changes globally.
<br>
<br>
Settings for the active session only
[back to top]
Here, we are only interested in the settings for the current session. In this case, one way to customize matplotlib's defaults would be the 'rcParams' attribute (in the next section, you will see a handy reference for all the different matplotlib settings). E.g., if we want to make the font size of our titles larger for all plots that follow in the active session, we could type the following
Step23: Let's see how it looks like
Step24: And if we want to revert it back to the default settings, we can use the command
Step25: Note that we have to re-execute the matplotlib inline magic function afterwards
Step26: <br>
<br>
Modifying the matplotlibrc file
[back to top]
Let's assume that we decided to always prefer a particular setting over matplotlib's default (e.g., a larger font size for the title like in the previous section), we can make a change to the matplotlibrc file
Step27: If we open this file in an editor, we will see an overview of all the different matplotlib default settings and their default values. We can use this list either as a reference to apply changes dynamically (see the previous section), or we can un-comment this line here and change its default value.
E.g., we want to change the title size again, we could change the following line | Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
Explanation: Sebastian Raschka
back to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery
End of explanation
%matplotlib inline
Explanation: <font size="1.5em">More info about the %watermark extension</font>
End of explanation
import numpy as np
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig, ax = plt.subplots(2)
for sp in ax:
sp.plot(x, y)
Explanation: <br>
<br>
Formatting I: subplots, markers, colors, axes
<br>
<br>
Sections
Subplots
m x n subplots
Labeling a subplot grid like a matrix
Shared X- and Y-axes
Setting title and labels
Hiding redundant subplots
Defining Colors
3 ways to define colors
matplotlib color names
Colormaps
Gray-levels
Edgecolors for scatter plots
Color gradients
Marker styles
Line styles
Fancy and transparent legends
Hiding axes
Hiding axis ticks and labels
Removing frame and ticks
Aesthetic axis layout
Custom tick labels
Text and rotation
Adding a constant value to axis labels
Applying customization and settings globally
Settings for the active session only
Modifying the matplotlibrc file
<br>
<br>
Subplots
[back to top]
End of explanation
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig, ax = plt.subplots(nrows=2,ncols=2)
for row in ax:
for col in row:
col.plot(x, y)
plt.show()
fig, ax = plt.subplots(nrows=2,ncols=2)
plt.subplot(2,2,1)
plt.plot(x, y)
plt.subplot(2,2,2)
plt.plot(x, y)
plt.subplot(2,2,3)
plt.plot(x, y)
plt.subplot(2,2,4)
plt.plot(x, y)
plt.show()
Explanation: <br>
<br>
m x n subplots
[back to top]
End of explanation
import matplotlib.pyplot as plt
import numpy as np
fig, axes = plt.subplots(nrows=3, ncols=3,
sharex=True, sharey=True,
figsize=(8,8)
)
x = range(5)
y = range(5)
for row in axes:
for col in row:
col.plot(x, y)
for ax, col in zip(axes[0,:], ['1', '2', '3']):
ax.set_title(col, size=20)
for ax, row in zip(axes[:,0], ['A', 'B', 'C']):
ax.set_ylabel(row, size=20, rotation=0, labelpad=15)
plt.show()
Explanation: <br>
<br>
Labeling a subplot grid like a matrix
[back to top]
End of explanation
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig, ax = plt.subplots(nrows=2,ncols=2, sharex=True, sharey=True)
for row in ax:
for col in row:
col.plot(x, y)
plt.show()
Explanation: <br>
<br>
Shared X- and Y-axes
[back to top]
End of explanation
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig, ax = plt.subplots(nrows=2,ncols=2)
for row in ax:
for col in row:
col.plot(x, y)
col.set_title('title')
col.set_xlabel('x-axis')
col.set_ylabel('x-axis')
fig.tight_layout()
plt.show()
Explanation: <br>
<br>
Setting title and labels
[back to top]
End of explanation
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig, axes = plt.subplots(nrows=3,ncols=3)
for cnt, ax in enumerate(axes.ravel()):
if cnt < 7:
ax.plot(x, y)
else:
ax.axis('off') # hide subplot
plt.show()
Explanation: <br>
<br>
Hiding redundant subplots
[back to top]
Sometimes we create more subplots for a rectangular layout (here: 3x3) than we actually need. Here is how we hide those redundant subplots. Let's assume that we only want to show the first 7 subplots:
End of explanation
import matplotlib.pyplot as plt
samples = range(1,4)
for i, col in zip(samples, [(0.0, 0.0, 1.0), 'blue', '#0000FF']):
plt.plot([0, 10], [0, i], lw=3, color=col)
plt.legend(['RGB values: (0.0, 0.0, 1.0)',
"matplotlib names: 'blue'",
"HTML hex values: '#0000FF'"],
loc='upper left')
plt.title('3 alternatives to define the color blue')
plt.show()
Explanation: <br>
<br>
Defining colors
[back to top]
<br>
<br>
3 ways to define colors
[back to top]
Matplotlib supports 3 different ways to encode colors, e.g., if we want to use the color blue, we can define colors as
RGB color values (range 0.0 to 1.0) -> (0.0, 0.0, 1.0)
matplotlib supported names -> 'blue' or 'b'
HTML hex values -> '#0000FF'
End of explanation
import matplotlib.pyplot as plt
cols = ['blue', 'green', 'red', 'cyan', 'magenta', 'yellow', 'black', 'white']
samples = range(1, len(cols)+1)
for i, col in zip(samples, cols):
plt.plot([0, 10], [0, i], label=col, lw=3, color=col)
plt.legend(loc='upper left')
plt.title('matplotlib color names')
plt.show()
Explanation: <br>
<br>
matplotlib color names
[back to top]
The color names that are supported by matplotlib are
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
where the first letter represents the shortcut version.
End of explanation
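As a quick sketch of those single-letter shortcuts in use (plain matplotlib, nothing specific to this gallery):
plt.plot([0, 10], [0, 1], 'r', lw=3, label="'r' = red")
plt.plot([0, 10], [0, 2], 'k', lw=3, label="'k' = black")
plt.legend(loc='upper left')
plt.show()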
import numpy as np
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(1,2, figsize=(14, 7))
samples = range(1,16)
# Default Color Cycle
for i in samples:
ax0.plot([0, 10], [0, i], label=i, lw=3)
# Colormap
colormap = plt.cm.Paired
plt.gca().set_color_cycle([colormap(i) for i in np.linspace(0, 0.9, len(samples))])
for i in samples:
ax1.plot([0, 10], [0, i], label=i, lw=3)
# Annotation
ax0.set_title('Default color cycle')
ax1.set_title('plt.cm.Paired colormap')
ax0.legend(loc='upper left')
ax1.legend(loc='upper left')
plt.show()
Explanation: <br>
<br>
Colormaps
[back to top]
More color maps are available at http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
End of explanation
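To see which colormap names your own matplotlib installation provides, one quick check (illustrative) is:
import matplotlib.pyplot as plt
print(plt.colormaps())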
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(8,6))
samples = np.arange(0, 1.1, 0.1)
for i in samples:
plt.plot([0, 10], [0, i], label='gray-level %s'%i, lw=3,
color=str(i)) # ! gray level has to be parsed as string
plt.legend(loc='upper left')
plt.title('gray-levels')
plt.show()
Explanation: <br>
<br>
Gray-levels
[back to top]
End of explanation
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10,10))
samples = np.random.randn(30,2)
ax[0][0].scatter(samples[:,0], samples[:,1],
color='red',
label='color="red"')
ax[1][0].scatter(samples[:,0], samples[:,1],
c='red',
label='c="red"')
ax[0][1].scatter(samples[:,0], samples[:,1],
edgecolor='white',
c='red',
label='c="red", edgecolor="white"')
ax[1][1].scatter(samples[:,0], samples[:,1],
edgecolor='0',
c='1',
label='color="1.0", edgecolor="0"')
for row in ax:
for col in row:
col.legend(loc='upper left')
plt.show()
Explanation: <br>
<br>
Edgecolors for scatter plots
[back to top]
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as col
import matplotlib.cm as cm
import numpy as np
# input data
mean_values = np.random.randint(1, 101, 100)
x_pos = range(len(mean_values))
fig = plt.figure(figsize=(20,5))
# create colormap
cmap = cm.ScalarMappable(col.Normalize(min(mean_values),
max(mean_values),
cm.hot))
# plot bars
plt.subplot(131)
plt.bar(x_pos, mean_values, align='center', alpha=0.5,
color=cmap.to_rgba(mean_values))
plt.ylim(0, max(mean_values) * 1.1)
plt.subplot(132)
plt.bar(x_pos, np.sort(mean_values), align='center', alpha=0.5,
color=cmap.to_rgba(mean_values))
plt.ylim(0, max(mean_values) * 1.1)
plt.subplot(133)
plt.bar(x_pos, np.sort(mean_values), align='center', alpha=0.5,
color=cmap.to_rgba(np.sort(mean_values)))
plt.ylim(0, max(mean_values) * 1.1)
plt.show()
Explanation: <br>
<br>
Color gradients
[back to top]
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
markers = [
'.', # point
',', # pixel
'o', # circle
'v', # triangle down
'^', # triangle up
'<', # triangle_left
'>', # triangle_right
'1', # tri_down
'2', # tri_up
'3', # tri_left
'4', # tri_right
'8', # octagon
's', # square
'p', # pentagon
'*', # star
'h', # hexagon1
'H', # hexagon2
'+', # plus
'x', # x
'D', # diamond
'd', # thin_diamond
'|', # vline
]
plt.figure(figsize=(13, 10))
samples = range(len(markers))
for i in samples:
plt.plot([i-1, i, i+1], [i, i, i], label=markers[i], marker=markers[i], markersize=10)
# Annotation
plt.title('Matplotlib Marker styles', fontsize=20)
plt.ylim([-1, len(markers)+1])
plt.legend(loc='lower right')
plt.show()
Explanation: <br>
<br>
Marker styles
[back to top]
End of explanation
import numpy as np
import matplotlib.pyplot as plt
linestyles = ['-.', '--', 'None', '-', ':']
plt.figure(figsize=(8, 5))
samples = range(len(linestyles))
for i in samples:
plt.plot([i-1, i, i+1], [i, i, i],
label='"%s"' %linestyles[i],
linestyle=linestyles[i],
lw=4
)
# Annotation
plt.title('Matplotlib line styles', fontsize=20)
plt.ylim([-1, len(linestyles)+1])
plt.legend(loc='lower right')
plt.show()
Explanation: <br>
<br>
Line styles
[back to top]
End of explanation
import numpy as np
import matplotlib.pyplot as plt
X1 = np.random.randn(100,2)
X2 = np.random.randn(100,2)
X3 = np.random.randn(100,2)
R1 = (X1**2).sum(axis=1)
R2 = (X2**2).sum(axis=1)
R3 = (X3**2).sum(axis=1)
plt.scatter(X1[:,0], X1[:,1],
c='blue',
marker='o',
s=32. * R1,
edgecolor='black',
label='Dataset X1',
alpha=0.7)
plt.scatter(X2[:,0], X2[:,1],
c='gray',
marker='s',
s=32. * R2,
edgecolor='black',
label='Dataset X2',
alpha=0.7)
plt.scatter(X2[:,0], X3[:,1],
c='green',
marker='^',
s=32. * R3,
edgecolor='black',
label='Dataset X3',
alpha=0.7)
plt.xlim([-3,3])
plt.ylim([-3,3])
leg = plt.legend(loc='upper left', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.show()
Explanation: <br>
<br>
Fancy and transparent legends
[back to top]
End of explanation
import numpy as np
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig = plt.gca()
plt.plot(x, y)
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
plt.show()
Explanation: <br>
<br>
Hiding axes
[back to top]
<br>
<br>
Hiding axis ticks and labels
[back to top]
End of explanation
import numpy as np
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
fig = plt.gca()
plt.plot(x, y)
# removing frame
fig.spines["top"].set_visible(False)
fig.spines["bottom"].set_visible(False)
fig.spines["right"].set_visible(False)
fig.spines["left"].set_visible(False)
# removing ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
plt.show()
Explanation: <br>
<br>
Removing frame and ticks
[back to top]
End of explanation
import numpy as np
import math
import matplotlib.pyplot as plt
X = np.random.normal(loc=0.0, scale=1.0, size=300)
width = 0.5
bins = np.arange(math.floor(X.min())-width,
math.ceil(X.max())+width,
width) # fixed bin size
ax = plt.subplot(111)
# remove axis at the top and to the right
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
# hide axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
plt.hist(X, alpha=0.5, bins=bins)
plt.grid()
plt.xlabel('x label')
plt.ylabel('y label')
plt.title('title')
plt.show()
Explanation: <br>
<br>
Aesthetic axis layout
[back to top]
End of explanation
import matplotlib.pyplot as plt
x = range(10)
y = range(10)
labels = ['super long axis label' for i in range(10)]
fig, ax = plt.subplots()
plt.plot(x, y)
# set custom tick labels
ax.set_xticklabels(labels, rotation=45, horizontalalignment='right')
plt.show()
Explanation: <br>
<br>
Custom tick labels
[back to top]
<br>
<br>
Text and rotation
[back to top]
End of explanation
import matplotlib.pyplot as plt
CONST = 10
x = range(10)
y = range(10)
labels = [i+CONST for i in x]
fig, ax = plt.subplots()
plt.plot(x, y)
plt.xlabel('x-value + 10')
# set custom tick labels
ax.set_xticklabels(labels)
plt.show()
Explanation: <br>
<br>
Adding a constant value to axis labels
[back to top]
End of explanation
import matplotlib as mpl
mpl.rcParams['axes.titlesize'] = '20'
Explanation: <br>
<br>
<br>
<br>
Applying customization and settings globally
[back to top]
Everyone has a different perception of "style", and usually, we would make some little adjustments to matplotlib's default visuals here and there. After customization, it would be tedious to repeat the same code over and over again every time we produce a new plot.
However we have multiple options to apply the changes globally.
<br>
<br>
Settings for the active session only
[back to top]
Here, we are only interested in the settings for the current session. In this case, one way to customize matplotlib's defaults would be the 'rcParams' attribute (in the next section, you will see a handy reference for all the different matplotlib settings). E.g., if we want to make the font size of our titles larger for all plots that follow in the active session, we could type the following:
End of explanation
from matplotlib import pyplot as plt
x = range(10)
y = range(10)
plt.plot(x, y)
plt.title('larger title')
plt.show()
Explanation: Let's see what it looks like:
End of explanation
mpl.rcdefaults()
Explanation: And if we want to revert it back to the default settings, we can use the command:
End of explanation
%matplotlib inline
plt.plot(x, y)
plt.title('default title size')
plt.show()
Explanation: Note that we have to re-execute the matplotlib inline magic function afterwards:
End of explanation
import matplotlib
matplotlib.matplotlib_fname()
Explanation: <br>
<br>
Modifying the matplotlibrc file
[back to top]
Let's assume that we decided to always prefer a particular setting over matplotlib's default (e.g., a larger font size for the title like in the previous section); in that case, we can make the change in the matplotlibrc file. This way we'd avoid having to change the setting every time we start a new session or produce a new plot.
The matplotlibrc file can reside in different places depending on your system. A convenient way to find out the location is to use the matplotlib_fname() function.
End of explanation
from matplotlib import rc_file
rc_file('/path/to/matplotlibrc_journalX')
import matplotlib.pyplot as plt
Explanation: If we open this file in an editor, we will see an overview of all the different matplotlib default settings and their default values. We can use this list either as a reference to apply changes dynamically (see the previous section), or we can un-comment this line here and change its default value.
E.g., if we want to change the title size again, we could change the following line:
axes.titlesize : 20
One thing to keep in mind is that this file becomes overwritten if we install a new version of matplotlib, and it is always recommended to keep backups as you change the settings (the file with its original default settings can be found here).
Sometimes, we even want to keep and use multiple matplotlibrc files, e.g., if we are writing articles for different journals and every journal has its own requirement for figure formats.
In this case, we can load our own rc files from a different local directory:
End of explanation |
13,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Researching a Pairs Trading Strategy
By Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Pairs trading is a nice example of a strategy based on mathematical analysis. The principle is as follows. Let's say you have a pair of securities X and Y that have some underlying economic link. An example might be two companies that manufacture the same product, or two companies in one supply chain. We'll start by constructing an artificial example.
Step1: Explaining the Concept
Step2: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
Step3: Def
Step4: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient test that lives in statsmodels.tsa.stattools. We should see a very low p-value, as we've artificially created two series that are as cointegrated as physically possible.
Step5: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
Step6: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
Step7: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
Step8: Sure enough, the correlation is incredibly low, but the p-value shows perfect cointegration.
Def
Step9: Looking for Cointegrated Pairs of Alternative Energy Securities
I'm looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
Step10: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
Step11: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
Step12: Now we'll run our method on the list and see if any pairs are cointegrated.
Step13: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
Step14: We'll plot the spread of the two series.
Step15: The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score. This way we associate probabilities to the signals we see. If we see a z-score of 1, we know that approximately 84% of all spread values will be smaller.
Step16: Simple Strategy
Step17: We can use the moving averages to compute the z-score of the difference at each given time. This will tell us how extreme the difference is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
Step18: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the differences were all negative and that's kinda confusing. | Python Code:
import numpy as np
import pandas as pd
import statsmodels
from statsmodels.tsa.stattools import coint
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
Explanation: Researching a Pairs Trading Strategy
By Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Pairs trading is a nice example of a strategy based on mathematical analysis. The principle is as follows. Let's say you have a pair of securities X and Y that have some underlying economic link. An example might be two companies that manufacture the same product, or two companies in one supply chain. We'll start by constructing an artificial example.
End of explanation
X_returns = np.random.normal(0, 1, 100) # Generate the daily returns
# sum them and shift all the prices up into a reasonable range
X = pd.Series(np.cumsum(X_returns), name='X') + 50
X.plot()
Explanation: Explaining the Concept: We start by generating two fake securities.
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
End of explanation
some_noise = np.random.normal(0, 1, 100)
Y = X + 5 + some_noise
Y.name = 'Y'
pd.concat([X, Y], axis=1).plot()
Explanation: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
End of explanation
(Y-X).plot() # Plot the spread
plt.axhline((Y-X).mean(), color='red', linestyle='--') # Add the mean
Explanation: Def: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a "different" form of correlation (very loosely speaking). The spread between two cointegrated timeseries will vary around a mean. The expected value of the spread over time must converge to the mean for pairs trading to work. Another way to think about this is that cointegrated timeseries might not necessarily follow a similar path to the same destination, but they both end up at this destination.
We'll plot the spread between the two now so we can see how this looks.
End of explanation
# compute the p-value of the cointegration test
# will inform us as to whether the spread btwn the 2 timeseries is stationary
# around its mean
score, pvalue, _ = coint(X,Y)
print pvalue
Explanation: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient test that lives in statsmodels.tsa.stattools. We should see a very low p-value, as we've artificially created two series that are as cointegrated as physically possible.
End of explanation
X.corr(Y)
Explanation: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
End of explanation
X_returns = np.random.normal(1, 1, 100)
Y_returns = np.random.normal(2, 1, 100)
X_diverging = pd.Series(np.cumsum(X_returns), name='X')
Y_diverging = pd.Series(np.cumsum(Y_returns), name='Y')
pd.concat([X_diverging, Y_diverging], axis=1).plot()
print 'Correlation: ' + str(X_diverging.corr(Y_diverging))
score, pvalue, _ = coint(X_diverging,Y_diverging)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
End of explanation
Y2 = pd.Series(np.random.normal(0, 1, 1000), name='Y2') + 20
Y3 = Y2.copy()
# Y2 = Y2 + 10
Y3[0:100] = 30
Y3[100:200] = 10
Y3[200:300] = 30
Y3[300:400] = 10
Y3[400:500] = 30
Y3[500:600] = 10
Y3[600:700] = 30
Y3[700:800] = 10
Y3[800:900] = 30
Y3[900:1000] = 10
Y2.plot()
Y3.plot()
plt.ylim([0, 40])
# correlation is nearly zero
print 'Correlation: ' + str(Y2.corr(Y3))
score, pvalue, _ = coint(Y2,Y3)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
End of explanation
def find_cointegrated_pairs(securities_panel):
n = len(securities_panel.minor_axis)
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = securities_panel.keys
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = securities_panel.minor_xs(securities_panel.minor_axis[i])
S2 = securities_panel.minor_xs(securities_panel.minor_axis[j])
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((securities_panel.minor_axis[i], securities_panel.minor_axis[j]))
return score_matrix, pvalue_matrix, pairs
Explanation: Sure enough, the correlation is incredibly low, but the p-value shows perfect cointegration.
Def: Hedged Position
Because you'd like to protect yourself from bad markets, oftentimes short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick: Where it all comes together
Because the securities drift towards and apart from each other, there will be times when the distance is high and times when the distance is low. The trick of pairs trading comes from maintaining a hedged position across X and Y. If both securities go down, we neither make nor lose money, and likewise if both go up. We make money on the difference of the two reverting to the mean. In order to do this we'll watch for when X and Y are far apart, then short Y and long X. Similarly we'll watch for when they're close together, and long Y and short X.
Finding real securities that behave like this
The best way to do this is to start with securities you suspect may be cointegrated and perform a statistical test. If you just run statistical tests over all pairs, you'll fall prey to multiple comparison bias.
Here's a method I wrote to look through a list of securities and test for cointegration between all pairs. It returns a cointegration test score matrix, a p-value matrix, and any pairs for which the p-value was less than 0.05.
End of explanation
symbol_list = ['ABGB', 'ASTI', 'CSUN', 'DQ', 'FSLR','SPY']
securities_panel = get_pricing(symbol_list, fields=['price']
, start_date='2014-01-01', end_date='2015-01-01')
securities_panel.minor_axis = map(lambda x: x.symbol, securities_panel.minor_axis)
Explanation: Looking for Cointegrated Pairs of Alternative Energy Securities
I'm looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
End of explanation
securities_panel.loc['price'].head(5)
Explanation: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
End of explanation
securities_panel.minor_xs('SPY').head(5)
Explanation: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
End of explanation
# Heatmap to show the p-values of the cointegration test between each pair of
# stocks. Only show the value in the upper-diagonal of the heatmap
# (Just showing a '1' for everything in lower diagonal)
scores, pvalues, pairs = find_cointegrated_pairs(securities_panel)
import seaborn
seaborn.heatmap(pvalues, xticklabels=symbol_list, yticklabels=symbol_list, cmap='RdYlGn_r'
, mask = (pvalues >= 0.95)
)
print pairs
Explanation: Now we'll run our method on the list and see if any pairs are cointegrated.
End of explanation
S1 = securities_panel.loc['price']['ABGB']
S2 = securities_panel.loc['price']['FSLR']
score, pvalue, _ = coint(S1, S2)
pvalue
Explanation: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
End of explanation
diff_series = S1 - S2
diff_series.plot()
plt.axhline(diff_series.mean(), color='black')
Explanation: We'll plot the spread of the two series.
End of explanation
def zscore(series):
return (series - series.mean()) / np.std(series)
zscore(diff_series).plot()
plt.axhline(zscore(diff_series).mean(), color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
Explanation: The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score. This way we associate probabilities to the signals we see. If we see a z-score of 1, we know that approximately 84% of all spread values will be smaller.
End of explanation
# Get the difference in prices between the 2 stocks
difference = S1 - S2
difference.name = 'diff'
# Get the 10 day moving average of the difference
diff_mavg10 = pd.rolling_mean(difference, window=10)
diff_mavg10.name = 'diff 10d mavg'
# Get the 60 day moving average
diff_mavg60 = pd.rolling_mean(difference, window=60)
diff_mavg60.name = 'diff 60d mavg'
pd.concat([diff_mavg60, diff_mavg10], axis=1).plot()
# pd.concat([diff_mavg60, diff_mavg10, difference], axis=1).plot()
Explanation: Simple Strategy:
Go "Long" the spread whenever the z-score is below -1.0
Go "Short" the spread when the z-score is above 1.0
Exit positions when the z-score approaches zero
Since we originally defined the "spread" as S1-S2, "Long" the spread would mean "Buy 1 share of S1, and Sell Short 1 share of S2" (and vice versa if you were going "Short" the spread)
This is just the tip of the iceberg, and only a very simplistic example to illustrate the concepts. In practice you would want to compute a more optimal weighting for how many shares to hold for S1 and S2. Some additional resources on pair trading are listed at the end of this notebook
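To make those rules concrete, here is a minimal, illustrative sketch of turning a z-score series into long (+1), short (-1), or flat (0) positions; the helper name and the entry/exit thresholds are assumptions for illustration only:
def zscore_positions(zscores, entry=1.0, exit_level=0.5):
    # +1 = long the spread (buy S1, short S2), -1 = short the spread, 0 = flat
    positions = []
    position = 0
    for z in zscores:
        if position == 0 and z < -entry:
            position = 1       # spread unusually low: go long the spread
        elif position == 0 and z > entry:
            position = -1      # spread unusually high: go short the spread
        elif position != 0 and abs(z) < exit_level:
            position = 0       # z-score back near zero: exit
        positions.append(position)
    return positions
In a real backtest you would also want to size the two legs (for example with a hedge ratio) rather than trading one share of each, as noted above.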
Trading using constantly updating statistics
Def: Moving Average
A moving average is just an average over the last $n$ datapoints for each given time. It will be undefined for the first $n-1$ datapoints in our series, since a full window of data isn't available yet.
End of explanation
# Take a rolling 60 day standard deviation
std_60 = pd.rolling_std(difference, window=60)
std_60.name = 'std 60d'
# Compute the z score for each day
zscore_60_10 = (diff_mavg10 - diff_mavg60)/std_60
zscore_60_10.name = 'z-score'
zscore_60_10.plot()
plt.axhline(0, color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
Explanation: We can use the moving averages to compute the z-score of the difference at each given time. This will tell us how extreme the difference is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
End of explanation
two_stocks = securities_panel.loc['price'][['ABGB', 'FSLR']]
# Plot the prices scaled down along with the negative z-score
# just divide the stock prices by 10 to make viewing it on the plot easier
pd.concat([two_stocks/10, zscore_60_10], axis=1).plot()
Explanation: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the differences were all negative and that's kinda confusing.
End of explanation |
13,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying refutation tests to the Lalonde and IHDP datasets
Import the Dependencies
Step1: Loading the Dataset
Infant Health and Development Program Dataset (IHDP)
The measurements used are on the child—birth weight, head circumference, weeks born preterm, birth order, first born, neonatal health index (see Scott and Bauer 1989), sex, twin status—as well as behaviors engaged in during the pregnancy—smoked cigarettes, drank alcohol, took drugs—and measurements on the mother at the time she gave birth—age, marital status, educational attainment (did not graduate from high school, graduated from high school, attended some college but did not graduate, graduated from college), whether she worked during pregnancy, whether she received prenatal care—and the site (8 total) in which the family resided at the start of the intervention. There are 6 continuous covariates and 19 binary covariates.
Reference
Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1), 217-240. https
Step2: Lalonde Dataset
A data frame with 445 observations on the following 12 variables.
age
Step3: Step 1
Step4: Lalonde
Step5: Step 2
Step6: Lalonde
Step7: Step 3
Step8: Lalonde
Step9: Step 4
Step10: Replace Treatment with Placebo
Step11: Remove Random Subset of Data
Step12: Lalonde
Add Random Common Cause
Step13: Replace Treatment with Placebo
Step14: Remove Random Subset of Data | Python Code:
import dowhy
from dowhy import CausalModel
import pandas as pd
import numpy as np
# Config dict to set the logging level
import logging.config
DEFAULT_LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'loggers': {
'': {
'level': 'WARN',
},
}
}
logging.config.dictConfig(DEFAULT_LOGGING)
# Disabling warnings output
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
Explanation: Applying refutation tests to the Lalonde and IHDP datasets
Import the Dependencies
End of explanation
data = pd.read_csv("https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/IHDP/csv/ihdp_npci_1.csv", header = None)
col = ["treatment", "y_factual", "y_cfactual", "mu0", "mu1" ,]
for i in range(1,26):
col.append("x"+str(i))
data.columns = col
data = data.astype({"treatment":'bool'}, copy=False)
data.head()
Explanation: Loading the Dataset
Infant Health and Development Program Dataset (IHDP)
The measurements used are on the child—birth weight, head circumference, weeks born preterm, birth order, first born, neonatal health index (see Scott and Bauer 1989), sex, twin status—as well as behaviors engaged in during the pregnancy—smoked cigarettes, drank alcohol, took drugs—and measurements on the mother at the time she gave birth—age, marital status, educational attainment (did not graduate from high school, graduated from high school, attended some college but did not graduate, graduated from college), whether she worked during pregnancy, whether she received prenatal care—and the site (8 total) in which the family resided at the start of the intervention. There are 6 continuous covariates and 19 binary covariates.
Reference
Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1), 217-240. https://doi.org/10.1198/jcgs.2010.08162
End of explanation
from rpy2.robjects import r as R
from os.path import expanduser
home = expanduser("~")
%reload_ext rpy2.ipython
# %R install.packages("Matching")
%R library(Matching)
%R data(lalonde)
%R -o lalonde
lalonde = lalonde.astype({'treat':'bool'}, copy=False)
lalonde.head()
Explanation: Lalonde Dataset
A data frame with 445 observations on the following 12 variables.
age:
age in years.
educ:
years of schooling.
black:
indicator variable for blacks.
hisp:
indicator variable for Hispanics.
married:
indicator variable for martial status.
nodegr:
indicator variable for high school diploma.
re74:
real earnings in 1974.
re75:
real earnings in 1975.
re78:
real earnings in 1978.
u74:
indicator variable for earnings in 1974 being zero.
u75:
indicator variable for earnings in 1975 being zero.
treat:
an indicator variable for treatment status.
References
Dehejia, Rajeev and Sadek Wahba. 1999.``Causal Effects in Non-Experimental Studies: Re-Evaluating the Evaluation of Training Programs.'' Journal of the American Statistical Association 94 (448): 1053-1062.
LaLonde, Robert. 1986. ``Evaluating the Econometric Evaluations of Training Programs.'' American Economic Review 76:604-620.
End of explanation
# Create a causal model from the data and given common causes
common_causes = []
for i in range(1, 26):
common_causes += ["x"+str(i)]
ihdp_model = CausalModel(
data=data,
treatment='treatment',
outcome='y_factual',
common_causes=common_causes
)
ihdp_model.view_model(layout="dot")
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
Explanation: Step 1: Building the model
IHDP
End of explanation
lalonde_model = CausalModel(
data=lalonde,
treatment='treat',
outcome='re78',
common_causes='nodegr+black+hisp+age+educ+married'.split('+')
)
lalonde_model.view_model(layout="dot")
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
Explanation: Lalonde
End of explanation
#Identify the causal effect for the ihdp dataset
ihdp_identified_estimand = ihdp_model.identify_effect(proceed_when_unidentifiable=True)
print(ihdp_identified_estimand)
Explanation: Step 2: Identification
IHDP
End of explanation
#Identify the causal effect for the lalonde dataset
lalonde_identified_estimand = lalonde_model.identify_effect(proceed_when_unidentifiable=True)
print(lalonde_identified_estimand)
Explanation: Lalonde
End of explanation
ihdp_estimate = ihdp_model.estimate_effect(
ihdp_identified_estimand,
method_name="backdoor.propensity_score_weighting"
)
print("The Causal Estimate is " + str(ihdp_estimate.value))
Explanation: Step 3: Estimation (using propensity score weighting)
IHDP
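For reference, this kind of estimator is based on inverse propensity weighting, which in its standard form computes $\widehat{ATE} = \frac{1}{n}\sum_i \left[ \frac{T_i Y_i}{\hat{e}(X_i)} - \frac{(1 - T_i)\,Y_i}{1 - \hat{e}(X_i)} \right]$, where $T_i$ is the treatment indicator, $Y_i$ the outcome, and $\hat{e}(X_i)$ the propensity score (the estimated probability of treatment given the common causes).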
End of explanation
lalonde_estimate = lalonde_model.estimate_effect(
lalonde_identified_estimand,
method_name="backdoor.propensity_score_weighting"
)
print("The Causal Estimate is " + str(lalonde_estimate.value))
Explanation: Lalonde
End of explanation
ihdp_refute_random_common_cause = ihdp_model.refute_estimate(
ihdp_identified_estimand,
ihdp_estimate,
method_name="random_common_cause"
)
print(ihdp_refute_random_common_cause)
Explanation: Step 4: Refutation
IHDP
Add Random Common Cause
End of explanation
ihdp_refute_placebo_treatment = ihdp_model.refute_estimate(
ihdp_identified_estimand,
ihdp_estimate,
method_name="placebo_treatment_refuter",
placebo_type="permute"
)
print(ihdp_refute_placebo_treatment)
Explanation: Replace Treatment with Placebo
End of explanation
ihdp_refute_random_subset = ihdp_model.refute_estimate(
ihdp_identified_estimand,
ihdp_estimate,
method_name="data_subset_refuter",
subset_fraction=0.8
)
print(ihdp_refute_random_subset)
Explanation: Remove Random Subset of Data
End of explanation
lalonde_refute_random_common_cause = lalonde_model.refute_estimate(
lalonde_identified_estimand,
lalonde_estimate,
method_name="random_common_cause"
)
print(lalonde_refute_random_common_cause)
Explanation: Lalonde
Add Random Common Cause
End of explanation
lalonde_refute_placebo_treatment = lalonde_model.refute_estimate(
lalonde_identified_estimand,
lalonde_estimate,
method_name="placebo_treatment_refuter",
placebo_type="permute"
)
print(lalonde_refute_placebo_treatment)
Explanation: Replace Treatment with Placebo
End of explanation
lalonde_refute_random_subset = lalonde_model.refute_estimate(
lalonde_identified_estimand,
lalonde_estimate,
method_name="data_subset_refuter",
subset_fraction=0.9
)
print(lalonde_refute_random_subset)
Explanation: Remove Random Subset of Data
End of explanation |
13,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
10 Secret trigonometry functions you never heard of!
Are you intrigued? Was the heading sufficiently click-baity? See
Step1: Approach 2
Step2: Speed test
Step3: Roughly 13 X slower than using C math directly
Remember, we used types for everything. What about just plain Python? | Python Code:
%%cython -a
# cython: boundscheck=False
from math import sin, cos
cdef inline double versine(double x):
return 1.0 - cos(x)
def versine_array_py(double[:] x):
cdef int i, n = x.shape[0]
for i in range(n):
x[i] = versine(x[i])
Explanation: 10 Secret trigonometry functions you never heard of!
Are you intrigued? Was the heading sufficiently click-baity? See:
https://en.wikipedia.org/wiki/Versine and this blog
Let's make a Cython library for them!
Versine: versin(θ)=1-cos(θ)
Vercosine: vercosin(θ)=1+cos(θ)
Coversine: coversin(θ)=1-sin(θ)
Covercosine: covercosine(θ)=1+sin(θ)
Haversine: haversin(θ)=versin(θ)/2
Havercosine: havercosin(θ)=vercosin(θ)/2
Hacoversine: hacoversin(θ)=coversin(θ)/2
Hacovercosine: hacovercosin(θ)=covercosin(θ)/2
Exsecant: exsec(θ)=sec(θ)-1
Excosecant: excsc(θ)=csc(θ)-1
Approach 1: Using math from Python
Note: this is a Cython cell, and all variables are typed. This should be very fast
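For later sanity checks of the Cython versions, a plain-Python reference for a few of the other identities (just a sketch, using only the standard library and the definitions listed above) could be:
from math import sin, cos

def coversine(x):
    return 1.0 - sin(x)

def haversine(x):
    return (1.0 - cos(x)) / 2.0

def exsecant(x):
    return 1.0 / cos(x) - 1.0  # sec(x) = 1/cos(x)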
End of explanation
%%cython -a
# cython: boundscheck=False
from libc.math cimport sin, cos
cdef inline double versine(double x):
return 1.0 - cos(x)
def versine_array_cy(double[:] x):
cdef int i, n = x.shape[0]
for i in range(n):
x[i] = versine(x[i])
Explanation: Approach 2: Using math from the C Standard Library
This code is <u>exactly the same</u> as Approach 1; the only difference is that sin and cos are cimported from the C standard library (libc.math) instead of Python's math module.
End of explanation
import numpy
data = numpy.random.rand(10000)
%timeit versine_array_py(data)
data = numpy.random.rand(10000)
%timeit versine_array_cy(data)
Explanation: Speed test
End of explanation
from math import cos
def versine_array_pyonly(x):
for i in range(len(x)):
x[i] = 1 - cos(x[i])
data = numpy.random.rand(10000)
%timeit versine_array_pyonly(data)
Explanation: Roughly 13 X slower than using C math directly
Remember, we used types for everything. What about just plain Python?
End of explanation |
13,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Pclass', ["Age > 10", "Sex == 'female'"])
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female' and passenger['Pclass'] < 3:
predictions.append(1)
elif passenger['Age'] < 6:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
13,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display of Rich Output
In Python, objects can declare their textual representation using the __repr__ method.
Step1: Overriding the __repr__ method
Step2: IPython expands on this idea and allows objects to declare other, rich representations including
Step3: A few points
Step4: Images
To work with images (JPEG, PNG) use the Image class.
Step5: Returning an Image object from an expression will automatically display it
Step6: An image can also be displayed from raw data or a URL.
Step8: HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
Step9: You can also use the %%html cell magic to accomplish the same thing.
Step10: You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
Step11: Pass a string of JavaScript source code to the JavaScript object and then display it.
Step12: The same thing can be accomplished using the %%javascript cell magic
Step14: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.
Step15: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
Step16: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs
Step17: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load
Step18: External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page
Step19: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object
Step20: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well. | Python Code:
class Ball(object):
pass
b = Ball()
b.__repr__()
print(b)
Explanation: Display of Rich Output
In Python, objects can declare their textual representation using the __repr__ method.
End of explanation
class Ball(object):
def __repr__(self):
return 'TEST'
b = Ball()
print(b)
Explanation: Overriding the __repr__ method:
End of explanation
from IPython.display import display
Explanation: IPython expands on this idea and allows objects to declare other, rich representations including:
HTML
JSON
PNG
JPEG
SVG
LaTeX
A single object can declare some or all of these representations; all of them are handled by IPython's display system.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
Explanation: A few points:
Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.
If you want to display a particular representation, there are specific functions for that:
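As a small illustration, these functions can be called directly; passing raw=True tells IPython that the argument is already formatted data rather than an object to be formatted:
display_pretty("plain-text representation only", raw=True)
display_html("<b>HTML representation only</b>", raw=True)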
End of explanation
from IPython.display import Image
i = Image(filename='./ipython-image.png')
display(i)
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
i
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
Image(url='http://python.org/images/python-logo.gif')
Explanation: An image can also be displayed from raw data or a URL.
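For the raw-data case, a minimal sketch (reusing the PNG file loaded earlier, assuming it is still in the working directory) would be:
with open('./ipython-image.png', 'rb') as f:
    png_bytes = f.read()
Image(data=png_bytes, format='png')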
End of explanation
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
Explanation: HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
%%html
<style>
#notebook {
background-color: skyblue;
font-family: times new roman;
}
</style>
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
from IPython.display import Javascript
Explanation: You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
js = Javascript('alert("hi")');
display(js)
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation
%%javascript
alert("hi");
Explanation: The same thing can be accomplished using the %%javascript cell magic:
End of explanation
Javascript(
"$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')"
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("./flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.
End of explanation
from IPython.display import Audio
Audio("./scrubjay.mp3")
Explanation: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
import numpy as np
max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
times = np.linspace(0,L,rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs:
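With the values used in this cell, the beat (amplitude envelope) frequency is simply the difference of the two tones, $|f_2 - f_1| = 124 - 120 = 4$ Hz, so you should hear about four loudness swells per second.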
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
End of explanation
from IPython.display import IFrame
IFrame('https://ipython.org', width='100%', height=350)
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
End of explanation
from IPython.display import FileLink, FileLinks
FileLink('../Visualization/Matplotlib.ipynb')
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
FileLinks('./')
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation |
13,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to "Doing Science" in Python for REAL Beginners
Python is one of many languages you can use for research and HW purposes. In the next few days, we will work through many of the tools, tips, and tricks that we as graduate students (and PhD researchers) use on a daily basis. We will NOT attempt to teach you all of Python--there isn't time. We will however build up a set of code(s) that will allow you to read and write data, make beautiful publish-worthy plots, fit a line (or any function) to data, and set up algorithms. You will also begin to learn the syntax of Python and can hopefully apply this knowledge to your current and future work.
Before we begin, a few words on navigating the iPython Notebook
Step1: Unfortunately, the output of your calculations won't be saved anywhere, so you can't use them later in your calculations.
There's a way to get around this
Step2: You can also write over variables with new values, but your previous values will be gone.
Step3: Next, let's create a list of numbers
Step4: How many elements or numbers does the list numList contain? Yes, this is easy to count now, but you will eventually work with lists that contains MANY numbers. To get the length of a list, use len().
Step5: You can also access particular elements in an array by indexing. The syntac for this is the following
Step6: How would you access the number 5 in numList?
Step7: Let's try making more complicated list
Step8: Now you know the basics of Python, let's see how it can be used as a graphing calculator
B. Our first plot!
Python is a fantastic language because it is very powerful and flexible. Also, it is like modular furniture or modular building. You have the Python foundation and choose which modules you want/need and load them before you start working. One of the most loved here at UMD is the matplotlib (https
Step9: When using modules (also sometimes called libraries or packages) you can use a nickname through the as keyword so you don't have to type the long module name every time. For example, matplotlib.pyplot is typically shortened to plt like below.
Step10: Now let's do a quick simple plot using the list we defined earlier!
Step11: You can change a lot of attributes about plots, like the style of the line, the color, and the thickness of the line. You can add titles, axis labels, and legends. You can also put more than one line on the same plot. This link includes all the ways you can modify plots
Step12: EXERCISE 1
Step13: C. Logic, If/Else, and Loops
Let's now switch gears a bit and discuss logic in Python. Conditional (logic) statements form the backbone of programming. These statements in Python return either True or False and have a special name in programming
Step14: Think of the statement $x<y$ as asking the question "is x less than y?" If it is, then it returns True and if x is not less than y it returns False.
Step15: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
There are three logical operators that are handy to know
Step16: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
An if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true.
Here's an example. Play around with the variables x and y to see what happens.
Step17: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Note that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.)
You also do not always need an "else" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement.
Here is an example of such a case. Play around with it to see what happens when you change the values of x and y.
Step18: Here's a more complicated case. Here, we introduce some logic that helps you figure out if two objects are equal or not.
There's the == operator and the != operator. Can you figure out what they mean?
Step19: While-loops are similar to if statements, in the sense that they also have a conditional statement built into them. The code inside the loop will execute when the conditional is True. And then it will check the conditional and, if it evaluates to True, the code will execute again. And so on and so forth...
The funny thing about while-loops is that they will KEEP executing that segment of code until the conditional statement evaluates to False...which hopefully will happen...right?
Although this seems a bit strange, you can get the hang of it!
For example, let's say we want Python to count from 1 to 10.
Step20: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
It is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i.
Step21: Now we want to combine lists with loops! You can use the dummy variable as a way to access a value in the list through its index. In exercise 1 we asked you to square the elements in a given list by hand, let's now do it by using a loop.
In Python, the command to square something is **. So 3**2 will give you 9.
Step22: Isn't that much easier than squaring everything by hand? Loops are your friends in programming and will make menial, reptitive tasks go by very quickly.
All this information with logic, loops, and lists may be confusing, but you will get the hang of it with practice! And by combining these concepts, your programming in Python can be very powerful. Let's try an example where we use an if/then nested inside of a loop by finding how many times the number 2 shows up in the following list. Remember that indentation is very important in Python!
Step23: Notice how the indentation is set up. What happens if you indent the print statement? How about removing the indentation on the if statement? Play around with it so you get the hang of indentation in nested code.
Before you continue on to your second exercise, there is one more type of loop we should introduce
Step24: See how the for-loop prints the elements of the list without you having to specifically access them or even add 1 to your dummy variable? This is the magic of a for-loop! Python will automatically "assign" the first element in the list to the variable i, print it, then go to the next element until the list ends. You can produce the same output with the below while-loop
Step25: It's a bit more effort than the for-loop. However, we will encourage you to use while-loops when you're beginning programming so you can keep track of your dummy variables and clearly see what ends the loop. It's essential to practice good programming habits right when you start!
EXERCISE 2 | Python Code:
## You can use Python as a calculator:
5*7 #This is a comment and does not affect your code.
#You can have as many as you want.
#Comments help explain your code to others and yourself
#No worries.
5+7
5-7
5/7
Explanation: Introduction to "Doing Science" in Python for REAL Beginners
Python is one of many languages you can use for research and HW purposes. In the next few days, we will work through many of the tools, tips, and tricks that we as graduate students (and PhD researchers) use on a daily basis. We will NOT attempt to teach you all of Python--there isn't time. We will however build up a set of code(s) that will allow you to read and write data, make beautiful publish-worthy plots, fit a line (or any function) to data, and set up algorithms. You will also begin to learn the syntax of Python and can hopefully apply this knowledge to your current and future work.
Before we begin, a few words on navigating the iPython Notebook:
There are two main types of cells: Code and Text
In "code" cells, "#" at the beginning of a line marks the line as a comment
In "code" cells every non-commented line is interpreted
In "code" cells, commands that are preceded by % are "magics" and are special commands in IPython to add some functionality to the runtime interactive environment.
Shift+Return shortcut to execute a cell
Alt+Return shortcut to execute a cell and create another one below
Here you can find a complete documentation about the notebook.
http://ipython.org/ipython-doc/1/interactive/notebook.html
In particular have a look at the section about the keyboard shortcuts.
And remember that :
Indentation has a meaning (we'll talk about this when we cover loops)
Indexes start from 0
We will discuss more about these concepts while doing things. Let's get started now!!!!
A. Numbers, Calculations, and Lists
Before we start coding, let's play around with the Jupyter environment. Make a new cell below using the Alt+Return shortcut
Take your newly created cell and write something in it. Switch the type of the cell between a code cell and a text/markdown cell by using the selection box in the top of the screen. See how it changes?
Insert a comment to yourself (this is always a great idea) by using the # symbol.
End of explanation
a = 10
b = 7
print(a)
print(b)
print(a*b , a+b, a/b)
Explanation: Unfortunately, the output of your calculations won't be saved anywhere, so you can't use them later in your calculations.
There's a way to get around this: by assigning them to variables. A variable is a way of referring to a memory location used by a computer program that can contain values, text, or even more complicated types. Think of variables as containers to store something so you can use or change it later. Variables can be a single letter (like x or y) but they are usually more helpful when they have descriptive names (like age, stars, total_sum).
Let's assign some variables and print() them to the screen.
End of explanation
a = 5
b = 7
print(a*b, a+b, a/b)
Explanation: You can also write over variables with new values, but your previous values will be gone.
End of explanation
numList = [0,1,2,3,4,5,6,7,8,9]
print(numList)
Explanation: Next, let's create a list of numbers
End of explanation
L = len(numList)
print(L)
Explanation: How many elements or numbers does the list numList contain? Yes, this is easy to count now, but you will eventually work with lists that contain MANY numbers. To get the length of a list, use len().
End of explanation
numList[4]
Explanation: You can also access particular elements in an array by indexing. The syntax for this is the following:
numList[index_number]
This will return the value in the list that corresponds to the index number. For example, to get the element at index 4 you would type:
numList[4]
Arrays are numbered starting from 0, such that
First position = 0
Second position = 1
Third position = 2
etc.
It is a bit confusing, but after a bit of time, this becomes quite natural. Try accessing elements of the list you just created:
End of explanation
# your code here
x = numList[5]
print(x)
Explanation: How would you access the number 5 in numList?
End of explanation
fibList = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
fibList[5]
Explanation: Let's try making a more complicated list:
End of explanation
# Run this code
%matplotlib inline
# this "magic" command puts the plots right in the jupyter notebook
import matplotlib
Explanation: Now that you know the basics of Python, let's see how it can be used as a graphing calculator.
B. Our first plot!
Python is a fantastic language because it is very powerful and flexible. Also, it is like modular furniture or modular building. You have the Python foundation and choose which modules you want/need and load them before you start working. One of the most loved here at UMD is the matplotlib (https://matplotlib.org/), which provides lots of functionality for making beautiful, publishable plots.
End of explanation
# Run this code
import matplotlib.pyplot as plt
Explanation: When using modules (also sometimes called libraries or packages) you can use a nickname through the as keyword so you don't have to type the long module name every time. For example, matplotlib.pyplot is typically shortened to plt like below.
End of explanation
x = numList
y = numList
p = plt.plot(x,y)
Explanation: Now let's do a quick simple plot using the list we defined earlier!
End of explanation
# Clear the plotting field.
plt.clf() # No need to add anything inside these parentheses.
# First line
plt.plot(x, y, color = 'blue', linestyle = '-', linewidth = 1, label = 'num')
# Second line
z = fibList
# you can shorten the keywords like "color" to be just "c" for quicker typing
plt.plot(x, z, c = 'r', ls = '--', lw = 3, label = 'fib')
# add the labels and titles
plt.xlabel('x values')
plt.ylabel('y values')
plt.title('My First Plot')
plt.legend(loc = 'best')
#Would you like to save your plot? Uncomment the below line. Here, we use savefig('nameOffigure')
#It should save to the folder you are currently working out of.
#plt.savefig('MyFirstFigure.jpg')
Explanation: You can change a lot of attributes about plots, like the style of the line, the color, and the thickness of the line. You can add titles, axis labels, and legends. You can also put more than one line on the same plot. This link includes all the ways you can modify plots: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html. Here is a quick example showing a few of the things you can do with matplotlib:
End of explanation
# defining lists
list1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
list2 = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
plt.clf()
plt.plot(list1, list2, c = 'purple', ls = '-.', lw = 2, label = 'Sqr')
plt.xlabel('x values')
plt.ylabel('y values')
plt.title('Exercise 1')
plt.legend(loc = 'best')
plt.savefig('exercise1')
Explanation: EXERCISE 1:
Create two lists of numbers: list1 will be the integers from 0 to 9 and list2 will be the elements of list1 squared.
Plot the two lists with matplotlib and make some changes to the color, linestyle, or linewidth.
Add labels, a title, and a legend to your plot.
Save the plot once you are done.
Be creative and feel free to look up the different linestyles using the link above.
End of explanation
#Example conditional statements
x = 1
y = 2
x<y #x is less than y
Explanation: C. Logic, If/Else, and Loops
Let's now switch gears a bit and discuss logic in Python. Conditional (logic) statements form the backbone of programming. These statements in Python return either True or False and have a special name in programming: Booleans. Sometimes this type of logic is also called Boolean logic.
End of explanation
#x is greater than y
x>y
#x is less-than or equal to y
x<=y
#x is greater-than or equal to y
x>=y
Explanation: Think of the statement $x<y$ as asking the question "is x less than y?" If it is, then it returns True and if x is not less than y it returns False.
End of explanation
#Example of and operator
(1<2) and (2<3)
#Example of or operator
(1<2) or (2>3)
#Example of not operator
not(1<2)
Explanation: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
There are three logical operators that are handy to know:
And operator: a and b
outputs True only if both a and b are True
Or operator: a or b
outputs True if at least one of a and b are True
Not operator: not(a)
outputs the negation of a
End of explanation
x = 1
y = 2
if (x < y):
print("Yup, totally true!")
else:
print("Nope, completely wrong!")
Explanation: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
An if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true.
Here's an example. Play around with the variables x and y to see what happens.
End of explanation
x = 2
y = 1
if (x > y):
print("x is greater than y")
Explanation: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Note that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.)
You also do not always need an "else" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement.
Here is an example of such a case. Play around with it to see what happens when you change the values of x and y.
End of explanation
x = 2
y = 2
if (x == y):
print("x and y are equal")
if (x != y):
print("x and y are not equal")
if (x > y or x < y):
print("x and y are not equal (again!)")
Explanation: Here's a more complicated case. Here, we introduce some logic that helps you figure out if two objects are equal or not.
There's the == operator and the != operator. Can you figure out what they mean?
End of explanation
x = 1
while (x <= 10):
print(x)
x = x+1
Explanation: While-loops are similar to if statements, in the sense that they also have a conditional statement built into them. The code inside the loop will execute when the conditional is True. And then it will check the conditional and, if it evaluates to True, the code will execute again. And so on and so forth...
The funny thing about while-loops is that they will KEEP executing that segment of code until the conditional statement evaluates to False...which hopefully will happen...right?
Although this seems a bit strange, you can get the hang of it!
For example, let's say we want Python to count from 1 to 10.
End of explanation
x = 2
i = 0 #dummy variable
while (i<10):
x = 2*x
print(x)
i = i+1
Explanation: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
It is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i.
End of explanation
myList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# we want to end the loop at the end of the list i.e., the length of the list
end = len(myList)
i = 0
while i < end:
num = myList[i]
print(num**2)
i = i + 1
Explanation: Now we want to combine lists with loops! You can use the dummy variable as a way to access a value in the list through its index. In exercise 1 we asked you to square the elements in a given list by hand; let's now do it using a loop.
In Python, the command to square something is **. So 3**2 will give you 9.
End of explanation
twoList = [2, 5, 6, 2, 4, 1, 5, 7, 3, 2, 5, 2]
# this variable will count up how many times the number 2 appears in the above list
count = 0
end = len(twoList)
i = 0
while i < end:
if twoList[i] == 2:
count = count + 1
i = i + 1
print(count)
Explanation: Isn't that much easier than squaring everything by hand? Loops are your friends in programming and will make menial, repetitive tasks go by very quickly.
All this information with logic, loops, and lists may be confusing, but you will get the hang of it with practice! And by combining these concepts, your programming in Python can be very powerful. Let's try an example where we use an if statement nested inside of a loop by finding how many times the number 2 shows up in the following list. Remember that indentation is very important in Python!
End of explanation
grades = [94, 83, 71, 78, 88, 90]
for i in grades:
print(i)
Explanation: Notice how the indentation is set up. What happens if you indent the print statement? How about removing the indentation on the if statement? Play around with it so you get the hang of indentation in nested code.
Before you continue on to your second exercise, there is one more type of loop we should introduce: the for-loop. Instead of using a conditional to end the loop, the for-loop will iterate over the elements in a sequence or list. See the below example:
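Beyond lists, a for-loop can also step through a range of numbers directly, which is handy for counting. A quick extra sketch (not one of the notebook's exercises) that counts from 1 to 10, much like the earlier while-loop:
for i in range(1, 11):
    print(i)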
End of explanation
grades = [94, 83, 71, 78, 88, 90]
end = len(grades)
i = 0
while i < end:
print(grades[i])
i = i + 1
Explanation: See how the for-loop prints the elements of the list without you having to specifically access them or even add 1 to your dummy variable? This is the magic of a for-loop! Python will automatically "assign" the first element in the list to the variable i, print it, then go to the next element until the list ends. You can produce the same output with the below while-loop:
End of explanation
x = [True, True, False, False]
y = [True, False, True, False]
print('x and y')
i = 0
while i < len(x):
print (x[i] and y[i])
i = i+1
print('x or y')
i = 0
while i < len(y):
print (x[i] or y[i])
i = i+1
Explanation: It's a bit more effort than the for-loop. However, we will encourage you to use while-loops when you're beginning programming so you can keep track of your dummy variables and clearly see what ends the loop. It's essential to practice good programming habits right when you start!
EXERCISE 2: Truth Table
A truth table is a way of showing the True and False values of different operations. They typically start with values for two variables and then find the result when they are combined with different operators. Using two separate while-loops, generate two columns of a truth table for the lists x and y, defined below. That is, find the values for each element in the list for x and y in one loop and then the values of x or y in another. Note: you can always check your answer by doing it in your head! Checking your work is a good habit to have for the future :)
End of explanation |
13,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading (and Writing) Files (without Pandas)
This notebook is going to help introduce how to work with text files in Python without relying on the underlying magic of Pandas. In addition to being interesting and useful in many cases, understanding what is happening behind the scenes inside Pandas can help you identify what may be going wrong if you run into trouble while reading csv or txt files.
We will proceed as follows
Step1: The output of each iteration of the for-loop is a string which contains the entire line of the file.
Writing
We gave two letters for writing access
Step2: Notice that all of the file contents that were printed before ("First line of file", "Second line of file", etc...) were deleted because we used w when we wrote. Notice what happens when we simply append to our file.
Step3: Notice that when we used a as our permissions that it simply added new text to the end of the file when we wrote. We will now return the file to its original state so that we can run this file again. | Python Code:
f = open('filespython.txt', 'r')
for line in f:
print(line)
f.close()
Explanation: Reading (and Writing) Files (without Pandas)
This notebook is going to help introduce how to work with text files in Python without relying on the underlying magic of Pandas. In addition to being interesting and useful in many cases, understanding what is happening behind the scenes inside Pandas can help you identify what may be going wrong if you run into trouble while reading csv or txt files.
We will proceed as follows:
First, we will show how to read a file into Python.
Secondly, we will show how to interact (read/write) with its contents.
Finally, we will show how this can be useful in getting and cleaning data from the internet
Interacting with Files in Python
Python uses the function open to open a file so that Python is able to see it. Many of the things that you are able to do to the file correspond directly to what you might do if you had opened the file in a text editor or in Excel. When you open a file, you will need to specify the level of access that you need to the file. Python has the following access levels:
Reading (r): Open the file with only enough permissions to read it. This is the default level of permission and will probably be the most used
Writing (w or a): Open the file with enough permissions for me to change the file.
Creating and Writing (x): Create a new file with the specified name and open it for me with write access.
When you finish interacting with a file, it is important to remember to close it so that you don't accidentally do anything to the file after you're done with it. A workflow for interacting with a file should look like this:
```python
f = open('myfile.txt', 'r')
# Do Stuff to file
f.close()
```
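As a quick aside (standard Python, although the examples in this notebook stick with explicit close() calls), the with statement will close the file for you automatically, even if an error happens partway through:
```python
with open('myfile.txt', 'r') as f:
    for line in f:
        print(line)
# f is closed automatically once the block ends
```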
We will give an example of how to use each of these permissions.
Reading
We will first illustrate how we can read a file. When a file is read by Python, it brings the file in with a variety of methods. Our typical use will be to read a file line-by-line, so that is how we will start.
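To give a flavour of those methods (a minimal sketch, reusing the same filespython.txt file as the examples here): f.read() returns the entire file as one string, f.readlines() returns a list with one string per line, and looping over f yields one line at a time.
f = open('filespython.txt', 'r')
whole_file = f.read()        # the entire file as a single string
f.close()
f = open('filespython.txt', 'r')
line_list = f.readlines()    # a list of lines
f.close()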
Python will allow us to iterate over the lines of a file within a for-loop. We illustrate this below.
End of explanation
f = open('filespython.txt', 'w')
f.write('This is another line\n')
f.close()
f = open('filespython.txt', 'r')
print(f.read())
f.close()
Explanation: The output of each iteration of the for-loop is a string which contains the entire line of the file.
Writing
We gave two letters for writing access: a and w. The difference between these two lies in where anything we write to the file will be placed. If we use w then anything in the file will be deleted when we write new material to the file. If we use a (for append) then anything written will be placed at the end of the file. We can illustrate this below
End of explanation
f = open('filespython.txt', 'a')
f.write('This is another line\n')
f.close()
f = open('filespython.txt', 'r')
print(f.read())
f.close()
Explanation: Notice that all of the file contents that were printed before ("First line of file", "Second line of file", etc...) were deleted because we used w when we wrote. Notice what happens when we simply append to our file.
End of explanation
# Open file
f = open('filespython.txt', 'w')
# Will use this string in each line so create it first
lof = " line of file\n"
for currline in ["First", "Second", "Third", "Last"]:
f.write(currline + lof)
f.close()
f = open('filespython.txt', 'r')
print(f.read())
f.close()
Explanation: Notice that when we used a as our permissions that it simply added new text to the end of the file when we wrote. We will now return the file to its original state so that we can run this file again.
End of explanation |
13,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sampling from a transformed parameter space
This example shows you how to run (and compare) Bayesian inference using a transformed parameter space.
Searching in a transformed space can improve performance (and robustness) of many sampling methods, and make some methods applicable to problems that cannot otherwise be tackled.
Unlike transforming error measures, for probability density functions (PDFs), in order to make sure the probability for any arbitrary interval within the PDFs is conserved under parameter transformations, we cannot simply transform the model parameters by using a model wrapper.
We need what is called the Jacobian adjustment to 'correct' the transformed PDFs or rather to ensure this conservation (as explained in pints.TransformedLogPDF).
An example notebook here shows how things can go wrong with a naive wrapper without the Jacobian adjustment.
All of these (easy-to-miss) adjustments are done behind the scenes by our pints.Transformation and pints.MCMCController as shown in this example.
We start by loading a pints.ForwardModel implementation, in this case a logistic model.
Step1: We then define some parameters and set up the problem for the Bayesian inference.
Step2: In this example, we will pick some considerably difficult starting points for the MCMC chains.
Step3: Let's run an Adaptive Covariance MCMC without doing any parameter transformation to check its performance.
Step4: The MCMC samples are not ideal, because we've started the MCMC run from some difficult starting points.
We can use MCMCSummary to inspect the efficiency of the MCMC run.
Step5: Now, we create a pints.Transformation object for log-transformation and re-run the MCMC to see if it makes any difference.
Step6: The MCMC samples using parameter transformation look very similar to those in another example notebook in which we had some good starting points and no parameter transformation.
This is a good sign! It suggests the transformation did not mess anything up.
Now we check the efficiency again | Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Set some random seed so this notebook can be reproduced
np.random.seed(10)
# Load a forward model
model = toy.LogisticModel()
Explanation: Sampling from a transformed parameter space
This example shows you how to run (and compare) Bayesian inference using a transformed parameter space.
Searching in a transformed space can improve performance (and robustness) of many sampling methods, and make some methods applicable to problems that cannot otherwise be tackled.
Unlike transforming error measures, for probability density functions (PDFs), in order to make sure the probability for any arbitrary interval within the PDFs is conserved under parameter transformations, we cannot simply transform the model parameters by using a model wrapper.
We need what is called the Jacobian adjustment to 'correct' the transformed PDFs or rather to ensure this conservation (as explained in pints.TransformedLogPDF).
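Concretely, this is just the standard change-of-variables rule (stated here for reference, not code from PINTS): if $q = g(\theta)$ is the transformed parameter, then $$\log p_q(q) = \log p_\theta\big(g^{-1}(q)\big) + \log\left|\frac{\mathrm{d}\,g^{-1}(q)}{\mathrm{d}q}\right|,$$ and for the log-transformation used below, $q = \log(\theta)$, the correction term is simply $\log|\mathrm{d}\theta/\mathrm{d}q| = q$.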
An example notebook here shows how things can go wrong with a naive wrapper without the Jacobian adjustment.
All of these (easy-to-miss) adjustments are done behind the scenes by our pints.Transformation and pints.MCMCController as shown in this example.
We start by loading a pints.ForwardModel implementation, in this case a logistic model.
End of explanation
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.001, 10, noise*0.1],
[1.0, 1000, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
Explanation: We then define some parameters and set up the problem for the Bayesian inference.
End of explanation
# Choose starting points for 3 mcmc chains
xs = [
[0.7, 20, 2],
[0.005, 900, 100],
[0.01, 100, 500],
]
Explanation: In this example, we will pick some considerably difficult starting points for the MCMC chains.
End of explanation
# Create mcmc routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm up
chains = chains[:, 2000:, :]
# Look at distribution across all chains
pints.plot.pairwise(np.vstack(chains), kde=False, parameter_names=[r'$r$', r'$K$', r'$\sigma$'])
# Show graphs
plt.show()
Explanation: Let's run an Adaptive Covariance MCMC without doing any parameter transformation to check its performance.
End of explanation
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=["r", "k", "sigma"])
print(results)
Explanation: The MCMC samples are not ideal, because we've started the MCMC run from some difficult starting points.
We can use MCMCSummary to inspect the efficiency of the MCMC run.
End of explanation
# Create parameter transformation
transformation = pints.LogTransformation(n_parameters=len(xs[0]))
# Create mcmc routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs,
method=pints.HaarioBardenetACMC,
transformation=transformation)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm up
chains = chains[:, 2000:, :]
# Look at distribution across all chains
pints.plot.pairwise(np.vstack(chains), kde=False, parameter_names=[r'$r$', r'$K$', r'$\sigma$'])
# Show graphs
plt.show()
Explanation: Now, we create a pints.Transformation object for log-transformation and re-run the MCMC to see if it makes any difference.
End of explanation
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=["r", "k", "sigma"])
print(results)
Explanation: The MCMC samples using parameter transformation look very similar to those in another example notebook in which we had some good starting points and no parameter transformation.
This is a good sign! It suggests the transformation did not mess anything up.
Now we check the efficiency again:
End of explanation |
13,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tally Arithmetic
This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
Step1: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
Step2: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step6: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
Step7: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
Step8: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step9: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
Step10: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step11: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
Step12: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-eff as
Step13: Notice that even though the neutron production rate, absorption rate, and current are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Often in textbooks you'll see k-eff represented using the six-factor formula $$k_{eff} = p \epsilon f \eta P_{FNL} P_{TNL}.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T + \langle L \rangle_T}{\langle\Sigma_a\phi\rangle + \langle L \rangle_T}$$ where the subscript $T$ means thermal energies.
Step14: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
Step15: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
Step16: The next factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
Step17: There are two leakage factors to account for fast and thermal leakage. The fast non-leakage probability is computed as $$P_{FNL} = \frac{\langle \Sigma_a\phi \rangle + \langle L \rangle_T}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
Step18: The final factor is the thermal non-leakage probability and is computed as $$P_{TNL} = \frac{\langle \Sigma_a\phi \rangle_T}{\langle \Sigma_a \phi \rangle_T + \langle L \rangle_T}$$
Step19: Now we can calculate $k_{eff}$ using the product of the factors from the six-factor formula.
Step20: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
Step21: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
Step22: The same idea can be used not only for scores but also for filters and nuclides.
Step23: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format. | Python Code:
import glob
from IPython.display import Image
import numpy as np
import openmc
Explanation: Tally Arithmetic
This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-100., boundary_type='vacuum')
max_z = openmc.ZPlane(z0=+100., boundary_type='vacuum')
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six planes.
End of explanation
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 20
inactive = 5
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -100., 0.63, 0.63, 100.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
End of explanation
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color_by = 'material'
# Show plot
openmc.plot_inline(plot)
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Create Tallies to compute microscopic multi-group cross-sections
# Instantiate energy filter for multi-group cross-section Tallies
energy_filter = openmc.EnergyFilter([0., 0.625, 20.0e6])
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='flux')
tally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]
tally.filters.append(energy_filter)
tally.scores = ['flux']
tallies_file.append(tally)
# Instantiate reaction rate Tally in fuel
tally = openmc.Tally(name='fuel rxn rates')
tally.filters = [openmc.CellFilter(fuel_cell)]
tally.filters.append(energy_filter)
tally.scores = ['nu-fission', 'scatter']
tally.nuclides = ['U238', 'U235']
tallies_file.append(tally)
# Instantiate reaction rate Tally in moderator
tally = openmc.Tally(name='moderator rxn rates')
tally.filters = [openmc.CellFilter(moderator_cell)]
tally.filters.append(energy_filter)
tally.scores = ['absorption', 'total']
tally.nuclides = ['O16', 'H1']
tallies_file.append(tally)
# Instantiate a tally mesh
mesh = openmc.RegularMesh(mesh_id=1)
mesh.dimension = [1, 1, 1]
mesh.lower_left = [-0.63, -0.63, -100.]
mesh.width = [1.26, 1.26, 200.]
meshsurface_filter = openmc.MeshSurfaceFilter(mesh)
# Instantiate thermal, fast, and total leakage tallies
leak = openmc.Tally(name='leakage')
leak.filters = [meshsurface_filter]
leak.scores = ['current']
tallies_file.append(leak)
thermal_leak = openmc.Tally(name='thermal leakage')
thermal_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0., 0.625])]
thermal_leak.scores = ['current']
tallies_file.append(thermal_leak)
fast_leak = openmc.Tally(name='fast leakage')
fast_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0.625, 20.0e6])]
fast_leak.scores = ['current']
tallies_file.append(fast_leak)
# K-Eigenvalue (infinity) tallies
fiss_rate = openmc.Tally(name='fiss. rate')
abs_rate = openmc.Tally(name='abs. rate')
fiss_rate.scores = ['nu-fission']
abs_rate.scores = ['absorption']
tallies_file += (fiss_rate, abs_rate)
# Resonance Escape Probability tallies
therm_abs_rate = openmc.Tally(name='therm. abs. rate')
therm_abs_rate.scores = ['absorption']
therm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625])]
tallies_file.append(therm_abs_rate)
# Thermal Flux Utilization tallies
fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')
fuel_therm_abs_rate.scores = ['absorption']
fuel_therm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625]),
openmc.CellFilter([fuel_cell])]
tallies_file.append(fuel_therm_abs_rate)
# Fast Fission Factor tallies
therm_fiss_rate = openmc.Tally(name='therm. fiss. rate')
therm_fiss_rate.scores = ['nu-fission']
therm_fiss_rate.filters = [openmc.EnergyFilter([0., 0.625])]
tallies_file.append(therm_fiss_rate)
# Instantiate energy filter to illustrate Tally slicing
fine_energy_filter = openmc.EnergyFilter(np.logspace(np.log10(1e-2), np.log10(20.0e6), 10))
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='need-to-slice')
tally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]
tally.filters.append(fine_energy_filter)
tally.scores = ['nu-fission', 'scatter']
tally.nuclides = ['H1', 'U238']
tallies_file.append(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
End of explanation
# Run OpenMC!
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint('statepoint.20.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
End of explanation
# Get the fission and absorption rate tallies
fiss_rate = sp.get_tally(name='fiss. rate')
abs_rate = sp.get_tally(name='abs. rate')
# Get the leakage tally
leak = sp.get_tally(name='leakage')
leak = leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)
# Compute k-infinity using tally arithmetic
keff = fiss_rate / (abs_rate + leak)
keff.get_pandas_dataframe()
Explanation: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-eff as:
$$k_{eff} = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
In this notation, $\langle \cdot \rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.
End of explanation
# Compute resonance escape probability using tally arithmetic
therm_abs_rate = sp.get_tally(name='therm. abs. rate')
thermal_leak = sp.get_tally(name='thermal leakage')
thermal_leak = thermal_leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)
res_esc = (therm_abs_rate + thermal_leak) / (abs_rate + thermal_leak)
res_esc.get_pandas_dataframe()
Explanation: Notice that even though the neutron production rate, absorption rate, and current are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
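As a rough sketch of where that uncertainty estimate comes from (first-order propagation, using the independence assumption stated at the top of this notebook): for a derived quantity $x = f/g$, $$\left(\frac{\sigma_x}{x}\right)^2 \approx \left(\frac{\sigma_f}{f}\right)^2 + \left(\frac{\sigma_g}{g}\right)^2,$$ with the analogous sum of squared absolute uncertainties for a sum like $f + g$.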
Often in textbooks you'll see k-eff represented using the six-factor formula $$k_{eff} = p \epsilon f \eta P_{FNL} P_{TNL}.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T + \langle L \rangle_T}{\langle\Sigma_a\phi\rangle + \langle L \rangle_T}$$ where the subscript $T$ means thermal energies.
End of explanation
# Compute fast fission factor using tally arithmetic
therm_fiss_rate = sp.get_tally(name='therm. fiss. rate')
fast_fiss = fiss_rate / therm_fiss_rate
fast_fiss.get_pandas_dataframe()
Explanation: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
End of explanation
# Compute thermal flux utilization factor using tally arithmetic
fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')
therm_util = fuel_therm_abs_rate / therm_abs_rate
therm_util.get_pandas_dataframe()
Explanation: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
End of explanation
# Compute neutrons produced per absorption (eta) using tally arithmetic
eta = therm_fiss_rate / fuel_therm_abs_rate
eta.get_pandas_dataframe()
Explanation: The next factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
End of explanation
p_fnl = (abs_rate + thermal_leak) / (abs_rate + leak)
p_fnl.get_pandas_dataframe()
Explanation: There are two leakage factors to account for fast and thermal leakage. The fast non-leakage probability is computed as $$P_{FNL} = \frac{\langle \Sigma_a\phi \rangle + \langle L \rangle_T}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
End of explanation
p_tnl = therm_abs_rate / (therm_abs_rate + thermal_leak)
p_tnl.get_pandas_dataframe()
Explanation: The final factor is the thermal non-leakage probability and is computed as $$P_{TNL} = \frac{\langle \Sigma_a\phi \rangle_T}{\langle \Sigma_a \phi \rangle_T + \langle L \rangle_T}$$
End of explanation
keff = res_esc * fast_fiss * therm_util * eta * p_fnl * p_tnl
keff.get_pandas_dataframe()
Explanation: Now we can calculate $k_{eff}$ using the product of the factors from the six-factor formula.
End of explanation
# Compute microscopic multi-group cross-sections
flux = sp.get_tally(name='flux')
flux = flux.get_slice(filters=[openmc.CellFilter], filter_bins=[(fuel_cell.id,)])
fuel_rxn_rates = sp.get_tally(name='fuel rxn rates')
mod_rxn_rates = sp.get_tally(name='moderator rxn rates')
fuel_xs = fuel_rxn_rates / flux
fuel_xs.get_pandas_dataframe()
Explanation: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
End of explanation
# Show how to use Tally.get_values(...) with a CrossScore
nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])
print(nu_fiss_xs)
Explanation: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
End of explanation
# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide
u235_scatter_xs = fuel_xs.get_values(nuclides=['(U235 / total)'],
scores=['(scatter / flux)'])
print(u235_scatter_xs)
# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore
fast_scatter_xs = fuel_xs.get_values(filters=[openmc.EnergyFilter],
filter_bins=[((0.625, 20.0e6),)],
scores=['(scatter / flux)'])
print(fast_scatter_xs)
Explanation: The same idea can be used not only for scores but also for filters and nuclides.
End of explanation
# "Slice" the nu-fission data into a new derived Tally
nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])
nu_fission_rates.get_pandas_dataframe()
# "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally
need_to_slice = sp.get_tally(name='need-to-slice')
slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H1'],
filters=[openmc.CellFilter], filter_bins=[(moderator_cell.id,)])
slice_test.get_pandas_dataframe()
Explanation: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.
End of explanation |
13,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Development of Python HDBSCAN Compared to the Reference Implementation in Java
Or, why I still use Python for high performance scientific computing
Python is a great high level language for easily expressing ideas, but people don't tend to think of it as a high performance language; for that you would want a compiled language -- ideally C or C++ but Java would do. This notebook started out as a simple benchmarking of the hdbscan clustering library written in Python against the reference implementation written in Java. It still does that, but it has expanded into an explanation of why I choose to use Python for performance critical scientific computing code.
Some quick background
In 2013 Campello, Moulavi and Sander published a paper on a new clustering algorithm that they called HDBSCAN. In mid-2014 I was doing some general research on the current state of clustering, particularly with regard to exploratory data analysis. At the time DBSCAN or OPTICS appeared to be the most promising algorithm available. A colleague ran across the HDBSCAN paper in her literature survey, and suggested we look into how well it performed. We spent an afternoon learning the algorithm and coding it up and found that it gave remarkably good results for the range of test data we had. Things stayed in that state for some time, with the intention being to use a good HDBSCAN implementation when one became available. By early 2015 our needs for clustering grew and, having no good implementation of HDBSCAN to hand, I set about writing our own. Since the first version, coded up in an afternoon, had been in python I stuck with that choice -- but obviously performance might be an issue. In July 2015, after our implementation was well underway Campello, Moulavi and Sander published a new HDBSCAN paper, and released Java code to peform HDBSCAN clustering. Since one of our goals had been to get good scaling it became necessary to see how our python version compared to the high performance reference implementation in Java.
This is the story of how our codebase evolved and was optimized, and how it compares with the Java version at different stages of that journey.
To make the comparisons we'll need data on runtimes of both algorithms, ranging over dataset size, and dataset dimension. To save time and space I've done that work in another notebook and will just load the data in here.
Step1: Why I chose Python
Step2: Next we'll join together the reference timings with v0.1 hdbscan library timings, keeping track of which implementation is which, so that we can fit tidily into the seaborn library's lmplot routine.
Step3: And now we plot the results. First we plot the raw timings ranging over increasing dataset sizes, using a different plot for datasets of different dimensions. Below that we have the log/log plots of the same.
(Double click on plots to make them larger)
Step4: The result is perhaps a little surprising if you haven't worked with numpy that much
Step5: With a little optimization via Cython, the 0.3 version of hdbscan was now outperforming the reference implementation in Java! In fact in dimension 2 the hdbscan library is getting close to being an order of magnitude faster. It would seem that python isn't such a poor choice for performant code after all ...
Why I chose Python
Step6: Now we are really starting to really separate out from the reference implementation. Only in the higher dimensional cases can you even see separation between the hdbscan library line and the x-axis. In the log/log plots we can see the difference really show, especially in low dimensions. The $O(N\log N)$ performance isn't showing up there though, so obviously we may still have some work to do.
Why I chose Python
Step7: Now we can see a real difference in slopes in the log/log plot, with the implementation performance diverging in log scale for large dataset sizes (particularly in dimension 2). By the time we are dealing with datasets of size $10^5$ the python implementation is two orders of magnitude faster in dimension two! And that is only going to get better for the python implementation as we scale to larger and larger data sizes.
But there's still more -- there are still performance gains to be had for the python implementation, some to be delivered in the 0.6 release. | Python Code:
import pandas as pd
import numpy as np
reference_timing_series = pd.read_csv('reference_impl_external_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
hdbscan_v01_timing_series = pd.read_csv('hdbscan01_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
hdbscan_v03_timing_series = pd.read_csv('hdbscan03_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
hdbscan_v04_timing_series = pd.read_csv('hdbscan04_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
hdbscan_v05_timing_series = pd.read_csv('hdbscan05_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
hdbscan_v06_timing_series = pd.read_csv('hdbscan06_timings.csv',
index_col=(0,1), names=('dim', 'size', 'time'))['time']
Explanation: The Development of Python HDBSCAN Compared to the Reference Implementation in Java
Or, why I still use Python for high performance scientific computing
Python is a great high level language for easily expressing ideas, but people don't tend to think of it as a high performance language; for that you would want a compiled language -- ideally C or C++ but Java would do. This notebook started out as a simple benchmarking of the hdbscan clustering library written in Python against the reference implementation written in Java. It still does that, but it has expanded into an explanation of why I choose to use Python for performance critical scientific computing code.
Some quick background
In 2013 Campello, Moulavi and Sander published a paper on a new clustering algorithm that they called HDBSCAN. In mid-2014 I was doing some general research on the current state of clustering, particularly with regard to exploratory data analysis. At the time DBSCAN or OPTICS appeared to be the most promising algorithm available. A colleague ran across the HDBSCAN paper in her literature survey, and suggested we look into how well it performed. We spent an afternoon learning the algorithm and coding it up and found that it gave remarkably good results for the range of test data we had. Things stayed in that state for some time, with the intention being to use a good HDBSCAN implementation when one became available. By early 2015 our needs for clustering grew and, having no good implementation of HDBSCAN to hand, I set about writing our own. Since the first version, coded up in an afternoon, had been in python I stuck with that choice -- but obviously performance might be an issue. In July 2015, after our implementation was well underway Campello, Moulavi and Sander published a new HDBSCAN paper, and released Java code to peform HDBSCAN clustering. Since one of our goals had been to get good scaling it became necessary to see how our python version compared to the high performance reference implementation in Java.
This is the story of how our codebase evolved and was optimized, and how it compares with the Java version at different stages of that journey.
To make the comparisons we'll need data on runtimes of both algorithms, ranging over dataset size, and dataset dimension. To save time and space I've done that work in another notebook and will just load the data in here.
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context('poster')
%matplotlib inline
Explanation: Why I chose Python: Easy development
The very first implementation of HDBSCAN that we did was coded up in an afternoon, and that code was in python. Why? There were a few reasons; the test data for clustering was already loaded in a notebook (we were using sklearn for testing many of the different clustering algorithms available); the notebook interface itself was very useful for the sort of iterative "what did we get at the end of this step" coding that occurs when you are both learning and coding a new algorithm at the same time; most of all though, python made the development easy. As a high level language, python simply made development that much easier by getting out of the way -- instead of battling with the language we could focus on battling with our understanding of the algorithm.
Easy development comes at a cost of course. That initial experimental implementation was terribly slow, taking thirty seconds or more to cluster only a few thousand points. That was to be expected to some extent: we were still learning and understanding the algorithm, and hence implemented things in a very literal and naive way. The benefit was in being able to get a working implementation put together well enough to test of real data and see the results -- because it was the remarkable promise of those results that made us pick HDBSCAN as the ideal clustering algorithm for exploratory data analysis.
Why I chose Python: Great libraries
When push came to shove and it was decided that we needed to just write an implementation of HDBSCAN, I stuck with python. This was done despite the fact that the initial naive implementation was essentially abandoned, and a fresh start made. What motivated the decision this time? The many great libraries available for python. For a start there is numpy which provides access to highly optimized numerical array operations -- if you can phrase things in vectorized numpy operations then things will run fast, and the library is sufficiently flexible and expressive that it is usually not hard to phrase your problem that way. Next there is scipy and the excellent sklearn libraries. By inheriting from the sklearn estimator and cluster base classes (and making use of associated sklearn functions) the initial implementation supported input validation and conversion for a wide variety of input formats, a huge range of distance functions, and a standardised calling API all for practically free. In the early development stages I also benefitted from the power of libraries like pandas and networkx which provided easy and efficient access to database-like functionality and graphs and graph analytics.
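As a rough illustration of what that buys you (a toy sketch, not the actual hdbscan code):
from sklearn.base import BaseEstimator, ClusterMixin
from sklearn.utils import check_array
import numpy as np

class ToyClusterer(BaseEstimator, ClusterMixin):
    def __init__(self, min_cluster_size=5):
        self.min_cluster_size = min_cluster_size

    def fit(self, X, y=None):
        X = check_array(X)                           # input validation handled by sklearn
        self.labels_ = np.zeros(len(X), dtype=int)   # placeholder "clustering"
        return self
Inheriting from BaseEstimator and ClusterMixin is what provides the standardised get_params/set_params and fit_predict behaviour essentially for free.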
When you combine that with easy checking and comparison with the naive implementation within the original clustering evaluation notebooks it just made a great deal of sense. With powerful and optimized libraries like numpy and sklearn doing the heavy lifting and a less naive implementation, hopefully performance wouldn't suffer too much...
We can compare that initial implementation with the reference implementation. First we need to load some plotting libraries so we can visualize the results.
End of explanation
reference_series = pd.DataFrame(reference_timing_series.copy()).reset_index()
reference_series['implementation'] = 'Reference Implementation'
hdbscan_series = pd.DataFrame(hdbscan_v01_timing_series.copy()).reset_index()
hdbscan_series['implementation'] = 'hdbscan library'
hdbscan_series.columns = ('dim', 'size', 'time', 'implementation')
combined_data = pd.concat([reference_series, hdbscan_series])
combined_data['log(time)'] = np.log10(combined_data.time)
combined_data['log(size)'] = np.log10(combined_data['size'])
Explanation: Next we'll join together the reference timings with v0.1 hdbscan library timings, keeping track of which implementation is which, so that we can fit tidily into the seaborn library's lmplot routine.
End of explanation
base_plot = sns.lmplot(x='size', y='time', hue='implementation', col='dim',
data=combined_data.reset_index(), order=2, size=5)
base_plot.set_xticklabels(np.arange(8)*20000, rotation=75)
log_plot = sns.lmplot(x='log(size)', y='log(time)', hue='implementation', col='dim',
data=combined_data.reset_index(), size=5)
Explanation: And now we plot the results. First we plot the raw timings ranging over increasing dataset sizes, using a different plot for datasets of different dimensions. Below that we have the log/log plots of the same.
(Double click on plots to make them larger)
End of explanation
reference_series = pd.DataFrame(reference_timing_series.copy()).reset_index()
reference_series['implementation'] = 'Reference Implementation'
hdbscan_series = pd.DataFrame(hdbscan_v03_timing_series.copy()).reset_index()
hdbscan_series['implementation'] = 'hdbscan library'
hdbscan_series.columns = ('dim', 'size', 'time', 'implementation')
combined_data = pd.concat([reference_series, hdbscan_series])
combined_data['log(time)'] = np.log10(combined_data.time)
combined_data['log(size)'] = np.log10(combined_data['size'])
base_plot = sns.lmplot(x='size', y='time', hue='implementation', col='dim',
data=combined_data.reset_index(), order=2, size=5)
base_plot.set_xticklabels(np.arange(8)*20000, rotation=75)
log_plot = sns.lmplot(x='log(size)', y='log(time)', hue='implementation', col='dim',
data=combined_data.reset_index(), size=5)
Explanation: The result is perhaps a little surprising if you haven't worked with numpy that much: the python code doesn't scale as well as the Java code, but it is not all that far off -- in fact when working with 25 or 50 dimensional data it is actually faster!
Why I chose Python: Cython for bottlenecks
At this point in development I was still unaware of the reference implementation, and was comapring performance with other clustering algorithms such as single linkage, DBSCAN, and K-Means. From that point of view the hdbscan library still performed very poorly, and certainly didn't scale out to the size of datasets I potentially wanted to cluster. That meant it was time to roll up my sleeves and see if I could wring some further optimizations out. This is where the next win for python came: the easy gradient toward C. While numpy provided easy python access to fast routines written in C, not everything sat entirely within numpy. On the other hand Cython provided an easy way to take my existing working python implementation and simply decorate it with C type information to allow the Cython compiler to generate efficient C code from my python. This allowed me get fast performance without having to rewrite anything -- I could modify the existing python code and be sure everything was still working (by running in the now familiar cluster testing notebooks). Better still, I only needed to spend effort on those parts of the code that were significant bottlenecks, everything else could simply remain as it was (the Cython compiler will happily work with pure undecorated python code).
I should point out that I could equally well have done similar things using Numba, but I had more familiarity with Cython at the time and it fit in better with sklearn practice and dependencies.
So, how did performance look at around this point? Let's have a look...
End of explanation
reference_series = pd.DataFrame(reference_timing_series.copy()).reset_index()
reference_series['implementation'] = 'Reference Implementation'
hdbscan_series = pd.DataFrame(hdbscan_v04_timing_series.copy()).reset_index()
hdbscan_series['implementation'] = 'hdbscan library'
hdbscan_series.columns = ('dim', 'size', 'time', 'implementation')
combined_data = pd.concat([reference_series, hdbscan_series])
combined_data['log(time)'] = np.log10(combined_data.time)
combined_data['log(size)'] = np.log10(combined_data['size'])
base_plot = sns.lmplot(x='size', y='time', hue='implementation', col='dim',
data=combined_data.reset_index(), order=2, size=5)
base_plot.set_xticklabels(np.arange(8)*20000, rotation=75)
log_plot = sns.lmplot(x='log(size)', y='log(time)', hue='implementation', col='dim',
data=combined_data.reset_index(), size=5)
Explanation: With a little optimization via Cython, the 0.3 version of hdbscan was now outperforming the reference implementation in Java! In fact in dimension 2 the hdbscan library is getting close to being an order of magnitude faster. It would seem that python isn't such a poor choice for performant code after all ...
Why I chose Python: Because ultimately algorithms matter more
While I had been busy optimizing code, I had also been doing research on the side. The core of the HDBSCAN algorithm relied on a modified single linkage algorithm, which in turn relied on something akin to a minimum spanning tree algorithm. The optimal algorithm for that, according to the literature, is Prims algorithm. The catch is that this is an optimal choice for graphs where the number of edges is usually some (small) constant multiple of the number of vertices. The graph problem for HDBSCAN is a weighted complete graph with $N^2$ edges! That means that in practice we are stuck with an $O(N^2)$ algorithm. Other people, however, had been looking at what can be done when dealing with minimum spanning trees in the pathological case of complete graphs, and as long as you can embed your points into a metric space it turns out that there are other options. A paper by March, Ram and Gray described an algorithm using kd-trees that had $O(N \log N)$ complexity for small dimensional data.
Faced with the difference between $O(N^2)$ and $O(N\log N)$ for large $N$ the choice of development language becomes much less significant -- what matters more is getting that $O(N\log N)$ algorithm implemented. Fortunately python made that easy. As in the case of the first naive versions of HDBSCAN, the notebook provided an excellent interface for exploratory interactive development while learning the algorithm. As in step two, great libraries made a difference: sklearn comes equipped with high performance implementations of kd-trees and ball trees that I could simpy make use of. Finally, as in step three, once I had a decent algorithm, I could turn to Cython to tighten up the bottlenecks and make it fast.
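To put rough numbers on that difference: for $N = 10^5$ points a complete graph has $N^2 = 10^{10}$ edges to consider, while $N\log_2 N$ is only around $1.7\times 10^6$ -- a factor of several thousand, and the gap only widens as $N$ grows.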
What sort of performance did we achieve with new algorithms?
End of explanation
reference_series = pd.DataFrame(reference_timing_series.copy()).reset_index()
reference_series['implementation'] = 'Reference Implementation'
hdbscan_series = pd.DataFrame(hdbscan_v05_timing_series.copy()).reset_index()
hdbscan_series['implementation'] = 'hdbscan library'
hdbscan_series.columns = ('dim', 'size', 'time', 'implementation')
combined_data = pd.concat([reference_series, hdbscan_series])
combined_data['log(time)'] = np.log10(combined_data.time)
combined_data['log(size)'] = np.log10(combined_data['size'])
base_plot = sns.lmplot(x='size', y='time', hue='implementation', col='dim',
data=combined_data.reset_index(), order=2, size=5)
base_plot.set_xticklabels(np.arange(8)*20000, rotation=75)
log_plot = sns.lmplot(x='log(size)', y='log(time)', hue='implementation', col='dim',
data=combined_data.reset_index(), size=5)
Explanation: Now we are really starting to really separate out from the reference implementation. Only in the higher dimensional cases can you even see separation between the hdbscan library line and the x-axis. In the log/log plots we can see the difference really show, especially in low dimensions. The $O(N\log N)$ performance isn't showing up there though, so obviously we may still have some work to do.
Why I chose Python: Because it makes optimization easy
The 0.4 release was a huge step forward in performance, but the $O(N\log N)$ scaling I was expecting to see (at least for low dimensions) wasn't apparent. Now the problem became tracking down where exactly time was being spent that perhaps it shouldn't be. Again python provided some nice benefits here. I was already doing my testing and benchmarking in the notebook (for the sake of plotting the benchmarks if nothing else). Merely adding %prun or %lprun to the top of cells got me profiling and even line level profiling information quickly and easily. From there it was easy to see that portions of code I had previously left written in very simple naive forms because they had negligible impact on performance were now, suddenly, a bottleneck. Going back to Cython, and particularly making use of reports produced by cython -a which provide excellent information about how your python code is being converted to C, it was not hard to speed up these routines. The result was the 0.5 release with performance below:
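For instance, a cell along these lines (illustrative only -- data here stands in for whatever dataset is being benchmarked):
%%prun
import hdbscan
clusterer = hdbscan.HDBSCAN()
clusterer.fit(data)
Line-by-line numbers come from %lprun in the same way once the line_profiler extension is loaded.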
End of explanation
reference_series = pd.DataFrame(reference_timing_series.copy()).reset_index()
reference_series['implementation'] = 'Reference Implementation'
hdbscan_series = pd.DataFrame(hdbscan_v06_timing_series.copy()).reset_index()
hdbscan_series['implementation'] = 'hdbscan library'
hdbscan_series.columns = ('dim', 'size', 'time', 'implementation')
combined_data = pd.concat([reference_series, hdbscan_series])
combined_data['log(time)'] = np.log10(combined_data.time)
combined_data['log(size)'] = np.log10(combined_data['size'])
base_plot = sns.lmplot(x='size', y='time', hue='implementation', col='dim',
data=combined_data.reset_index(), order=2, size=5)
base_plot.set_xticklabels(np.arange(8)*20000, rotation=75)
log_plot = sns.lmplot(x='log(size)', y='log(time)', hue='implementation', col='dim',
data=combined_data.reset_index(), size=5)
Explanation: Now we can see a real difference in slopes in the log/log plot, with the implementation performance diverging in log scale for large dataset sizes (particularly in dimension 2). By the time we are dealing with datasets of size $10^5$ the python implementation is two orders of magnitude faster in dimension two! And that is only going to get better for the python implementation as we scale to larger and larger data sizes.
But there's still more -- there are still performance gains to be had for the python implementation, some to be delivered in the 0.6 release.
End of explanation |
13,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download hourly EPA emissions
Step1: Final script
This downloads hourly data from the ftp server over a range of years, and saves all of the file names/last update times in a list. The downloads can take some time depending on how much data is being retrieved.
Some of the code below assumes that we only need to retrieve new or modified files. If you are retrieving this data for the first time, create an empty dataframe named already_downloaded with column names file name and last updated.
Step2: Export file names and update timestamp
Step3: Check number of columns and column names
Step4: This takes ~100 ms per file.
Step5: From the table below, recent years always have the units after an emission name. Before 2009 some files have the units and some don't. UNITID is consistent through all years, but UNIT_ID was added in after 2008 (not the same thing).
Step6: Correct column names and export all files (1 file per year)
Also convert the OP_DATE column to a datetime object
Using joblib for this.
Joblib on Windows requires the if __name__ == '__main__': statement. | Python Code:
import io, time, json
import requests
from bs4 import BeautifulSoup
import pandas as pd
import urllib, urllib2
import re
import os
import numpy as np
import ftplib
from ftplib import FTP
import timeit
Explanation: Download hourly EPA emissions
End of explanation
# Replace the filename with whatever csv stores already downloaded file info
path = os.path.join('EPA downloads', 'name_time 2015-2016.csv')
already_downloaded = pd.read_csv(path, parse_dates=['last updated'])
# Uncomment the line below to create an empty dataframe
# already_downloaded = pd.DataFrame(columns=['file name', 'last updated'])
already_downloaded.head()
# Timestamp
start_time = timeit.default_timer()
name_time_list = []
# Open ftp connection and navigate to the correct folder
print 'Opening ftp connection'
ftp = FTP('ftp.epa.gov')
ftp.login()
ftp.cwd('/dmdnload/emissions/hourly/monthly')
for year in [2015, 2016, 2017]:
print year
year_str = str(year)
print 'Change directory to', year_str
try:
ftp.cwd(year_str)
except ftplib.all_errors as e:
print e
break
# Use ftplib to get the list of filenames
print 'Fetch filenames'
fnames = ftp.nlst()
# Create new directory path if it doesn't exist
new_path = os.path.join('EPA downloads', year_str)
try:
os.mkdir(new_path)
except:
pass
# Look for files without _HLD in the name
name_list = []
time_list = []
print 'Find filenames without _HLD and time last updated'
for name in fnames:
if '_HLD' not in name:
try:
# The ftp command "MDTM" asks what time a file was last modified
# It returns a code and the date/time
# If the file name isn't already downloaded, or the time isn't the same
tm = pd.to_datetime(ftp.sendcmd('MDTM '+ name).split()[-1])
if name not in already_downloaded['file name'].values:
time_list.append(tm)
name_list.append(name)
elif already_downloaded.loc[already_downloaded['file name']==name, 'last updated'].values[0] != tm:
tm = ftp.sendcmd('MDTM '+ name)
time_list.append(pd.to_datetime(tm.split()[-1]))
name_list.append(name)
except ftplib.all_errors as e:
print e
# If ftp.sendcmd didn't work, assume the connection was lost
ftp = FTP('ftp.epa.gov')
ftp.login()
ftp.cwd('/dmdnload/emissions/hourly/monthly')
ftp.cwd(year_str)
tm = ftp.sendcmd('MDTM '+ name)
time_list.append(pd.to_datetime(tm.split()[-1]))
name_list.append(name)
# Store all filenames and update times
print 'Store names and update times'
name_time_list.extend(zip(name_list, time_list))
# Download and store data
print 'Downloading data'
for name in name_list:
try:
with open(os.path.join('EPA downloads', year_str, name), 'wb') as f:
ftp.retrbinary('RETR %s' % name, f.write)
except ftplib.all_errors as e:
print e
try:
ftp.quit()
except ftplib.all_errors as e:
print e
pass
ftp = FTP('ftp.epa.gov')
ftp.login()
ftp.cwd('/dmdnload/emissions/hourly/monthly')
ftp.cwd(year_str)
with open(os.path.join('EPA downloads', year_str, name), 'wb') as f:
ftp.retrbinary('RETR %s' % name, f.write)
print 'Download finished'
print round((timeit.default_timer() - start_time)/60.0,2), 'min so far'
# Go back up a level on the ftp server
ftp.cwd('..')
# Timestamp
elapsed = round((timeit.default_timer() - start_time)/60.0,2)
print 'Data download completed in %s mins' %(elapsed)
Explanation: Final script
This downloads hourly data from the ftp server over a range of years, and saves all of the file names/last update times in a list. The downloads can take some time depending on how much data is being retrieved.
Some of the code below assumes that we only need to retrieve new or modified files. If you are retrieving this data for the first time, create an empty dataframe named already_downloaded with column names file name and last updated.
End of explanation
name_time_df = pd.DataFrame(name_time_list, columns=['file name', 'last updated'])
name_time_df.head()
len(name_time_df)
path = os.path.join('EPA downloads', 'name_time 2015-2016.csv')
name_time_df.to_csv(path, index=False)
Explanation: Export file names and update timestamp
End of explanation
import csv
import zipfile
import StringIO
from collections import Counter
Explanation: Check number of columns and column names
End of explanation
base_path = 'EPA downloads'
num_cols = {}
col_names = {}
for year in range(2001, 2017):
n_cols_list = []
col_name_list = []
path = os.path.join(base_path, str(year))
fnames = os.listdir(path)
for name in fnames:
csv_name = name.split('.')[0] + '.csv'
fullpath = os.path.join(path, name)
filehandle = open(fullpath, 'rb')
zfile = zipfile.ZipFile(filehandle)
data = StringIO.StringIO(zfile.read(csv_name)) #don't forget this line!
reader = csv.reader(data)
columns = reader.next()
# Add the column names to the large list
col_name_list.extend(columns)
# Add the number of columns to the list
n_cols_list.append(len(columns))
col_names[year] = Counter(col_name_list)
num_cols[year] = Counter(n_cols_list)
Explanation: This takes ~100 ms per file.
End of explanation
pd.DataFrame(col_names)
pd.DataFrame(col_names).index
pd.DataFrame(num_cols)
Explanation: From the table below, recent years always have the units after an emission name. Before 2009 some files have the units and some don't. UNITID is consistent through all years, but UNIT_ID was added after 2008 (not the same thing).
End of explanation
col_name_map = {'CO2_MASS' : 'CO2_MASS (tons)',
'CO2_RATE' : 'CO2_RATE (tons/mmBtu)',
'GLOAD' : 'GLOAD (MW)',
'HEAT_INPUT' : 'HEAT_INPUT (mmBtu)',
'NOX_MASS' : 'NOX_MASS (lbs)',
'NOX_RATE' : 'NOX_RATE (lbs/mmBtu)',
'SLOAD' : 'SLOAD (1000lb/hr)',
'SLOAD (1000 lbs)' : 'SLOAD (1000lb/hr)',
'SO2_MASS' : 'SO2_MASS (lbs)',
'SO2_RATE' : 'SO2_RATE (lbs/mmBtu)'
}
from joblib import Parallel, delayed
from scripts import import_clean_epa
if __name__ == '__main__':
start_time = timeit.default_timer()
base_path = 'EPA downloads'
for year in range(2015, 2017):
print 'Starting', str(year)
df_list = []
path = os.path.join(base_path, str(year))
fnames = os.listdir(path)
df_list = Parallel(n_jobs=-1)(delayed(import_clean_epa)(path, name, col_name_map) for name in fnames)
print 'Combining data'
df = pd.concat(df_list)
print 'Saving file'
path_out = os.path.join('Clean data', 'EPA emissions', 'EPA emissions ' + str(year) + '.csv')
df.to_csv(path_out, index=False)
print round((timeit.default_timer() - start_time)/60.0,2), 'min so far'
# Timestamp
elapsed = round((timeit.default_timer() - start_time)/60.0,2)
Explanation: Correct column names and export all files (1 file per year)
Also convert the OP_DATE column to a datetime object
Using joblib for this.
Joblib on Windows requires the if __name__ == '__main__': statement. And in a Jupyter notebook the function needs to be imported from an external script. I probably should have done the parallel part at a higher level - the longest part is saving the csv files. Could use this method - disable a check - to speed up the process.
Joblib has to be at least version 0.10.0, which is only available through pip - got some errors when using the version installed by conda.
Create a dictionary mapping column names. Any values on the left (keys) should be replaced by values on the right (values).
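For reference, a minimal sketch of what the external scripts/import_clean_epa.py helper might look like -- the real file is not shown in this notebook, so everything beyond the call signature used above is an assumption:
import os
import zipfile
import StringIO
import pandas as pd

def import_clean_epa(path, name, col_name_map):
    # read the single csv inside each downloaded zip file
    csv_name = name.split('.')[0] + '.csv'
    zfile = zipfile.ZipFile(os.path.join(path, name))
    df = pd.read_csv(StringIO.StringIO(zfile.read(csv_name)))
    # harmonize pre-2009 column names and parse the operating date
    df = df.rename(columns=col_name_map)
    df['OP_DATE'] = pd.to_datetime(df['OP_DATE'])
    return df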
End of explanation |
13,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comprehensions
In addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.
List comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line for loop built inside of brackets. For a simple example
Step1: This is the basic idea of a list comprehension. If you're familiar with mathematical notation this format should feel familiar for example
Step2: Example 3
Let's see how to add in if statements
Step3: Example 4
Can also do more complicated arithmetic
Step4: Example 5
We can also perform nested list comprehensions, for example | Python Code:
# Grab every letter in string
lst = [x for x in 'word']
# Check
lst
Explanation: Comprehensions
In addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.
List comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line for loop built inside of brackets. For a simple example:
Example 1
End of explanation
# Square numbers in range and turn into list
lst = [x**2 for x in range(0,11)]
lst
Explanation: This is the basic idea of a list comprehension. If you're familiar with mathematical notation, this format should feel familiar, for example: { x^2 : x in {0, 1, 2, ..., 10} }
Let's see a few more examples of list comprehensions in Python:
Example 2
End of explanation
# Check for even numbers in a range
lst = [x for x in range(11) if x % 2 == 0]
lst
Explanation: Example 3
Let's see how to add in if statements:
End of explanation
# Convert Celsius to Fahrenheit
celsius = [0,10,20.1,34.5]
fahrenheit = [ ((float(9)/5)*temp + 32) for temp in celsius ]
fahrenheit
Explanation: Example 4
Can also do more complicated arithmetic:
End of explanation
lst = [ x**2 for x in [x**2 for x in range(11)]]
lst
Explanation: Example 5
We can also perform nested list comprehensions, for example:
End of explanation |
13,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="width
Step1: Script setup
Step2: Settings
Choose download option
The original data can either be downloaded from the original data sources as specified below or from the OPSD server. The default option is to download from the original sources, as the aim of the project is to stay as close to the original sources as possible. However, if problems with downloads occur, e.g. due to changing URLs, you can still run the script with the original data from the opsd_server.
Step3: Update the download links
The download link for the UK is updated at the end of each quarter by the source provider, BEIS. We keep up with those changes by extracting the download link automatically from the web page it is on. That way, the link does not have to be updated manually.
Note
Step4: Note that, as of August 25, 2020, the following sources are available only from the OPSD server and the data will be downloaded from it even if download_from is set to 'original_sources'
Step5: Set up the NUTS converter
The NUTSConverter class in the util package uses the information on each facility's postcode, municipality name, municipality code, longitude, and latitude to assign it correct NUTS 2016 level 1, 2, and 3 codes.
Here, we instantiate the converter so that we can use it later.
Step6: Setup translation dictionaries
Column and value names of the original data sources will be translated to English and standardized across different sources. Standardized column names, e.g. "electrical_capacity" are required to merge data in one DataFrame.<br>
The column and the value translation lists are provided in the input folder of the Data Package.
Step7: Download and process per country
For one country after the other, the original data is downloaded, read, processed, translated, eventually georeferenced and saved. If respective files are already in the local folder, these will be utilized.
The provided data are processed with pandas DataFrames.<br>
Germany DE
Download and read
The data which will be processed below is provided by the following data sources
Step8: Translate column names
To standardise the DataFrame, the original column names from the German TSOs and the BNetzA will be translated and new English column names will be assigned to the DataFrame. The unique column names are required to merge the DataFrame.<br>
The column_translation_list is provided here as csv in the input folder. It is loaded in 2.3 Setup of translation dictionaries.
Step9: Add information and choose columns
All data source names and, for the BNetzA-PV data, the energy source level 2 will be added.
Step10: Merge DataFrames
The individual DataFrames from the TSOs (Netztransparenz.de) and BNetzA are merged.
Step11: Translate values and harmonize energy source level 2
Different German terms for energy source level 2, energy source level 3, technology and voltage levels are translated and harmonized across the individual data sources. The value_translation_list is provided here as csv in the input folder. It is loaded in 2.3 Setup of translation dictionaries.
Step12: Separate and assign energy source level 1 - 3 and technology
Step13: According to the OPSD energy hierarchy, the power plants whose energy_source_level_2 is either Storage or Other fossil fuels do not belong to the class of renewable-energy facilities. Therefore, we can remove them.
Step14: Summary of DataFrame
Step15: Transform electrical capacity from kW to MW
Step16: Georeferencing
Get coordinates by postcode
(for data with no existing geocoordinates)
The available post code in the original data provides a first approximation for the geocoordinates of the RE power plants.<br>
The BNetzA data provides the full zip code whereas due to data privacy the TSOs only report the first three digits of the power plant's post code (e.g. 024xx) and no address. Subsequently a centroid of the post code region polygon is used to find the coordinates.
With data from
* http
Step17: Merge geometry information by using the postcode
Step18: Transform geoinformation
(for data with already existing geoinformation)
In this section the existing geoinformation (in UTM format) will be transformed into latitude and longitude coordinates as a uniform standard for geoinformation.
The BNetzA data set offers UTM Geoinformation with the columns utm_zone (UTM-Zonenwert), utm_east and utm_north. Most of utm_east-values include the utm_zone-value 32 at the beginning of the number. In order to properly standardize and transform this geoinformation into latitude and longitude it is necessary to remove this utm_zone value. For all UTM entries the utm_zone 32 is used by the BNetzA.
|utm_zone| utm_east| utm_north| comment|
|---|---|---| ----|
|32| 413151.72| 6027467.73| proper coordinates|
|32| 32912159.6008| 5692423.9664| caused error by 32|
How many different utm_zone values are in the data set?
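Before answering that, here is a hedged sketch of the conversion the next steps perform, using the utm package (the zone letter and the length check are assumptions):
import utm
import pandas as pd

def bnetza_utm_to_latlon(row):
    east = str(row['utm_east'])
    if east.startswith('32') and len(east) > 6:  # strip the zone value merged into utm_east
        east = east[2:]
    lat, lon = utm.to_latlon(float(east), float(row['utm_north']),
                             zone_number=32, zone_letter='U')
    return pd.Series({'lat': lat, 'lon': lon})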
Step19: Remove the utm_zone "32" from the utm_east value
Step20: Conversion UTM to latitude and longitude
Step21: Check
Step22: Remove temporary columns
Step23: Save temporary Pickle (to have a point to quickly return to if things break after this point)
Step24: Clean data
Step25: Assign NUTS codes
Step26: Visualize
Step27: Save
The merged, translated and cleaned DataFrame will be saved temporarily as a pickle file, which stores the Python object quickly.
Step28: Denmark DK
Download and read
The data which will be processed below is provided by the following data sources
Step29: The function for reading the data on the wind turbines.
Step30: Translate column names
Step31: Add data source and missing information
Step32: Correct the dates
Some dates in the Energinet dataset are equal to 1970-01-01, which should be NaN instead
Step33: Translate values and harmonize energy source level 2
Step34: Georeferencing
UTM32 to latitude and longitude (Data from Energistyrelsen)
The Energistyrelsen data set offers UTM geoinformation with the columns utm_east and utm_north belonging to the UTM zone 32. In this section the existing geoinformation (in UTM format) will be transformed into latitude and longitude coordinates as a uniform standard for geoinformation.
Step35: Postcode to lat/lon (WGS84)
(for data from Energinet.dk)
The available post code in the original data provides an approximation for the geocoordinates of the solar power plants.<br>
The postcode will be assigned to latitude and longitude coordinates with the help of the postcode table.
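In pandas terms this boils down to a left join -- a hedged sketch with made-up stand-ins for the Energinet records and the postcode table:
import pandas as pd

plants = pd.DataFrame({'postcode': ['8000', '9000'], 'capacity_kW': [30.0, 12.5]})
postcode_latlon = pd.DataFrame({'postcode': ['8000', '9000'],
                                'lon': [10.2039, 9.9217], 'lat': [56.1629, 57.0488]})
plants = plants.merge(postcode_latlon, on='postcode', how='left')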
Step36: Merge DataFrames, add NUTS information and choose columns
Step37: Let us check geoinformation on the facilities for which NUTS codes could not be determined.
Step38: As we see, no information on municipality and latitude/longitude coordinates are present for those power plants, so there was no possibility to assign them their NUTS codes.
Select columns
Step39: Remove duplicate rows
Step40: Transform electrical_capacity from kW to MW
Step41: Visualize
Step42: Save
Step43: France FR
The data which will be processed below is provided by the following data sources
Step44: ODRE data
Load the data
Step45: Translate column names
Step46: Add data source
Step47: Translate values
Step48: Correct site names
Some facilities do not come with their names. Instead, strings such as Agrégation des installations de moins de 36KW, Confidentiel and confidentiel are used. Here, we correct this by setting all such names to np.nan.
Step49: Replace suspicious dates with N/A
The commissioning dates of some solar and wind plants are set in the early 20th and late 19th centuries. We replace those dates with N/A since they do not make sense.
Step50: Check missing values
Now, we will drop all the columns and all the rows which contain only null values.
Step51: As we see above, no column contains only the null value, so we do not need to drop any.
Step52: No row contains only the null values, so no need to for filtering on that basis.
Standardize the energy types and technologies
Now, we proceed with standardizing the energy types and technologies present in the data according to the OPSD energy hierarchy.
Step53: In order to facilitate further processing, we can remove the rows that we know for sure we won't need.
Those are the rows satisfying either of the following conditions
Step54: Standardize source levels 1-3 and technology
Let us see the energy types and technologies present in the filtered data.
Step55: First, let us standardize the values for energy source level 2 and technology.
1. We will use np.nan to indicate that technology should not be specified for the respective kind of sources according to the OPSD hierarchy.
2. 'Other or unspecified technology' will mean that technology should be specified but it was unclear or missing in the original dataset.
That means that we need to apply the following correction rules to the current data
Step56: Let us now deal with the third level of the energy hierarchy. Only Bioenergy has the third level. Information on it can be found in the column energy_source_level_3 (whose original name was combustible).
Step57: We see that only the following two corrections are needed
Step58: Finally, we declare all the plants as renewable and show the final hierarchy.
Step59: Georeferencing
First, we will determine the plants' longitude and latitude coordinates, and then assign them their NUTS codes.
Municipality (INSEE) code to lon/lat
Step60: Determine NUTS codes
Step61: Let us now check the facilities without NUTS classification.
Step62: We see that no row with known longitude and latitude was left unclassified.
What we also see is that some municipality codes did not translate to the corresponding NUTS codes. Further inspection shows that those codes are not present in the official NUTS translation tables.
Step63: We also see that problematic municipality names are either not present in the official translation tables or more than one municipality in the tables bears them.
Step64: Therefore, we can confirm that NUTS classification codes were determined with the highest precision possible.
Convert electrical capacity to MW
Step65: Old data
Step66: This French data source contains number of installations and sum of installed capacity per energy source per municipality. The list is limited to the plants which are covered by article 10 of february 2000 by an agreement to a purchase commitment.
Step67: Add data source
Step68: Translate values and harmonize energy source level 2
Kept secret if number of installations < 3
If the number of installations is less than 3, it is marked with an s instead of the number 1 or 2 due to statistical confidentiality (as explained by the data provider). Here, the s is changed to < 3. This is done in the same step as the other value translations of the energy sources.
Step69: Separate and assign energy source level 1-3 and technology
Step70: Show the hierarchy of the energy types present in the data.
Step71: Georeferencing
Municipality (INSEE) code to lat/lon
Step72: Determine NUTS codes
Step73: As we can see, the NUTS codes were determined successfully for all the facilities in the dataset.
Integrate old and new data
Some municipalities are not covered by the new data set, provided by ODRE. Now, we find those municipalities and integrate them with the new data.
The only column present in the old data, but not in the new, is number_of_installations. Since the old data
were aggregated on the municipality level, the column in question refers to the numbers of power plants in the
municipalities. Since the new data covers individual plants, if we set the column number_of_installations to 1
for all the plants in the new data, we will make the two sets consistent with one another and be able
to concatenate them.
We will set site_name to 'Aggregated data for municipality' for all the rows from the old data, where municipality refers to the name of the municipality for which the row has been compiled.
Note
Step74: Select the columns
Now, we select the columns we want to keep.
Step75: Visualize
Step76: Save
Step77: Poland PL
Download
The data which will be processed below is provided by the following data source
Step78: Load and explore the data
The dataset comes in the csv format. Let us open it, inspect its columns and clean it a bit before processing it further.
Step79: There are only five columns
Step80: To ease the work, we can translate the columns' names to English using the OPSD translation tables.
Step81: Inspect the data
Let us do a few quick checks to see the state of the data
Step82: We can see that each name comes in two forms
Step83: Now, let us check the strings for districts (powiats).
Step84: As we see in the list, the same district can be referred to by more than one string. We identify the following ways a district is referred to in the dataset
Step85: Harmonising energy levels
Step86: Georeferencing (NUTS classification)
We have already seen that the district names are not standardized and observed that we cannot use them directly to get the corresponding NUTS codes.
There is a way to get around this issue. We can do it as follows
Step87: We can now apply a heuristic method for finding the corresponding name in the GeoNames data. It is based on similarity between strings. It turns out that it works fine, except for a couple of cases, which we deal with manually.
Step88: The following districts have not been mapped correctly
Step89: Show the rows for which we could not find postcodes.
Step90: There are only 17 such power plants and all of them are placed in the districts which we deliberately left out for manual classification.
Add NUTS information
We add the NUTS information as usual, using the converter. After that, we manually add the codes for the left-out districts as follows
Step91: Add data source and year
Step92: Select columns
Step93: Save
Step94: Switzerland CH
Download and read
The data which will be processed below is provided by the following data sources
Step95: Translate column names
Step96: Add data source
Step97: Harmonize energy source hierarchy and translate values
Step98: Separate and assign energy source level 1-3 and technology
Step99: The power plants with energy_source_level_3=Biomass and biogas and technology=Steam turbine do not belong to the renewable energy power plants, so we can remove them.
Step100: Replace the rest of the original terms with their OPSD equivalents
Step101: Georeferencing
Postcode to lat/lon (WGS84)
Step102: Add NUTS information
Step103: Let us check the stations for which NUTS codes could not be determined.
Step104: We see that the municipalities of only plants for which we could not determine the NUTS codes cannot be found in the official translation tables, so there was no possibility to assign them their NUTS classification codes.
Transform electrical_capacity from kW to MW
Step105: Select columns to keep
Step106: Visualize
Step107: Save
Step108: Check and validation of the renewable power plants list as well as the creation of CSV/XLSX/SQLite files can be found in Part 2 of this script. It also generates a daily time series of cumulated installed capacities by energy source.
United Kingdom UK
The data for the UK are provided by the following sources
Step109: Clean the data
The downloaded dataset has to be cleaned
Step110: Translate column names
Step111: Add data source
Step112: Translate values and harmonise energy source levels 1-3 and technology
Step113: Georeferencing
The facilities' location details comprise the information on the address, county, region, country (England, Scotland, Wales, Northern Ireland), post code, and Easting (X) and Northing (Y) coordinates of each facility in the OSGB georeferencing system. To convert the easting and northing coordinates to standard WGS84 latitude and longitude, we use the package bng_latlon.
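A hedged sketch of the conversion for a single facility (OSGB36toWGS84 is the helper exposed by bng_latlon, assumed here, and the sample coordinates are arbitrary):
from bng_to_latlon import OSGB36toWGS84

lat, lon = OSGB36toWGS84(529090, 179645)  # arbitrary easting/northing pair in London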
Step114: Cases with unknown Easting and Northing coordinates
If the Easting and Northing coordinates of a facility are not provided, its latitude and longitude cannot be determined. For such sources, we look up the WGS84 coordinates in the geodataset provided by geonames.org, where the UK postcodes are paired with their latitudes and longitudes.
Step115: Cases for approximation
In the cases where the full post code was not present in geonames.org, use its prefix to find the latitude / longitude pairs of locations covered by that prefix. Then, approximate those facilities' locations by the centroids of their prefix areas.
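A hedged sketch of that approximation, with a made-up stand-in for the geonames table:
import pandas as pd

geonames = pd.DataFrame({'postcode': ['AB10 1AB', 'AB10 6RN', 'AB11 5QN'],
                         'lat': [57.15, 57.14, 57.14], 'lon': [-2.11, -2.12, -2.09]})
geonames['prefix'] = geonames['postcode'].str.split().str[0]   # outward code, e.g. 'AB10'
prefix_centroids = geonames.groupby('prefix')[['lat', 'lon']].mean()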
Step116: Add NUTS information
Step117: Let us see the facilities for which the NUTS codes could not be determined.
Step118: There are two such rows only. The langitude and longitude coordinates, as well as municipality codes, are missing from the data set, so NUTS codes could not have been determined.
Visualize the data
Step119: We see that some facilities appear to be located in the sea. Let us plot the original OSGB coordinates to see if translation to the standard longitude and latitude coordinates failed for some locations.
Step120: As we can see, the maps are basically the same, which confirms that translation to the longitude and latitude coordinates is done correctly and that they reflect the positions specified by the original X and Y OSGB coordinates.
Keep only the columns of interest
Step121: Save
Step122: Sweden
The data for Sweden are provided by the following sources
Step123: Load the data
Step124: Clean the data
Drop empty rows and columns.
Make sure that the column Uppfört is of the date type.
Keep only operational wind farms (Status is Beviljat (permission granted) or Uppfört (the farm exists)).
Remove the farms whose capacity is not known.
Standardize string columns.
Step125: Translate column names
Step126: Correct the dates
Some wind farms are declared to be commissioned in the year 1900. We set those dates to np.nan.
Step127: Add source
Step128: Translate values and harmonize energy source levels
Step129: Georeferencing
The coordinates in the columns sweref99tm_north and sweref99tm_east are specified in the SWEREF 99 TM coordinate system, used in Sweden. To convert those coordinates to the usual WGS84 latitudes and longitudes, we use the function sweref99tm_latlon_transform from the module util.helper, provided by Jon Olauson.
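For orientation, the same transformation can be sketched with pyproj (the notebook itself relies on util.helper.sweref99tm_latlon_transform); SWEREF 99 TM is EPSG:3006:
from pyproj import Transformer

transformer = Transformer.from_crs('EPSG:3006', 'EPSG:4326', always_xy=True)
east, north = 674032.0, 6580822.0   # an arbitrary SWEREF 99 TM point near Stockholm
lon, lat = transformer.transform(east, north)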
Step130: Assigning NUTS codes
Step131: Select the columns to keep
Step132: Visualize
Step133: Save
Step134: Czech Republic
The data for Czech Republic are provided by the following source
Step135: Let's inspect the dataframe's columns
Step136: It contains 30 columns
Step137: As of April 2020, as we can see in the output above, there are only 4 sites which use more than one type of renewable energy, and there are 193 sites which do not use renewable energy at all.
Clean the data
Step138: Reformat the data
There are sites which use different types of renewable source to produce electric energy. Those are the sites where at least two of the following columns are not equal to zero
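The reshaping itself is essentially a pandas melt -- a hedged sketch with made-up column names standing in for the Czech capacity columns:
import pandas as pd

wide = pd.DataFrame({'site': ['A', 'B'],
                     'solar_kW': [120.0, 0.0],
                     'wind_kW': [0.0, 800.0]})
long = wide.melt(id_vars='site', var_name='energy_type', value_name='electrical_capacity')
long = long[long['electrical_capacity'] > 0]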
Step139: Let us see what is this restructured dataframe like.
Step140: The number of columns has been reduced as we have transformed the data to the long format. The rows representning conventional power plants have been excluded. Since only few sites use multiple types of energy, the total number of rows has not increased.
Translate column names
Step141: Translate values and harmonize energy levels
Step142: Add data source
Step143: Georeferencing
Step144: Assign NUTS codes
Step145: Select the columns to keep
Step146: Drop duplicates
Step147: Visualuze
Step148: Save
Step149: Zip the raw data | Python Code:
version = '2020-08-25'
Explanation: <div style="width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;">
<b>Renewable power plants: Download and process notebook</b>
<ul>
<li><a href="main.ipynb">Main notebook</a></li>
<li>Download and process notebook</li>
<li><a href="validation_and_output.ipynb">Validation and output notebook</a></li>
</ul>
<br>This notebook is part of the <a href="http://data.open-power-system-data.org/renewable_power_plants"> Renewable power plants Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</div>
This script downloads and extracts the original data of renewable power plant lists from the data sources, processes and merges them. It subsequently adds the geolocation for each power plant. Finally it saves the DataFrames as pickle files. Make sure you run the download and process Notebook before the validation and output Notebook.
End of explanation
import logging
import os
import posixpath
import urllib.parse
import urllib.request
import re
import zipfile
import pickle
import urllib
import shutil
import datetime
import numpy as np
import pandas as pd
import utm # for transforming geoinformation in the utm format
import requests
import fake_useragent
from string import Template
from IPython.display import display
import xlrd
import bs4
import bng_to_latlon
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
# for visualizing locations on maps
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io import shapereader
import geopandas
import shapely
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%d %b %Y %H:%M:%S'
)
logger = logging.getLogger()
# Create input, intermediate and output folders if they don't exist.
# If the paths are relative, the correspoding folders will be created
# inside the current working directory.
input_directory_path = os.path.join('input', 'original_data')
intermediate_directory_path = 'intermediate'
output_directory_path = os.path.join('output', 'renewable_power_plants')
os.makedirs(input_directory_path, exist_ok=True)
os.makedirs(intermediate_directory_path, exist_ok=True)
os.makedirs(output_directory_path, exist_ok=True)
# Create the folder to which the Eurostat files with data at the level of the whole EU/Europe
# are going to be downloaded
eurostat_eu_directory_path = os.path.join('input', 'eurostat_eu')
os.makedirs(eurostat_eu_directory_path, exist_ok=True)
# Define the path of the file with the list of sources.
source_list_filepath = os.path.join('input', 'sources.csv')
# Import the utility functions and classes from the util package
import util.helper
from util.visualizer import visualize_points
Explanation: Script setup
End of explanation
download_from = 'original_sources'
#download_from = 'opsd_server'
Explanation: Settings
Choose download option
The original data can either be downloaded from the original data sources as specified below or from the OPSD server. The default option is to download from the original sources, as the aim of the project is to stay as close to the original sources as possible. However, if problems with downloads occur, e.g. due to changing URLs, you can still run the script with the original data from the opsd_server.
End of explanation
source_df = pd.read_csv(source_list_filepath)
uk_main_page = 'https://www.gov.uk/government/publications/renewable-energy-planning-database-monthly-extract'
current_link = util.helper.get_beis_link(uk_main_page)
current_filename = current_link.split('/')[-1]
source_df.loc[(source_df['country'] == 'UK') & (source_df['source'] == 'BEIS'), 'url'] = current_link
source_df.loc[(source_df['country'] == 'UK') & (source_df['source'] == 'BEIS'), 'filename'] = current_filename
source_df.to_csv(source_list_filepath, index=False, header=True)
source_df.fillna('')
Explanation: Update the download links
The download link for the UK is updated at the end of each quarter by the source provider, BEIS. We keep up with those changes by extracting the download link automatically from the web page it is on. That way, the link does not have to be updated manually.
Note: you must be connected to the Internet if you want to execute this step.
End of explanation
import util.downloader
from util.downloader import Downloader
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
Explanation: Note that, as of August 25, 2020, the following sources are available only from the OPSD server and the data will be downloaded from it even if download_from is set to 'original_sources':
- Energinet (DK)
- Eurostat files which contain correspondence tables between postal codes and NUTS.
The original links which should be downloaded from OPSD are marked as inactive in the column active in the above dataframe.
Set up the downloader for data sources
The Downloader class in the util package is responsible for downloading the original files to appropriate folders. In order to access its functionality, we have to instantiate it first.
End of explanation
#import importlib
#importlib.reload(util.nuts_converter)
#importlib.reload(util.downloader)
#from util.downloader import Downloader
#downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
from util.nuts_converter import NUTSConverter
nuts_converter = NUTSConverter(downloader, eurostat_eu_directory_path)
Explanation: Set up the NUTS converter
The NUTSConverter class in the util package uses the information on each facility's postcode, municipality name, municipality code, longitude, and latitude to assign it the correct NUTS 2016 level 1, 2, and 3 codes.
Here, we instantiate the converter so that we can use it later.
End of explanation
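# The NUTSConverter is project-internal, so the snippet below is only a small, self-contained
# sketch of the basic idea behind postcode-based NUTS assignment: join the facility table with
# a postcode/NUTS correspondence table. The real converter additionally falls back to
# municipality codes, municipality names and coordinates. The postcodes and NUTS codes used
# here are illustrative values only.
import pandas as pd

example_facilities = pd.DataFrame({'postcode': ['10115', '80331'],
                                   'electrical_capacity': [1.2, 3.4]})
example_postcode2nuts = pd.DataFrame({'postcode': ['10115', '80331'],
                                      'nuts_3_region': ['DE300', 'DE212']})
example_facilities.merge(example_postcode2nuts, on='postcode', how='left')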
# Get column translation list
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
columnnames.head(2)
# Get value translation list
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
valuenames.head(2)
Explanation: Setup translation dictionaries
Column and value names of the original data sources will be translated to English and standardized across different sources. Standardized column names, e.g. "electrical_capacity" are required to merge data in one DataFrame.<br>
The column and the value translation lists are provided in the input folder of the Data Package.
End of explanation
# Define the lists of source names
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
tsos = ['50Hertz', 'Amprion', 'TenneT', 'TransnetBW']
datasets = ['50Hertz', 'Amprion', 'TenneT', 'TransnetBW','bnetza','bnetza_pv','bnetza_pv_historic']
# Download the files and get the local file paths indexed by source names
filepaths = downloader.download_data_for_country('DE')
# Remove the Eurostat NUTS file as it's a geoinformation source
DE_postcode2nuts_filepath = filepaths.pop('Eurostat')
# Open all data sets before processing.
filenames = {}
for source in filepaths:
filepath = filepaths[source]
print(source, filepath)
if os.path.splitext(filepath)[1] != '.xlsx' and zipfile.is_zipfile(filepath):
filenames[source] = zipfile.ZipFile(filepath)
else:
filenames[source] = filepath
# Read TSO data from the zip files
dfs = {}
basenames_by_tso = {
'50Hertz': '50Hertz Transmission GmbH EEG-Zahlungen Stammdaten 2019',
'Amprion': 'Amprion GmbH EEG-Zahlungen Anlagenstammdaten 2019',
'TenneT': 'TenneT TSO GmbH Anlagenstammdaten 2019',
'TransnetBW': 'TransnetBW GmbH Anlagenstammdaten 2019',
}
for tso in tsos:
filename = basenames_by_tso[tso]+'.csv'
print('Reading', filename)
#print(filenames[tso].namelist())
dfs[tso] = pd.read_csv(
filenames[tso].open(filename),
sep=';',
thousands='.',
decimal=',',
        # Headers must be in the same order for all TSOs, so they are defined here explicitly.
        # Remove the following lines if, in a future version, the headers should instead be
        # read from the files and checked for identical ordering.
names=['EEG-Anlagenschlüssel', 'MASTR_Nr_EEG','Netzbetreiber Betriebsnummer','Netzbetreiber Name',
'Strasse_flurstueck','PLZ','Ort / Gemarkung','Gemeindeschlüssel','Bundesland',
'Installierte Leistung','Energieträger','Spannungsebene','Leistungsmessung','Regelbarkeit',
'Inbetriebnahme','Außerbetriebnahme','Netzzugang','Netzabgang'],
header=None,
skiprows=1,
parse_dates=[14, 15, 16, 17], #[11, 12, 13, 14]
#infer_datetime_format=True,
date_parser = lambda x: pd.to_datetime(x, errors='coerce', format='%d.%m.%Y'),
encoding='iso-8859-1',
dayfirst=True,
low_memory=False
)
print('Done reading ' + filename)
for filename in filenames.values():
if(isinstance(filename, zipfile.ZipFile)):
#print(filename)
filename.close()
# Define the date parser: parse day-first German date strings, return NaT for missing values
def date_parser(x):
    if type(x) == str:
        return datetime.datetime.strptime(x, '%d.%m.%Y')
    elif type(x) == float and pd.isnull(x):
        return pd.NaT

def inspect(x):
    # Return True if the value cannot be parsed as a day-first date
    try:
        datetime.datetime.strptime(x, '%d.%m.%Y')
        return False
    except (ValueError, TypeError):
        return True
# Read BNetzA register
print('Reading bnetza: '+filenames['bnetza'])
dfs['bnetza'] = pd.read_excel(filenames['bnetza'],
sheet_name='Gesamtübersicht',
header=0,
converters={'4.9 Postleit-zahl': str, 'Gemeinde-Schlüssel': str}
)
skiprows = {'bnetza_pv_historic': 10, 'bnetza_pv': 9}
for dataset in ['bnetza_pv', 'bnetza_pv_historic']:
print(dataset)
print('Reading ' + dataset + ': ' + filenames[dataset])
xls_handle = pd.ExcelFile(filenames[dataset])
print('Concatenating all '+dataset+' sheets into one dataframe')
dfs[dataset] = pd.concat(
(xls_handle.parse(
sheet,
skiprows=skiprows[dataset],
converters={'Anlage \nPLZ': str}
) for sheet in xls_handle.sheet_names),
sort=True
)
# Make sure that the column `Inbetriebnahme-datum *)` (commissioning date) in the bnetza_pv set is datetime.
mask = dfs['bnetza_pv']['Inbetriebnahme-datum *)'].apply(lambda x: type(x) == int)
dfs['bnetza_pv']['Inbetriebnahme-datum *)'] = pd.to_datetime(dfs['bnetza_pv']['Inbetriebnahme-datum *)'],
errors='coerce',
dayfirst=True,
infer_datetime_format=True)
dfs['bnetza_pv']['Inbetriebnahme-datum *)'] = dfs['bnetza_pv']['Inbetriebnahme-datum *)'].apply(
lambda x: x.to_datetime64()
)
dfs['bnetza_pv_historic'] = dfs['bnetza_pv_historic'].drop(['Unnamed: 7'], axis=1)
pickle.dump( dfs, open( "intermediate/temp_dfs_DE_after_reading.pickle", "wb" ) )
dfs = pickle.load( open( "intermediate/temp_dfs_DE_after_reading.pickle", "rb" ) )
Explanation: Download and process per country
For one country after the other, the original data is downloaded, read, processed, translated, eventually georeferenced and saved. If respective files are already in the local folder, these will be utilized.
The provided data is processed using pandas DataFrames.<br>
Germany DE
Download and read
The data which will be processed below is provided by the following data sources:
Netztransparenz.de - Official grid transparency platform from the German Transmission System Operators (TSOs): 50Hertz, Amprion, TenneT and TransnetBW.
Bundesnetzagentur (BNetzA) - German Federal Network Agency for Electricity, Gas, Telecommunications, Posts and Railway (In separate files for data for roof-mounted PV power plants and for all other renewable energy power plants.)
The data URL for BNetzA gets updated every few months. To be sure, always check if the links (url_bnetza; url_bnetza_pv) are up to date.
End of explanation
# Choose the translation terms for Germany, create dictionary and show dictionary
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_DE = columnnames[columnnames['country'] == 'DE'].index
column_dict_DE = columnnames.loc[idx_DE].set_index('original_name')['opsd_name'].to_dict()
column_dict_DE
# Start the column translation process for each original data source
print('Translation...')
for dataset in dfs:
# Remove newlines and any other duplicate whitespaces in column names:
dfs[dataset] = dfs[dataset].rename(columns={col: re.sub(r"\s+", ' ', col) for col in dfs[dataset].columns})
# Do column name translations
print(dataset)
#print(list(dfs[dataset].columns))
dfs[dataset].rename(columns=column_dict_DE, inplace=True)
#print(list(dfs[dataset].columns).index('decommissioning_date'))
#print('--------------------------------------------')
print('done.')
Explanation: Translate column names
To standardise the DataFrame, the original column names from the German TSOs and the BNetzA will be translated and new English column names will be assigned. Unique, standardized column names are required to merge the DataFrames.<br>
The column_translation_list is provided here as csv in the input folder. It is loaded in 2.3 Setup of translation dictionaries.
End of explanation
# Add data source names to the DataFrames
for tso in tsos:
dfs[tso]['data_source'] = tso
dfs[tso]['tso'] = tso
dfs['bnetza']['data_source'] = 'BNetzA'
dfs['bnetza_pv']['data_source'] = 'BNetzA_PV'
dfs['bnetza_pv_historic']['data_source'] = 'BNetzA_PV_historic'
# Add for the BNetzA PV data the energy source level 2
dfs['bnetza_pv']['energy_source_level_2'] = 'Photovoltaics'
dfs['bnetza_pv_historic']['energy_source_level_2'] = 'Photovoltaics'
# Select those columns of the original data which are utilised further
dfs['bnetza'] = dfs['bnetza'].loc[:, ('commissioning_date', 'decommissioning_date',
'notification_reason', 'energy_source_level_2',
'electrical_capacity_kW', 'thermal_capacity_kW',
'voltage_level', 'dso', 'eeg_id', 'bnetza_id',
'federal_state', 'postcode', 'municipality_code',
'municipality', 'address', 'address_number',
'utm_zone', 'utm_east', 'utm_north',
'data_source')]
for dataset in datasets:
    print(dataset + ':')
    display(dfs[dataset].tail(2))
Explanation: Add information and choose columns
The data source names are added to all DataFrames, and for the BNetzA PV data the energy source level 2 is added as well.
End of explanation
# Merge DataFrames of each original source into a common DataFrame DE_renewables
dfs_list = []
for dataset in datasets:
dfs_list.append(dfs[dataset])
DE_renewables = pd.concat(dfs_list, sort=True)
DE_renewables.head(2)
DE_renewables.reset_index(drop=True, inplace=True)
DE_renewables.head(2)
Explanation: Merge DataFrames
The individual DataFrames from the TSOs (Netztransparenz.de) and BNetzA are merged.
End of explanation
# Choose the translation terms for Germany, create dictionary and show dictionary
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
idx_DE = valuenames[valuenames['country'] == 'DE'].index
value_dict_DE = valuenames.loc[idx_DE].set_index('original_name')['opsd_name'].to_dict()
value_dict_DE
print('replacing...')
# Replace all original value names by the OPSD value names.
# Running time: some minutes.
DE_renewables.replace(value_dict_DE, inplace=True)
print('Done!')
DE_renewables['postcode'] = DE_renewables['postcode'].apply(pd.to_numeric, errors='ignore')
Explanation: Translate values and harmonize energy source level 2
Different German terms for energy source level 2, energy source level 3, technology and voltage levels are translated and harmonized across the individual data sources. The value_translation_list is provided here as csv in the input folder. It is loaded in 2.3 Setup of translation dictionaries.
End of explanation
# Create dictionary in order to assign energy_source to its subtype
energy_source_dict_DE = valuenames.loc[idx_DE].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
# Column energy_source partly contains energy source level 3 and technology information,
# thus this column is copied to new column technology...
DE_renewables['technology'] = DE_renewables['energy_source_level_2']
# ...and the energy source level 2 values are replaced by the higher level classification
DE_renewables['energy_source_level_2'].replace(energy_source_dict_DE, inplace=True)
# Choose energy source level 2 entries where energy_source is "Bioenergy" in order to
# separate Bioenergy subtypes to "energy_source_level_3" and subtypes for the rest to "technology"
idx_DE_Bioenergy = DE_renewables[DE_renewables['energy_source_level_2'] == 'Bioenergy'].index
# Assign technology to energy source level 3 for all entries where energy source level 2 is
# Bioenergy and delete those entries from technology
DE_renewables[['energy_source_level_3']] = DE_renewables.iloc[idx_DE_Bioenergy][['technology']]
DE_renewables.loc[idx_DE_Bioenergy, 'technology'] = np.nan
# Assign energy source level 1 to the dataframe
DE_renewables['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of the energy types present in the frame
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
DE_renewables[energy_columns].drop_duplicates().sort_values(by='energy_source_level_2')
Explanation: Separate and assign energy source level 1 - 3 and technology
End of explanation
drop_mask = DE_renewables['energy_source_level_2'].isin(['Other fossil fuels', 'Storage'])
DE_renewables.drop(DE_renewables.index[drop_mask], axis=0, inplace=True)
Explanation: According to the OPSD energy hierarchy, the power plants whose energy_source_level_2 is either Storage or Other fossil fuels do not belong to the class of renewable-energy facilities. Therefore, we can remove them.
End of explanation
# Electrical capacity per energy source level 2 (in MW)
DE_renewables.groupby(['energy_source_level_2'])['electrical_capacity_kW'].sum() / 1000
Explanation: Summary of DataFrame
End of explanation
# kW to MW
DE_renewables[['electrical_capacity_kW', 'thermal_capacity_kW']] /= 1000
# adapt column name
DE_renewables.rename(columns={'electrical_capacity_kW': 'electrical_capacity',
'thermal_capacity_kW': 'thermal_capacity'}, inplace=True)
Explanation: Transform electrical capacity from kW to MW
End of explanation
# Read generated postcode/location file
postcode = pd.read_csv(os.path.join('input', 'de_tso_postcode_full.csv'))
# Drop possible duplicates in postcodes
postcode.drop_duplicates('postcode', keep='last', inplace=True)
# Show first entries
postcode.head(2)
Explanation: Georeferencing
Get coordinates by postcode
(for data with no existing geocoordinates)
The available post code in the original data provides a first approximation for the geocoordinates of the RE power plants.<br>
The BNetzA data provides the full zip code whereas due to data privacy the TSOs only report the first three digits of the power plant's post code (e.g. 024xx) and no address. Subsequently a centroid of the post code region polygon is used to find the coordinates.
With data from
* http://www.suche-postleitzahl.org/downloads?download=plz-gebiete.shp.zip
* http://www.suche-postleitzahl.org/downloads?download_file=plz-3stellig.shp.zip
* http://www.suche-postleitzahl.org/downloads
a CSV-file for all existing German post codes with matching geocoordinates has been compiled. The latitude and longitude coordinates were generated by running a PostgreSQL + PostGIS database. Additionally the respective TSO has been added to each post code. (A Link to the SQL script will follow here later)
(License: http://www.suche-postleitzahl.org/downloads, Open Database Licence for free use. Source of data: © OpenStreetMap contributors)
End of explanation
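# The PostGIS step that produced the postcode/centroid CSV is not part of this notebook.
# As a rough, hypothetical illustration, similar centroids could be computed with geopandas
# from the postcode shapefile; the file name 'plz-gebiete.shp' and the column name 'plz' are
# assumptions based on the suche-postleitzahl.org download, not verified here.
import geopandas as gpd
import pandas as pd

plz_shapes = gpd.read_file('plz-gebiete.shp')
# Compute centroids in a projected CRS (ETRS89 / UTM 32N) to avoid distortion,
# then convert them back to WGS84 longitude/latitude.
plz_centroids = plz_shapes.geometry.to_crs(epsg=25832).centroid.to_crs(epsg=4326)
pd.DataFrame({'postcode': plz_shapes['plz'],
              'lon': plz_centroids.x,
              'lat': plz_centroids.y}).to_csv('de_postcode_centroids.csv', index=False)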
# Take postcode and longitude/latitude information
postcode = postcode[['postcode', 'lon', 'lat']]
# Cast DE_renewables['postcode'] to int64 in order to do the natural join of the dataframes
DE_renewables['postcode'] = pd.to_numeric(DE_renewables['postcode'], errors='coerce')
# Join two dataframes
DE_renewables = DE_renewables.merge(postcode, on=['postcode'], how='left')
Explanation: Merge geometry information by using the postcode
End of explanation
DE_renewables.groupby(['utm_zone'])['utm_zone'].count()
Explanation: Transform geoinformation
(for data with already existing geoinformation)
In this section the existing geoinformation (in UTM format) will be transformed into latitude and longitude coordinates as a uniform standard for geoinformation.
The BNetzA data set offers UTM geoinformation with the columns utm_zone (UTM-Zonenwert), utm_east and utm_north. Most utm_east values include the utm_zone value 32 at the beginning of the number. In order to properly standardize and transform this geoinformation into latitude and longitude, this prefix has to be removed. The BNetzA uses utm_zone 32 for all UTM entries.
|utm_zone| utm_east| utm_north| comment|
|---|---|---| ----|
|32| 413151.72| 6027467.73| proper coordinates|
|32| 32912159.6008| 5692423.9664| caused error by 32|
How many different utm_zone values are in the data set?
End of explanation
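# Quick sanity check with the two sample rows from the table above: the first easting is
# already a valid UTM value, the second one only becomes valid after the leading zone
# value 32 is stripped off (32912159.6008 -> 912159.6008).
import utm

print(utm.to_latlon(413151.72, 6027467.73, 32, 'U'))
print(utm.to_latlon(912159.6008, 5692423.9664, 32, 'U'))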
# Find entries with 32 value at the beginning
idx_32 = (DE_renewables['utm_east'].astype(str).str[:2] == '32')
idx_notnull = DE_renewables['utm_east'].notnull()
# Remove 32 from utm_east entries
DE_renewables.loc[idx_32, 'utm_east'] = DE_renewables.loc[idx_32,
'utm_east'].astype(str).str[2:].astype(float)
def convert_to_latlon(utm_east, utm_north, utm_zone):
try:
return utm.to_latlon(utm_east, utm_north, utm_zone, 'U')
except:
return ''
DE_renewables['latlon'] = DE_renewables.loc[idx_notnull, ['utm_east', 'utm_north', 'utm_zone']].apply(
lambda x: convert_to_latlon(x[0], x[1], x[2]), axis=1).astype(str)
Explanation: Remove the utm_zone "32" from the utm_east value
End of explanation
lat = []
lon = []
for row in DE_renewables['latlon']:
try:
# Split tuple format into the column lat and lon
row = row.lstrip('(').rstrip(')')
parts = row.split(',')
if(len(parts)<2):
raise Exception('This is not a proper tuple. So go to exception block.')
lat.append(parts[0])
lon.append(parts[1])
except:
# set NaN
lat.append(np.NaN)
lon.append(np.NaN)
DE_renewables['latitude'] = pd.to_numeric(lat)
DE_renewables['longitude'] = pd.to_numeric(lon)
# Add new values to DataFrame lon and lat
DE_renewables['lat'] = DE_renewables[['lat', 'latitude']].apply(
lambda x: x[1] if pd.isnull(x[0]) else x[0],
axis=1)
DE_renewables['lon'] = DE_renewables[['lon', 'longitude']].apply(
lambda x: x[1] if pd.isnull(x[0]) else x[0],
axis=1)
Explanation: Conversion UTM to latitude and longitude
End of explanation
#DE_renewables[DE_renewables['data_source'] == '50Hertz'].to_excel('test.xlsx')
print('Missing coordinates ', DE_renewables.lat.isnull().sum())
display(
DE_renewables[DE_renewables.lat.isnull()].groupby(
['energy_source_level_2','data_source']
)['data_source'].count()
)
print('Share of missing coordinates (note that NaN can mean it\'s all fine):')
DE_renewables[DE_renewables.lat.isnull()].groupby(
['energy_source_level_2','data_source']
)['data_source'].count() / DE_renewables.groupby(
['energy_source_level_2','data_source']
)['data_source'].count()
Explanation: Check: missing coordinates by data source and type
End of explanation
# Drop the temporary latlon, longitude and latitude helper columns
DE_renewables.drop(['latlon', 'longitude', 'latitude'], axis=1, inplace=True)
Explanation: Remove temporary columns
End of explanation
pickle.dump(DE_renewables, open( "intermediate/temp_dfs_DE_before_cleaning.pickle", "wb" ) )
DE_renewables = pickle.load( open( "intermediate/temp_dfs_DE_before_cleaning.pickle", "rb" ) )
Explanation: Save temporary Pickle (to have a point to quickly return to if things break after this point):
End of explanation
# Remove out-of-range dates
# Keep only values between 1900 and 2100 to rule out outliers / wrong values.
# Also, Excel doesn't support dates before 1900.
mask = ((DE_renewables['commissioning_date']>pd.Timestamp('1900')) &
(DE_renewables['commissioning_date']<pd.Timestamp('2100')))
DE_renewables = DE_renewables[mask]
DE_renewables['municipality_code'] = DE_renewables['municipality_code'].astype(str)
# Remove spaces from municipality code
DE_renewables['municipality_code'] = DE_renewables['municipality_code'].str.replace(' ', '', regex=False)
DE_renewables['municipality_code'] = pd.to_numeric(DE_renewables['municipality_code'], errors='coerce', downcast='integer')
# Merge address and address_number
to_string = lambda x: str(x) if not pd.isnull(x) else ''
DE_renewables['address'] = DE_renewables['address'].map(to_string) + ' ' + DE_renewables['address_number'].map(to_string)
# Make sure that the column has no whitespaces at the beginning and the end
DE_renewables['address'] = DE_renewables['address'].str.strip()
# Remove the column with address numbers as it is not needed anymore
del DE_renewables['address_number']
Explanation: Clean data
End of explanation
# Set up a temporary postcode column as a string column for joining with the appropriate NUTS correspondence table
DE_renewables['postcode_str'] = DE_renewables['postcode'].astype(str).str[:-2]
DE_renewables = nuts_converter.add_nuts_information(DE_renewables, 'DE', DE_postcode2nuts_filepath,
postcode_column='postcode_str',
how=['postcode', 'municipality_code', 'municipality', 'latlon'])
# Drop the temporary column
DE_renewables.drop('postcode_str', axis='columns', inplace=True)
# Report the number of facilities whose NUTS codes were successfully determined
determined = DE_renewables['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', DE_renewables.shape[0], 'facilities in DE.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = DE_renewables['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', DE_renewables.shape[0], 'facilities in DE.')
Explanation: Assign NUTS codes
End of explanation
visualize_points(DE_renewables['lat'],
DE_renewables['lon'],
'Germany',
categories=DE_renewables['energy_source_level_2']
)
Explanation: Visualize
End of explanation
DE_renewables.to_pickle('intermediate/DE_renewables.pickle')
del DE_renewables
Explanation: Save
The merged, translated and cleaned DataFrame is saved temporarily as a pickle file, which allows a Python object to be stored and reloaded quickly.
End of explanation
# Download the data for Denmark
filepaths = downloader.download_data_for_country('DK')
print(filepaths)
Explanation: Denmark DK
Download and read
The data which will be processed below is provided by the following data sources:
Energistyrelsen (ens) / Danish Energy Agency - The wind turbines register is released by the Danish Energy Agency.
Energinet.dk - The data on solar power plants are released by Energinet.dk, the Danish transmission system operator.
geonames.org - The postcode data from Denmark is provided by Geonames and licensed under a Creative Commons Attribution 3.0 license.
Eurostat - The data for converting information on municipalities, postcodes and geographic coordinates to NUTS 2016 classification codes.
End of explanation
def read_dk_wind_turbines(filepath, sheet_name):
    """Read the data on Danish wind turbines from the given sheet of the given
    Excel file and return it as a pandas DataFrame."""
book = xlrd.open_workbook(filepath)
sheet = book.sheet_by_name(sheet_name)
# Since the column names are in two rows, not one,
# collect them in two parts. The first part is
# fixed and contains column names.
header = []
for i in range(0, 16):
# Make sure that strings 1) do not contain the newline sign
# and 2) have no trailing blank spaces.
column_name = sheet.cell_value(17, i).replace("\n", "").strip()
header = header + [column_name]
# The second part is variable. It consists of two subparts:
# 1) previous years (type float)
# 2) the past months of the current year (type date)
# Reading previous years as column names
i = 16
cell = sheet.cell(16, i)
while cell.ctype == xlrd.XL_CELL_NUMBER:
column_name = str(int(cell.value))
header = header + [column_name]
i = i + 1
cell = sheet.cell(16, i)
# Reading the months of the current year as column names
while cell.ctype == xlrd.XL_CELL_DATE:
year, month, _, _, _, _ = xlrd.xldate_as_tuple(cell.value, book.datemode)
column_name = str("{}-{}".format(year, month))
header = header + [column_name]
i = i + 1
cell = sheet.cell(16, i)
# Add the final column for the total of the current year
header += ['{}-total'.format(header[-1].split('-')[0])]
# Skip the first 17 rows in the sheet. The rest contains the data.
df = pd.read_excel(filepath,
sheet_name=sheet_name,
skiprows=17,
skipfooter=3
)
#
#df.drop(df.columns[len(df.columns)-1], axis=1, inplace=True)
# Set the column names.
df.columns = header
return df
# Get wind turbines data
wind_turbines_sheet_name = 'IkkeAfmeldte-Existing turbines'
DK_wind_filepath = filepaths['Energistyrelsen']
DK_wind_df = read_dk_wind_turbines(DK_wind_filepath,
wind_turbines_sheet_name
)
# Get photovoltaic data
DK_solar_filepath = filepaths['Energinet']
DK_solar_df = pd.read_excel(DK_solar_filepath,
sheet_name='Data',
skiprows=[0],
converters={'Postnr': str}
)
# Remove duplicates
DK_wind_df.drop_duplicates(inplace=True)
DK_solar_df.drop_duplicates(inplace=True)
Explanation: The function for reading the data on the wind turbines.
End of explanation
# Choose the translation terms for Denmark, create dictionary and show dictionary
idx_DK = columnnames[columnnames['country'] == 'DK'].index
column_dict_DK = columnnames.loc[idx_DK].set_index('original_name')['opsd_name'].to_dict()
# Windows has problems reading the csv entry for east and north (DK).
# The reason might be the difference when opening the csv between linux and
# windows.
column_dict_DK_temp = {}
for k, v in column_dict_DK.items():
column_dict_DK_temp[k] = v
if v == 'utm_east' or v == 'utm_north':
# merge 2 lines to 1
new_key = ''.join(k.splitlines())
column_dict_DK_temp[new_key] = v
column_dict_DK = column_dict_DK_temp
column_dict_DK
# Replace column names based on column_dict_DK
DK_wind_df.rename(columns=column_dict_DK, inplace=True)
DK_solar_df.rename(columns=column_dict_DK, inplace=True)
Explanation: Translate column names
End of explanation
# Add names of the data sources to the DataFrames
DK_wind_df['data_source'] = 'Energistyrelsen'
DK_solar_df['data_source'] = 'Energinet.dk'
# Add energy source level 2 and technology for each of the two DataFrames
DK_wind_df['energy_source_level_2'] = 'Wind'
DK_solar_df['energy_source_level_2'] = 'Solar'
DK_solar_df['technology'] = 'Photovoltaics'
Explanation: Add data source and missing information
End of explanation
mask=DK_solar_df['commissioning_date'] == '1970-01-01'
DK_solar_df.loc[mask, 'commissioning_date'] = np.nan
Explanation: Correct the dates
Some dates in the Energinet dataset are equal to 1970-01-01, which should be NaN instead
End of explanation
# Choose the translation terms for Denmark, create dictionary and show dictionary
idx_DK = valuenames[valuenames['country'] == 'DK'].index
value_dict_DK = valuenames.loc[idx_DK].set_index('original_name')['opsd_name'].to_dict()
# Replace all original value names by the OPSD value names
DK_wind_df.replace(value_dict_DK, inplace=True)
DK_solar_df.replace(value_dict_DK, inplace=True)
Explanation: Translate values and harmonize energy source level 2
End of explanation
# Index for all values with utm information
idx_notnull = DK_wind_df['utm_east'].notnull()
# Convert from UTM values to latitude and longitude coordinates
DK_wind_df['lonlat'] = DK_wind_df.loc[idx_notnull, ['utm_east', 'utm_north']
].apply(lambda x: utm.to_latlon(x[0],
x[1],
32,
'U'), axis=1).astype(str)
# Split latitude and longitude in two columns
lat = []
lon = []
for row in DK_wind_df['lonlat']:
try:
# Split tuple format
# into the column lat and lon
row = row.lstrip('(').rstrip(')')
lat.append(row.split(',')[0])
lon.append(row.split(',')[1])
except:
# set NAN
lat.append(np.NaN)
lon.append(np.NaN)
DK_wind_df['lat'] = pd.to_numeric(lat)
DK_wind_df['lon'] = pd.to_numeric(lon)
# Drop the lonlat column that contains both latitude and longitude
DK_wind_df.drop('lonlat', axis=1, inplace=True)
Explanation: Georeferencing
UTM32 to latitude and longitude (Data from Energistyrelsen)
The Energistyrelsen data set offers UTM geoinformation with the columns utm_east and utm_north belonging to UTM zone 32. In this section the existing geoinformation (in UTM format) is transformed into latitude and longitude coordinates as a uniform standard for geoinformation.
End of explanation
# Get geo-information
zip_DK_geo = zipfile.ZipFile(filepaths['Geonames'])
# Read generated postcode/location file
DK_geo = pd.read_csv(zip_DK_geo.open('DK.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
DK_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
DK_geo.drop_duplicates('postcode', keep='last', inplace=True)
DK_geo['postcode'] = DK_geo['postcode'].astype(str)
# Add longitude/latitude infomation assigned by postcode (for Energinet.dk data)
DK_solar_df = DK_solar_df.merge(DK_geo[['postcode', 'lon', 'lat']],
on=['postcode'],
how='left')
# Show number of units with missing coordinates separated by wind and solar
print('Missing Coordinates DK_wind', DK_wind_df.lat.isnull().sum(), 'out of', len(DK_wind_df.index))
print('Missing Coordinates DK_solar', DK_solar_df.lat.isnull().sum(), 'out of', len(DK_solar_df.index))
zip_DK_geo.close()
Explanation: Postcode to lat/lon (WGS84)
(for data from Energinet.dk)
The available post code in the original data provides an approximation for the geocoordinates of the solar power plants.<br>
The postcode will be assigned to latitude and longitude coordinates with the help of the postcode table.
End of explanation
# Merge DataFrames for wind and solar into DK_renewables
dataframes = [DK_wind_df, DK_solar_df]
DK_renewables = pd.concat(dataframes, sort=False)
DK_renewables = DK_renewables.reset_index()
# Assign energy source level 1 to the dataframe
DK_renewables['energy_source_level_1'] = 'Renewable energy'
# Merge the address and address-number columns into one
to_string = lambda x: str(x) if not pd.isnull(x) else ""
DK_renewables['address'] = DK_renewables['address'].map(to_string) + " " + DK_renewables['address_number'].map(to_string)
# Make sure that the column has no whitespaces at the beginning or the end
DK_renewables['address'] = DK_renewables['address'].str.strip()
# Assign NUTS codes
DK_postcode2nuts = filepaths['Eurostat']
DK_renewables = nuts_converter.add_nuts_information(DK_renewables, 'DK', DK_postcode2nuts,
how=['latlon', 'postcode', 'municipality_code', 'municipality_name'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = DK_renewables['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', DK_renewables.shape[0], 'facilities in DK.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = DK_renewables['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', DK_renewables.shape[0], 'facilities in DK.')
Explanation: Merge DataFrames, add NUTS information and choose columns
End of explanation
DK_renewables[DK_renewables['nuts_1_region'].isnull()][['municipality', 'municipality_code', 'lat', 'lon']]
Explanation: Let us check geoinformation on the facilities for which NUTS codes could not be determined.
End of explanation
# Select those columns of the original data which are utilised further
columns_of_interest = ['commissioning_date', 'energy_source_level_1', 'energy_source_level_2',
'technology', 'electrical_capacity_kW', 'dso', 'gsrn_id', 'postcode',
'municipality_code', 'municipality', 'address',
'utm_east', 'utm_north', 'lon', 'lat', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'hub_height', 'rotor_diameter', 'manufacturer', 'model', 'data_source']
# Clean DataFrame from columns other than specified above
DK_renewables = DK_renewables.loc[:, columns_of_interest]
DK_renewables.reset_index(drop=True, inplace=True)
Explanation: As we can see, neither municipality information nor latitude/longitude coordinates are present for those power plants, so it was not possible to assign NUTS codes to them.
Select columns
End of explanation
# Remove duplicates
DK_renewables.drop_duplicates(inplace=True)
DK_renewables.reset_index(drop=True, inplace=True)
Explanation: Remove duplicate rows
End of explanation
# kW to MW
DK_renewables['electrical_capacity_kW'] /= 1000
# adapt column name
DK_renewables.rename(columns={'electrical_capacity_kW': 'electrical_capacity'},
inplace=True)
Explanation: Transform electrical_capacity from kW to MW
End of explanation
visualize_points(DK_renewables['lat'],
DK_renewables['lon'],
'Denmark',
categories=DK_renewables['energy_source_level_2']
)
Explanation: Visualize
End of explanation
DK_renewables.to_pickle('intermediate/DK_renewables.pickle')
del DK_renewables
Explanation: Save
End of explanation
# Download the data
filepaths = downloader.download_data_for_country('FR')
# Show the local paths
filepaths
Explanation: France FR
The data which will be processed below is provided by the following data sources:
Ministry for Ecological and Inclusive Transition - Number of installations and installed capacity of the different renewable sources for every municipality in France. Data until 31/12/2017. As of 2020, this dataset is no longer maintained by the ministry and we refer to it as the old dataset.
ODRÉ - The Open Data Réseaux Énergies (ODRÉ, Open Data Networks for Energy) platform provides stakeholders with data around the themes of Production, Multi-energy Consumption, Storage, Mobility, Territories and Regions, Infrastructure, Markets and Meteorology. As of 2020, we refer to this dataset as the new dataset. It contains the data up to 31/12/2018.
OpenDataSoft - a list of French INSEE codes and corresponding coordinates, published under the Licence Ouverte (Etalab).
End of explanation
# Load the data
FR_re_filepath = filepaths['ODRE']
FR_re_df = pd.read_csv(FR_re_filepath,
sep=';',
parse_dates=['dateRaccordement', 'dateDeraccordement',
'dateMiseEnService', 'dateDebutVersion'],
infer_datetime_format=True)
# Make sure that the column dateDeraccordement is datetime
FR_re_df['dateDeraccordement'] = pd.to_datetime(FR_re_df['dateDeraccordement'], errors='coerce')
Explanation: ODRE data
Load the data
End of explanation
# Choose the translation terms for France, create dictionary and show it
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_FR = columnnames[(columnnames['country'] == 'FR') & (columnnames['data_source'] == 'ODRE')].index
column_dict_FR = columnnames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
column_dict_FR
# Translate column names
FR_re_df.rename(columns=column_dict_FR, inplace=True)
# Keep only the columns specified in the translation dictionary as we'll need only them
columns_to_keep = list(column_dict_FR.values())
FR_re_df = FR_re_df.loc[:, columns_to_keep]
FR_re_df.reset_index(drop=True, inplace=True)
# Show a pair of rows
FR_re_df.head(2)
Explanation: Translate column names
End of explanation
FR_re_df['data_source'] = 'Open Data Réseaux Énergies'
FR_re_df['as_of_year'] = 2018 # Year for which the dataset has been compiled by the data source
Explanation: Add data source
End of explanation
# Choose the translation terms for France, create a dictionary and show it
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
idx_FR = valuenames[(valuenames['country'] == 'FR') & (valuenames['data_source'] == 'ODRE')].index
value_dict_FR = valuenames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
value_dict_FR
# Replace all original value names by the OPSD value names
FR_re_df.replace(value_dict_FR, inplace=True)
Explanation: Translate values
End of explanation
no_name_aliases = ['Agrégation des installations de moins de 36KW', 'Confidentiel', 'confidentiel']
no_name_mask = FR_re_df['site_name'].isin(no_name_aliases)
FR_re_df.loc[no_name_mask, 'site_name'] = np.nan
Explanation: Correct site names
Some facilites do not come with their names. Instead, strings such as Agrégation des installations de moins de 36KW, Confidentiel and confidentiel are used. Here, we correct this by setting all such names to np.nan.
End of explanation
mask = (FR_re_df['commissioning_date'].dt.year <= 1900) &\
((FR_re_df['technology'].isin(['Photovoltaics', 'Onshore']) |\
(FR_re_df['energy_source_level_2'] == 'Solar')))
FR_re_df.loc[mask, 'commissioning_date'] = np.nan
#for x in FR_re_df[FR_re_df['commissioning_date'].dt.year <= 1980]['technology']:
# print(x)
Explanation: Replace suspicious dates with N/A
The commissioning dates of some solar and wind plants are set in the early 20th and late 19th centuries. We replace those dates with N/A since they do not make sense.
End of explanation
# Check the columns
FR_re_df.isnull().all()
Explanation: Check missing values
Now, we will drop all the columns and all the rows which contain only null values.
End of explanation
# Check the rows
print('Is there a row containing only null values?')
FR_re_df.isnull().all(axis=1).any()
Explanation: As we see above, no column contains only null values, so we do not need to drop any.
End of explanation
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
Explanation: No row contains only null values, so there is no need for filtering on that basis.
Standardize the energy types and technologies
Now, we proceed with standardizing the energy types and technologies present in the data according to the OPSD energy hierarchy.
End of explanation
# Define the mask for selecting rows with unusable info on electrical capacity
ec_mask = (FR_re_df['electrical_capacity'] == 0) | (FR_re_df['electrical_capacity'].isna())
# Define the mask for selecting the rows with non-renewable energy_source_level_2
non_renewable_esl2 = ['Non-renewable thermal', 'Non-hydraulic storage', 'Nuclear']
non_renewable_esl2_mask = FR_re_df['energy_source_level_2'].isin(non_renewable_esl2)
# Define the mask to select the rows with non-renewable technology
non_renewable_technologies = ['Steam turbine', 'Combustion cogeneration', 'Combustion engine',
'Combined cycle', 'Pumped storage', 'Piston motor', 'Nuclear fission']
non_renewable_technology_mask = FR_re_df['technology'].isin(non_renewable_technologies)
# Define the mask to select the rows without specified energy type and technology
other_mask = (FR_re_df['energy_source_level_2'] == 'Other') & \
((FR_re_df['technology'] == 'Other') | (pd.isnull(FR_re_df['technology'])))
# Combine the masks
drop_mask = ec_mask | non_renewable_esl2_mask | non_renewable_technology_mask | other_mask
# Show how many rows are going to be dropped
print('Dropping', drop_mask.sum(), 'rows out of', FR_re_df.shape[0])
# Keep all the rows not selected by the drop mask
keep_mask = ~drop_mask
FR_re_df = FR_re_df[keep_mask].reindex()
# Show some rows
print("A sample of the kept data:")
FR_re_df.sample(5)
Explanation: In order to facilitate further processing, we can remove the rows that we know for sure we won't need.
Those are the rows satisfying any of the following conditions:
* electrical_capacity is 0 or NaN,
* energy_source_level_2 corresponds to a non-renewable energy type (Non-renewable thermal, Non-hydraulic storage, Nuclear),
* technology indicates that a non-renewable technology is used at the facility (Steam turbine, Combustion cogeneration, Combustion engine, Combined cycle, Pumped storage, Piston motor, Nuclear fission).
* energy_source_level_2 is Other and technology is Other or NaN.
End of explanation
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
Explanation: Standardize source levels 1-3 and technology
Let us see the energy types and technologies present in the filtered data.
End of explanation
# Make sure that the proper string is used to indicate other or unspecified technology
FR_re_df['technology'].replace('Other', 'Other or unspecified technology', inplace=True)
# Define a function that will deal with other cases
def standardize(row):
level_2 = row['energy_source_level_2']
technology = row['technology']
if level_2 in ['Marine', 'Geothermal', 'Bioenergy']:
technology = np.nan
elif level_2 in ['Solar', 'Hydro', 'Other'] and pd.isna(technology):
technology = 'Other or unspecified technology'
elif level_2 == 'Wind' and (pd.isna(technology) or technology == 'Other or unspecified technology'):
technology = 'Onshore'
if level_2 == 'Hydro' and technology in ['Lake', 'Closed']:
technology = 'Other or unspecified technology'
elif level_2 == 'Solar' and technology == 'Thermodynamic':
technology = 'Other or unspecified technology'
elif level_2 == 'Other' and technology == 'Photovoltaics':
level_2 = 'Solar'
return [level_2, technology]
# Apply the rules coded in function standardize
FR_re_df[['energy_source_level_2', 'technology']] = FR_re_df.apply(standardize, axis=1, result_type='expand')
# Show the existing level 2 types and technologies
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
Explanation: First, let us standardize the values for energy source level 2 and technology.
1. We will use np.nan to indicate that technology should not be specified for the respective kind of sources according to the OPSD hierarchy.
2. 'Other or unspecified technology' will mean that technology should be specified but it was unclear or missing in the original dataset.
That means that we need to apply the following correction rules to the current data:
- All occurences of Other in the column technology should be replaced with Other or unspecified technology.
- If energy_source_level_2 is Marine, Geothermal, or Bioenergy, then technology should be set to np.nan regardless of what is specified in the data set.
- If energy_source_level_2 is Solar or Hydro, and technology is NaN, then technology should be set to Other or unspecified technology.
- If energy_source_level_2 is Wind and technology is NaN, then technology should be set to Onshore since France has no offshore wind farms.
- If energy_source_level_2 is Hydro and technology is Lake or Closed, then technology should be set to Other or unspecified technology.
- If energy_source_level_2 is Solar and technology is Thermodynamic, then technology should be set to Other or unspecified technology.
- If energy_source_level_2 is Other and technology is Photovoltaics, then energy_source_level_2 should be set to Solar.
End of explanation
FR_re_df[['energy_source_level_2', 'energy_source_level_3']].drop_duplicates()
Explanation: Let us now deal with the third level of the energy hierarchy. Only Bioenergy has the third level. Information on it can be found in the column energy_source_level_3 (whose original name was combustible).
End of explanation
index = (pd.isna(FR_re_df['energy_source_level_3']) & \
(FR_re_df['energy_source_level_2'] == 'Bioenergy'))
FR_re_df.loc[index, 'energy_source_level_3'] = 'Other or unspecified'
index = FR_re_df['energy_source_level_3'] == 'Wood'
FR_re_df.loc[index, 'energy_source_level_3'] = 'Biomass and biogas'
Explanation: We see that only the following two corrections are needed:
- If energy_source_level_3 is Wood, set energy_source_level_3 to Biomass and biogas.
- If energy_source_level_3 is NaN, and energy_source_level_2 is Bioenergy, set energy_source_level_3 to Other or unspecified.
End of explanation
# Assign energy_source_level_1 to the dataframe
FR_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
FR_re_df[energy_columns].drop_duplicates()
Explanation: Finally, we declare all the plants as renewable and show the final hierarchy.
End of explanation
# Get the local path of the downloaded georeferencing data
FR_geo_filepath = filepaths['Opendatasoft']
# Read INSEE Code Data
FR_geo = pd.read_csv(FR_geo_filepath,
sep=';',
header=0,
converters={'Code_postal': str})
# Drop possible duplicates of the same INSEE code
FR_geo.drop_duplicates('INSEE_COM', keep='last', inplace=True)
# create columns for latitude/longitude
lat = []
lon = []
# split in latitude/longitude
for row in FR_geo['Geo Point']:
try:
# Split tuple format
# into the column lat and lon
row = row.lstrip('(').rstrip(')')
lat.append(row.split(',')[0])
lon.append(row.split(',')[1])
except:
# set NAN
lat.append(np.NaN)
lon.append(np.NaN)
# add these columns to the INSEE DataFrame
FR_geo['lat'] = pd.to_numeric(lat)
FR_geo['lon'] = pd.to_numeric(lon)
# Column names of merge key have to be named identically
FR_re_df.rename(columns={'municipality_code': 'INSEE_COM'}, inplace=True)
# Merge longitude and latitude columns by the Code INSEE
FR_re_df = FR_re_df.merge(FR_geo[['INSEE_COM', 'lat', 'lon']],
on=['INSEE_COM'],
how='left')
# Translate Code INSEE column back to municipality_code
FR_re_df.rename(columns={'INSEE_COM': 'municipality_code'}, inplace=True)
Explanation: Georeferencing
First, we will determine the plants' longitude and latitude coordinates, and then assign them their NUTS codes.
Municipality (INSEE) code to lon/lat
End of explanation
#import importlib
#importlib.reload(util.nuts_converter)
#from util.nuts_converter import NUTSConverter
#nuts_converter = NUTSConverter(downloader, eurostat_eu_directory_path)
FR_postcode2nuts_path = filepaths['Eurostat']
FR_re_df = nuts_converter.add_nuts_information(FR_re_df, 'FR', FR_postcode2nuts_path,
lau_name_type='NATIONAL',
closest_approximation=True,
how=['municipality_code', 'latlon'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = FR_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', FR_re_df.shape[0], 'facilities in FR.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = FR_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', FR_re_df.shape[0], 'facilities in FR.')
Explanation: Determine NUTS codes
End of explanation
# Check the facilities without NUTS classification
no_nuts = FR_re_df['nuts_1_region'].isnull()
# Find the masks where some information for finding the proper NUTS code is present
lat_or_lon_present = ~(FR_re_df['lat'].isna() & FR_re_df['lon'].isna())
municipality_code_present = ~(FR_re_df['municipality_code'].isnull())
municipality_name_present = ~(FR_re_df['municipality'].isnull())
# Show the cases where NUTS classification failed even though it shouldn't have
print('1. No NUTS code but latitude/longitude info present')
problematic_lat_lon = FR_re_df[no_nuts & lat_or_lon_present][['lat', 'lon']]
display(problematic_lat_lon)
print('2. No NUTS code but municipality code info present')
problematic_municipality_codes = FR_re_df[no_nuts & municipality_code_present]['municipality_code'].unique()
display(problematic_municipality_codes)
print('3. No NUTS code but municipality name info present')
problematic_municipality_names = FR_re_df[no_nuts & municipality_name_present]['municipality'].unique()
display(problematic_municipality_names)
Explanation: Let us now check the facilities without NUTS classification.
End of explanation
# Check if any problematic code is actually present in the translation table
present_any = False
for code in problematic_municipality_codes:
mask = nuts_converter.municipality2nuts_df['municipality_code'].str.match(code)
present_any = present_any or mask.any()
print(present_any)
Explanation: We see that no row with known longitude and latitude was left unclassified.
What we also see is that some municipality codes did not translate to the corresponding NUTS codes. Further inspection shows that those codes are not present in the official NUTS translation tables.
End of explanation
# Print only those problematic municipality names which appear in the translation table exactly once
# (an empty output confirms that each problematic name is either missing from the table or ambiguous).
for name in problematic_municipality_names:
mask = nuts_converter.municipality2nuts_df['municipality'].str.match(name)
if mask.sum() == 1:
print(name)
Explanation: We also see that problematic municipality names are either not present in the official translation tables or more than one municipality in the tables bears them.
End of explanation
FR_re_df['electrical_capacity'] = FR_re_df['electrical_capacity'] / 1000
Explanation: Therefore, we can confirm that NUTS classification codes were determined with the highest precision possible.
Convert electrical capacity to MW
End of explanation
# Load the data
FR_re_filepath = filepaths['gouv.fr']
FR_re_df_old = pd.read_excel(FR_re_filepath,
sheet_name='Commune',
encoding='UTF8',
thousands='.',
                             decimal=',',
header=[3, 4],
skipfooter=9, # skip the summary rows
index_col=[0, 1], # required for MultiIndex
converters={'Code officiel géographique': str})
FR_re_df_old.tail()
Explanation: Old data
End of explanation
# Rearrange data
FR_re_df_old.index.rename(['insee_com', 'municipality'], inplace=True)
FR_re_df_old.columns.rename(['energy_source_level_2', None], inplace=True)
FR_re_df_old = (FR_re_df_old
.stack(level='energy_source_level_2', dropna=False)
.reset_index(drop=False))
# Choose the translation terms for France, create dictionary and show dictionary
idx_FR = columnnames[(columnnames['country'] == 'FR') & (columnnames['data_source'] == 'gouv.fr')].index
column_dict_FR = columnnames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
column_dict_FR
# Translate columnnames
FR_re_df_old.rename(columns=column_dict_FR, inplace=True)
# Drop all rows that contain NA
FR_re_df_old = FR_re_df_old.dropna()
FR_re_df_old.head(10)
Explanation: This French data source contains the number of installations and the sum of installed capacity per energy source and municipality. The list is limited to the plants covered by article 10 of the law of February 2000, i.e. those with a purchase-commitment agreement.
End of explanation
FR_re_df_old['data_source'] = 'Ministry for the Ecological and Inclusive Transition'
FR_re_df_old['as_of_year'] = 2017 # Year for which the dataset has been compiled by the data source
Explanation: Add data source
End of explanation
# Choose the translation terms for France, create dictionary and show dictionary
idx_FR = valuenames[(valuenames['country'] == 'FR') & (valuenames['data_source'] == 'gouv.fr')].index
value_dict_FR = valuenames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
value_dict_FR
# Replace all original value names by the OPSD value names
FR_re_df_old.replace(value_dict_FR, inplace=True)
Explanation: Translate values and harmonize energy source level 2
Kept secret if number of installations < 3
If the number of installations is less than 3, it is marked with an s instead of the number 1 or 2 due to statistical confidentiality (as explained by the data provider). Here, the s is changed to < 3. This is done in the same step as the other value translations of the energy sources.
End of explanation
energy_source_dict_FR = valuenames.loc[idx_FR].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
display(energy_source_dict_FR)
display(FR_re_df_old[['energy_source_level_2']].drop_duplicates())
(FR_re_df_old['energy_source_level_2'].replace(energy_source_dict_FR).unique())
# Create dictionnary in order to assign energy_source to its subtype
energy_source_dict_FR = valuenames.loc[idx_FR].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
# Column energy_source partly contains subtype information, thus this column is copied
# to new column for energy_source_subtype.
FR_re_df_old['technology'] = FR_re_df_old['energy_source_level_2']
# Only Photovoltaics should be kept as technology. Hydro should be changed to 'Other or unspecified technology',
# Geothermal to NaN, and Wind to Onshore.
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Solar' : 'Photovoltaics',
'Wind': 'Onshore',
'Hydro': 'Other or unspecified technology',
'Geothermal': np.nan
}
FR_re_df_old['technology'].replace(technology_translation_dictionary, inplace=True)
# The energy source subtype values in the energy_source column are replaced by
# the higher level classification
FR_re_df_old['energy_source_level_2'].replace(energy_source_dict_FR, inplace=True)
# Assign energy_source_level_1 to the dataframe
FR_re_df_old['energy_source_level_1'] = 'Renewable energy'
FR_re_df_old.reset_index(drop=True, inplace=True)
# Choose energy source level 2 entries where energy source level 2 is Bioenergy in order to
# seperate Bioenergy subtypes to energy source level 3 and subtypes for the rest to technology
idx_FR_Bioenergy = FR_re_df_old[FR_re_df_old['energy_source_level_2'] == 'Bioenergy'].index
# Assign technology to energy source level 3 for all entries where energy source level 2 is
# Bioenergy and delete those entries from technology
FR_re_df_old[['energy_source_level_3']] = FR_re_df_old.iloc[idx_FR_Bioenergy][['technology']]
FR_re_df_old.loc[idx_FR_Bioenergy,'technology'] = np.nan
Explanation: Separate and assign energy source level 1-3 and technology
End of explanation
FR_re_df_old[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
Explanation: Show the hierarchy of the energy types present in the data.
End of explanation
# Column names of merge key have to be named identically
FR_re_df_old.rename(columns={'municipality_code': 'INSEE_COM'}, inplace=True)
# Merge longitude and latitude columns by the Code INSEE
FR_re_df_old = FR_re_df_old.merge(FR_geo[['INSEE_COM', 'lat', 'lon']],
on=['INSEE_COM'],
how='left')
# Translate Code INSEE column back to municipality_code
FR_re_df_old.rename(columns={'INSEE_COM': 'municipality_code'}, inplace=True)
Explanation: Georeferencing
Municipality (INSEE) code to lat/lon
End of explanation
FR_postcode2nuts_path = filepaths['Eurostat']
FR_re_df_old = nuts_converter.add_nuts_information(FR_re_df_old, 'FR', FR_postcode2nuts_path,
how=['municipality_code', 'latlon'])
# how=['municipality', 'municipality_code', 'latlon']
# Report the number of facilities whose NUTS codes were successfully determined
determined = FR_re_df_old['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', FR_re_df_old.shape[0], 'facilities in FR.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = FR_re_df_old['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', FR_re_df_old.shape[0], 'facilities in FR.')
# Show the facilities without NUTS classification
FR_re_df_old[FR_re_df_old['nuts_1_region'].isnull()]
Explanation: Determine NUTS codes
End of explanation
# For each column present in the new data's column space, but not the old,
# add an empty column to the old data.
for new_column in FR_re_df.columns:
    if new_column not in FR_re_df_old.columns:
FR_re_df_old[new_column] = np.nan
# Define the mask to select the municipalities from the old data, that are not covered
# by the new.
not_included = ~(FR_re_df_old['municipality_code'].isin(FR_re_df['municipality_code']))
FR_re_df_old[not_included]
# Add a dummy column to the new data frame
# representing the number of power plants (always 1)
FR_re_df['number_of_installations'] = 1
# Mark the old data rows as aggregations on municipality level.
FR_re_df_old['site_name'] = 'Aggregated data for ' + FR_re_df_old['municipality']
# Concatenate the new data with the old rows referring to the municipalities
# not covered by the new.
FR_re_df = pd.concat([FR_re_df, FR_re_df_old[not_included]], ignore_index=True, axis='index', sort=True)
Explanation: As we can see, the NUTS codes were determined successfully for all the facilities in the dataset.
Integrate old and new data
Some municipalities are not covered by the new data set, provided by ODRE. Now, we find those municipalities and integrate them with the new data.
The only column present in the old data, but not in the new, is number_of_installations. Since the old data
were aggregated on the municipality level, the column in question refers to the numbers of power plants in the
municipalities. Since the new data covers individual plants, if we set the column number_of_installations to 1
for all the plants in the new data, we will make the two sets consistent with one another and be able
to concatenate them.
We will set site_name to 'Aggregated data for municipality' for all the rows from the old data, where municipality refers to the name of the municipality for which the row has been compiled.
Note: the electrical capacity in the old data is already in MW, so conversion is not needed.
End of explanation
columns_to_keep = ['EIC_code', 'municipality_group_code', 'IRIS_code', 'as_of_year',
'commissioning_date', 'connection_date', 'data_source', 'departement',
'departement_code', 'disconnection_date',
'electrical_capacity', 'energy_source_level_1', 'energy_source_level_2',
'energy_source_level_3', 'lat', 'lon',
'municipality', 'municipality_code',
'municipality_group', 'number_of_installations', 'nuts_1_region',
'nuts_2_region', 'nuts_3_region', 'region', 'region_code', 'site_name',
'source_station_code', 'technology']
FR_re_df = FR_re_df[columns_to_keep]
FR_re_df.reset_index(drop=True, inplace=True)
Explanation: Select the columns
Now, we select the columns we want to keep.
End of explanation
visualize_points(FR_re_df['lat'],
FR_re_df['lon'],
'France',
categories=FR_re_df['energy_source_level_2']
)
Explanation: Visualize
End of explanation
FR_re_df.to_pickle('intermediate/FR_renewables.pickle')
del FR_re_df
Explanation: Save
End of explanation
# Download the data
filepaths = downloader.download_data_for_country('PL')
# Get the local paths to the data files
PL_re_filepath = filepaths['Urzad Regulacji Energetyki']
PL_postcode2nuts_filepath = filepaths['Eurostat']
PL_geo_filepath = filepaths['Geonames']
Explanation: Poland PL
Download
The data which will be processed below is provided by the following data source:
Urzad Regulacji Energetyki (URE) / Energy Regulatory Office - Installed capacities of renewable-energy power plants in Poland. The plants are anonymized in the sense that no names, post codes or geographical coordinates are present. They are described by: the energy type their use, installed capacity, województwo (province) and powiat (district) that they are located in.
End of explanation
# Read the data into a pandas dataframe
PL_re_df = pd.read_excel(PL_re_filepath,
encoding='latin',
header=2,
skipfooter=14
)
# Show 5 random rows
PL_re_df.sample(n=5)
Explanation: Load and explore the data
The dataset comes as an Excel (.xlsx) file. Let us open it, inspect its columns and clean it a bit before processing it further.
End of explanation
# Get the mask for selecting the WS plants
ws_mask = PL_re_df['Rodzaj_OZE'] == 'WS'
# Drop them
print('Dropping', ws_mask.sum(), 'out of', PL_re_df.shape[0], 'power plants.')
PL_re_df.drop(PL_re_df.index[ws_mask], axis=0, inplace=True)
PL_re_df.reset_index(drop=True, inplace=True)
Explanation: There are only five columns:
- Lp.: the ordinal number of the entry (power plant), effectively serving as its identification number.
- Województwo: the province (voivodeship) where the plant is located
- Powiat: the district where the plant is located
- Rodzaj_OZE: the code of the energy type the plant uses. According to the legend in the .xlsx file, the codes are as follows:
- BG: biogas
- BM: biomass
- PVA: solar energy
- WIL: wind energy
- WO: hydroenergy
- WS: using the technology of co-firing biomass, biogas or bioliquids with other fuels (fossil fuels and biomass / biogas / bioliquids)
- Moc zainstalowana [MW]: installed capacity (in MWs).
The type corresponding to WS does not fit into the OPSD energy hierarchy, so we can drop such plants.
End of explanation
# Choose the translation terms for Poland, create and show the dictionary
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_PL = columnnames[(columnnames['country'] == 'PL') &
(columnnames['data_source'] == 'Urzad Regulacji Energetyki')].index
column_dict_PL = columnnames.loc[idx_PL].set_index('original_name')['opsd_name'].to_dict()
column_dict_PL
# Translate column names
PL_re_df.rename(columns=column_dict_PL, inplace=True)
# Show a couple of rows
PL_re_df.head(2)
Explanation: To ease the work, we can translate the columns' names to English using the OPSD translation tables.
End of explanation
print('The number of missing values in the data:', PL_re_df.isna().sum().sum())
print('Are all capacities proper numbers?', PL_re_df['electrical_capacity'].dtype == 'float64')
print('What about the energy codes?', PL_re_df['energy_type'].unique())
# Check the voivodeships
print('Show the names of the voivodeships.')
PL_re_df['region'].unique()
Explanation: Inspect the data
Let us do a few quick checks to see the state of the data:
- Are there any NA values?
- Are all the values in the column electrical_capacity proper numbers?
- Are all the values in the column energy_type (codes of energy types) consistent strings? Here we check if all the codes appear in one and only one form. For example, PVA is the code for solar energy and we want to make sure that only PVA appears in the column, not other variations such as pva, Pva etc.
- What is the form of the geographical data? Are some districts represented by different strings in different rows? What about the regions (provinces, województwa, voivodeships)?
We will need the answers to those questions to know how to proceed with processing.
End of explanation
PL_re_df['region'] = PL_re_df['region'].str.strip().str.capitalize()
PL_re_df['region'].unique()
Explanation: We can see that each name comes in two forms: (1) with the first letter capital and (2) with the first letter lowercase. One province is referred to by three different strings: 'Śląskie', 'śląskie', and 'śląskie ' (the last with a trailing white space). In order to standardize this column, we trim and capitalize all the strings appearing in it.
End of explanation
districts = PL_re_df['district'].unique()
districts.sort()
districts
Explanation: Now, let us check the strings for districts (powiats).
End of explanation
# Correct the typos
PL_re_df.loc[PL_re_df['district'] == 'lipowski', 'district'] = 'lipnowski'
PL_re_df.loc[PL_re_df['district'] == 'hojnowski', 'district'] = 'hajnowski'
Explanation: As we see in the list, the same district can be referred to by more than one string. We identify the following ways a district is referred to in the dataset:
1. by using the noun in the nominative case, capitalized (e.g. Kraków),
2. by prepending m. or m. st. to the form 1 (e.g. m. Kraków or m. st. Warszawy) and
3. by the possesive adjective, lowercase (e.g. krakowski).
Some districts, such as Krakow, appear in all the three forms, but there are those which appear in two (e.g. Bytom and m. Bytom). This will pose a problem when we later try to assign the plants their NUTS codes. Furthermore, the NUTS translation tables do not map districts to the codes, but lower administrative units (municipalities) and postcodes to NUTS. We solve this issue at a later point in the notebook, Section Georeferencing (NUTS classification), and not here as it requires heavier processing than warranted during initial explorative analysis and lightweight cleaning of the data.
We note that the districts lipowski and hojnowski are misspelled, as they should actually be lipnowski and hajnowski, so we can correct the typos now.
End of explanation
# Choose the translation terms for Poland, create dictionary
idx_PL = valuenames[valuenames['country'] == 'PL'].index
value_dict_PL = valuenames.loc[idx_PL].set_index('original_name')['opsd_name'].to_dict()
# Set energy source level 3
PL_re_df['energy_source_level_3'] = PL_re_df['energy_type'].replace(value_dict_PL)
# Create dictionnary in order to assign energy_source_level_2 to its subtype
idx_PL = valuenames[valuenames['country'] == 'PL'].index
energy_source_dict_PL = valuenames.loc[idx_PL].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
PL_re_df['energy_source_level_2'] = PL_re_df['energy_type'].replace(energy_source_dict_PL)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'BG': np.nan,
'BM': np.nan,
'PVA': 'Other or unspecified technology', # Photovoltaics?
'WIL': 'Other or unspecified technology', # Onshore?
'WO': 'Other or unspecified technology', # Run-of-river
}
PL_re_df['technology'] = PL_re_df['energy_type'].replace(technology_translation_dictionary)
# Add energy_source_level_1
PL_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
PL_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates().sort_values(by='energy_source_level_2')
Explanation: Harmonising energy levels
End of explanation
# Define the function to standardize district names from the original data
def standardize_districts(original_string):
if original_string[-1] == ',': # there is one district whose name ends with ','; that's a typo in the data
original_string = original_string[:-1]
if original_string.startswith('m. st. '):
return original_string[7:]
elif original_string.startswith('m. '):
return original_string[3:]
elif any([original_string.endswith(suffix) for suffix in ['ski', 'cki', 'zki']]):
return 'Powiat ' + original_string
else:
return original_string
# Get geo-information
zip_PL_geo = zipfile.ZipFile(PL_geo_filepath)
# Read generated postcode/location file
PL_geo = pd.read_csv(zip_PL_geo.open('PL.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
PL_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
PL_geo.drop_duplicates('postcode', keep='last', inplace=True)
PL_geo['postcode'] = PL_geo['postcode'].astype(str)
# Get the names
geonames_districts = PL_geo['admin_name2'].unique()
# Show them
geonames_districts
# Standardize the district names from the original data
PL_re_df['standardized_district'] = PL_re_df['district'].apply(standardize_districts)
standardized_districts = PL_re_df['standardized_district'].unique()
# Check which districts could not be found in the GeoNames data
#print(len([x for x in semi if x in geopowiats]), len([x for x in semi if x not in geopowiats]))
not_found = set(standardized_districts).difference(set(geonames_districts))
number_of_not_found = len(not_found)
total = len(standardized_districts)
print('{}/{} names could not be found. Those are:'.format(number_of_not_found, total))
print(not_found)
Explanation: Georeferencing (NUTS classification)
We have already seen that the district names are not standardized and observed that we cannot use them directly to get the corresponding NUTS codes.
There is a way to get around this issue. We can do it as follows:
1. First, we find a postcode in the GeoNames zip for Poland that corresponds to each district in the URE data. To do so, we must standardize all the district names to the forms that appear in the GeoNames zip file.
2. Then, we can easily map a postcode to the appropriate NUTS codes using nuts_converter.
By inspection, we observe that all the district names in the zip have one of the following two forms:
- Noun in the nominative case, capitalized.
- Powiat * where * is a possessive adjective.
So, we standardize all the strings in the district column as follows:
- Remove all the trailing whitespaces and characters other than letters.
- If the string starts with m. or m. st., remove m. (or m. st.) from the beginning of the string.
- If the string ends with a possessive suffix ski, cki or zki, prepend the string Powiat (note the ending whitespace) to it.
End of explanation
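To make the rules concrete, here is how the standardization function defined above behaves on a few illustrative names (the names are examples, not necessarily taken from the dataset):
for example_name in ['m. st. Warszawa', 'm. Kraków', 'krakowski', 'Bytom']:
    print(example_name, '->', standardize_districts(example_name))
# Expected output, following the rules above:
#   m. st. Warszawa -> Warszawa
#   m. Kraków -> Kraków
#   krakowski -> Powiat krakowski
#   Bytom -> Bytom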
# We define the similarity between two strings, string1 and string2,
# as the length of the longest prefix of string1 that appears in string2.
# Note 1: this measure of similarity is not necessarily symmetrical.
# Note 2: a prefix of a string is its substring that starts from the beginning of the string.
def calculate_similarity(string1, string2):
for n in range(len(string1), 1, -1):
prefix = string1[0:(n-1)]
if prefix in string2:
return len(prefix)
return 0
# Define a function to find, among a group of candidate strings,
# the most similar string to the one given as the reference string.
def find_the_most_similar(reference_string, candidate_strings):
the_most_similar = None
maximal_similarity = 0
for candidate_string in candidate_strings:
similarity = calculate_similarity(reference_string, candidate_string)
if similarity > maximal_similarity:
maximal_similarity = similarity
the_most_similar = candidate_string
return the_most_similar, maximal_similarity
already_mapped = PL_re_df[['district', 'standardized_district']].drop_duplicates().to_dict(orient='records')
already_mapped = {mapping['district'] : mapping['standardized_district'] for mapping in already_mapped
if mapping['standardized_district'] in geonames_districts}
# Make a dictionary to map each district from the original data to its GeoNames equivalent.
# The districts whose standardized versions have been found in the GeoNames data to their standardizations.
# The mappings for other districts will be found using the previously defined similarity measures.
districts_map = PL_re_df[['district', 'standardized_district']].drop_duplicates().to_dict(orient='records')
districts_map = {mapping['district'] : mapping['standardized_district'] for mapping in districts_map}
# Override the mappings for the 49 districts whose standardized names have not been found in the GeoNames data.
for district, standardized_district in districts_map.items():
#standardized_district = ['standardized_district']
if standardized_district not in geonames_districts:
#print('---------')
if standardized_district.startswith('Powiat'):
standardized_district = standardized_district[7:]
#print(district)
capitalized = standardized_district.capitalize()
lowercase = standardized_district.lower()
candidate1, similarity1 = find_the_most_similar(capitalized, geonames_districts)
candidate2, similarity2 = find_the_most_similar(lowercase, geonames_districts)
if similarity1 > similarity2:
districts_map[district] = candidate1
#print('\t', candidate1, similarity1)
elif similarity2 > similarity1:
districts_map[district] = candidate2
#print('\t', candidate2, similarity2)
else:
# Break the ties by mapping to the shorter string
if len(candidate1) < len(candidate2):
districts_map[district] = candidate1
#print('\t', candidate1, '|', candidate2, similarity1)
else:
districts_map[district] = candidate2
#print('\t', candidate2, '|', candidate1, similarity2)
# Apply the override to PL_re_df
PL_re_df['standardized_district'] = PL_re_df['district'].apply(lambda district: districts_map[district])
# Show the results
PL_re_df[['district', 'standardized_district']].drop_duplicates()
Explanation: We can now apply a heuristic method for finding the corresponding name in the GeoNames data. It is based on similarity between strings. It turns out that it works fine, except for a couple of cases, which we deal with manually.
End of explanation
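For intuition, a quick illustration of how the prefix-based similarity defined above behaves on two hypothetical candidate strings:
print(calculate_similarity('wejherowski', 'Powiat wejherowski'))  # 10: the prefix 'wejherowsk' occurs in the candidate
print(calculate_similarity('wejherowski', 'Powiat olsztyński'))   # 1: only the one-letter prefix 'w' is found
print(find_the_most_similar('wejherowski', ['Powiat wejherowski', 'Powiat olsztyński']))
# ('Powiat wejherowski', 10)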
# Clear the mappings for wołowski, Nowy Sącz, rzeszowski, hojnowski.
for district in ['wołowski', 'm. Nowy Sącz', 'rzeszowski', 'hojnowski']:
districts_map[district] = ''
PL_re_df.loc[PL_re_df['district'] == district, 'standardized_district'] = ''
# For each mapping, select a postcode from the GeoNames data
df_dict = {'original' : [], 'geonames' : []}
for original_name in districts_map:
geonames_name = districts_map[original_name]
df_dict['original'].append(original_name)
df_dict['geonames'].append(geonames_name)
mapping_df = pd.DataFrame.from_dict(df_dict)
# To make sure that the selected postcodes do appear in the NUTS table,
# we drop, from PL_geo, all rows with the postcodes not in the postcode-to-NUTS table for Poland.
PL_table = nuts_converter.open_postcode2nuts(filepaths['Eurostat'])['CODE']
PL_geo = pd.merge(PL_geo, PL_table, how='inner', left_on='postcode', right_on='CODE')
PL_geo.drop(['CODE'], axis='columns', inplace=True)
#
merged = pd.merge(mapping_df,
PL_geo[['admin_name2', 'postcode']],
how='left',
left_on='geonames',
right_on='admin_name2')
# Rename the column postcode to make its meaning straightforward
merged.rename(columns={'postcode' : 'random_postcode'}, inplace=True)
merged = merged.drop_duplicates(['geonames'])
print(PL_re_df.shape)
PL_re_df = pd.merge(PL_re_df,
merged[['geonames', 'random_postcode']],
how='left',
left_on='standardized_district',
right_on='geonames')
# Show results
PL_re_df.head(2)
Explanation: The following districts have not been mapped correctly: wołowski, m. Nowy Sącz and rzeszowski. Let us clear their mappings so that we can assign them their NUTS codes manually later.
End of explanation
display(PL_re_df[PL_re_df['random_postcode'].isnull()])
PL_re_df['random_postcode'].isnull().sum()
Explanation: Show the rows for which we could not find postcodes.
End of explanation
PL_postcode2nuts_path = filepaths['Eurostat']
PL_re_df = nuts_converter.add_nuts_information(PL_re_df, 'PL', PL_postcode2nuts_path,
postcode_column='random_postcode', how=['postcode'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = PL_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', PL_re_df.shape[0], 'facilities in PL.')
# Manual assignments
manual_nuts3_map = {
'wołowski' : 'PL518',
'm. Nowy Sącz' : 'PL218',
'rzeszowski' : 'PL325'
}
for district in manual_nuts3_map:
nuts3 = manual_nuts3_map[district]
nuts2 = nuts3[:-1]
nuts1 = nuts3[:-2]
mask = (PL_re_df['district'] == district)
PL_re_df.loc[mask, ['nuts_1_region', 'nuts_2_region', 'nuts_3_region']] = [nuts1, nuts2, nuts3]
# Report the number of facilities whose NUTS codes could not be determined
not_determined = PL_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', PL_re_df.shape[0], 'facilities in PL.')
Explanation: There are only 17 such power plants and all of them are placed in the districts which we deliberately left out for manual classification.
Add NUTS information
We add the NUTS information as usual, using the converter. After that, we manually add the codes for the left-out districts as follows:
| District | NUTS_1 | NUTS_2 | NUTS_3 |
|----------|--------|--------|--------|
| wołowski | PL5 | PL51 | PL518 |
| m. Nowy Sącz | PL2 | PL21 | PL218 |
| rzeszowski | PL3 | PL32 | PL325 |
End of explanation
PL_re_df['data_source'] = 'Urzad Regulacji Energetyki'
PL_re_df['as_of_year'] = 2019 # The year for which the dataset has been compiled by the data source
Explanation: Add data source and year
End of explanation
# Choose which column to keep
PL_re_df = PL_re_df.loc[:, [ 'URE_id', 'region', 'district',
'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'electrical_capacity',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3',
'technology',
'data_source', 'as_of_year']]
Explanation: Select columns
End of explanation
PL_re_df.to_pickle('intermediate/PL_renewables.pickle')
del PL_re_df
Explanation: Save
End of explanation
# Download the data and get the local paths of the downloaded files
filepaths = downloader.download_data_for_country('CH')
CH_re_filepath = filepaths['BFE']
CH_geo_filepath = filepaths['Geonames']
CH_postcode2nuts_filepath = filepaths['Eurostat']
# Get data of renewables per municipality
CH_re_df = pd.read_excel(CH_re_filepath,
sheet_name='KEV Bezüger 2018',
encoding='UTF8',
thousands='.',
decimals=','
#header=[0]
#skipfooter=9, # contains summarized values
#index_col=[0, 1], # required for MultiIndex
#converters={'Code officiel géographique':str}
)
Explanation: Switzerland CH
Download and read
The data which will be processed below is provided by the following data sources:
Swiss Federal Office of Energy - Data of all renewable power plants receiving "Kostendeckende Einspeisevergütung" (KEV), which is the Swiss feed-in tariff for renewable power plants.
Geodata is based on municipality codes.
The available municipality code in the original data provides an approximation for the geocoordinates of the renewable power plants. The postcode will be assigned to latitude and longitude coordinates with the help of the postcode table.
geonames.org - The postcode data from Switzerland is provided by Geonames and licensed under a Creative Commons Attribution 3.0 license.
End of explanation
# Choose the translation terms for Switzerland, create dictionary and show dictionary
idx_CH = columnnames[columnnames['country'] == 'CH'].index
column_dict_CH = columnnames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
column_dict_CH
# Translate columnnames
CH_re_df.columns = [column_name.replace("\n", "") for column_name in CH_re_df.columns]
CH_re_df.rename(columns=column_dict_CH, inplace=True)
Explanation: Translate column names
End of explanation
CH_re_df['data_source'] = 'BFE'
Explanation: Add data source
End of explanation
# Choose the translation terms for Switzerland, create dictionary
idx_CH = valuenames[valuenames['country'] == 'CH'].index
value_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
Explanation: Harmonize energy source hierarchy and translate values
End of explanation
# Assign energy_source_level_1 to the dataframe
CH_re_df['energy_source_level_1'] = 'Renewable energy'
# Create dictionnary in order to assign energy_source to its subtype
#energy_source_dict_CH = valuenames.loc[idx_CH].set_index('opsd_name')['energy_source_level_2'].to_dict()
#
# ...and the energy source subtype values in the energy_source column are replaced by
# the higher level classification
#CH_re_df['energy_source_level_2'].replace(energy_source_dict_CH, inplace=True)
CH_re_df['energy_source_level_3'] = CH_re_df['technology']
# Create dictionnary in order to assign energy_source_level_2 to its subtype
idx_CH = valuenames[valuenames['country'] == 'CH'].index
energy_source_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
CH_re_df['energy_source_level_2'] = CH_re_df['energy_source_level_2'].replace(energy_source_dict_CH)
# Translate values in order to standardize energy_source_level_3
value_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
CH_re_df['energy_source_level_3'].replace(value_dict_CH, inplace=True)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Klärgasanlage': np.nan,
'Dampfprozess': 'Steam turbine',
'übrige Biomasse - WKK-Anlage': 'Other or unspecified technology',
'übrige Biomasse - Dampfprozess': 'Steam turbine',
'Schlammverbrennungsanlage': 'Combustion engine',
'WKK-Prozess': 'Other or unspecified technology',
'Kehrrichtverbrennungsanlage': 'Combustion engine',
'Integrierte Anlage': 'Photovoltaics',
'Angebaute Anlage': 'Photovoltaics',
'Freistehende Anlage': 'Photovoltaics',
'Trinkwasserkraftwerk': 'Other or unspecified technology',
'Durchlaufkraftwerk': 'Run-of-river',
'Dotierwasserkraftwerk': 'Other or unspecified technology',
'Ausleitkraftwerk': 'Other or unspecified technology',
'Wind Offshore': 'Other or unspecified technology',
'Abwasserkraftwerk': 'Other or unspecified technology',
'Unbekannt': 'Other or unspecified technology',
np.nan: 'Onshore',
None: 'Onshore'
}
CH_re_df['technology'].replace(technology_translation_dictionary, inplace=True)
# Add energy_source_level_1
CH_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
CH_re_df[energy_columns].drop_duplicates().sort_values(by='energy_source_level_2')
Explanation: Separate and assign energy source level 1-3 and technology
End of explanation
drop_mask = (CH_re_df['energy_source_level_3'] == 'Biomass and biogas') & \
(CH_re_df['technology'] == 'Steam turbine')
drop_indices = drop_mask[drop_mask].index
CH_re_df.drop(drop_indices, axis='index', inplace=True)
CH_re_df.reset_index(drop=True, inplace=True)
Explanation: The power plants with energy_source_level_3=Biomass and biogas and technology=Steam turbine do not belong to the renewable energy power plants, so we can remove them.
End of explanation
CH_re_df.replace(value_dict_CH, inplace=True)
Explanation: Replace the rest of the original terms with their OPSD equivalents
End of explanation
# Get geo-information
zip_CH_geo = zipfile.ZipFile(CH_geo_filepath)
# Read generated postcode/location file
CH_geo = pd.read_csv(zip_CH_geo.open('CH.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
CH_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
CH_geo.drop_duplicates('postcode', keep='last', inplace=True)
CH_geo['postcode'] = CH_geo['postcode'].astype(str)
# harmonise data class
CH_geo.postcode = CH_geo.postcode.astype(int)
# Add longitude/latitude infomation assigned by municipality code
CH_re_df = pd.merge(CH_re_df,
CH_geo[['lat', 'lon', 'postcode']],
left_on='municipality_code',
right_on='postcode',
how='left'
)
zip_CH_geo.close()
Explanation: Georeferencing
Postcode to lat/lon (WGS84)
End of explanation
CH_postcode2nuts_path = filepaths['Eurostat']
# Use the string versions of postcode and municipality code columns
CH_re_df['postcode_str'] = CH_re_df['postcode'].astype(str).str[:-2]
CH_re_df['municipality_code_str'] = CH_re_df['municipality_code'].astype(str)
CH_re_df = nuts_converter.add_nuts_information(CH_re_df, 'CH', CH_postcode2nuts_path,
postcode_column='postcode_str',
municipality_code_column='municipality_code_str',
lau_name_type='NATIONAL', how=['postcode', 'municipality'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = CH_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', CH_re_df.shape[0], 'facilities in CH.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = CH_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', CH_re_df.shape[0], 'facilities in CH.')
Explanation: Add NUTS information
End of explanation
CH_re_df[CH_re_df['nuts_1_region'].isnull()][['postcode', 'municipality']]
# Check the facilities without NUTS classification
no_nuts = CH_re_df['nuts_1_region'].isnull()
# Find the masks where some information for finding the proper NUTS code is present
municipality_name_present = ~(CH_re_df['municipality'].isnull())
# Show the cases where NUTS classification failed even though it shouldn't have
problematic_municipality_names = CH_re_df[no_nuts & municipality_name_present]['municipality'].unique()
print('Problematic municipalities:', ', '.join(list(problematic_municipality_names)) + '.')
print('Are those names present in the official NUTS tables for CH?')
if nuts_converter.municipality2nuts_df['municipality'].isin(problematic_municipality_names).any():
print('At least one is.')
else:
print('No, none is.')
Explanation: Let us check the stations for which NUTS codes could not be determined.
End of explanation
# kW to MW
CH_re_df['electrical_capacity'] /= 1000
# kWh to MWh
CH_re_df['production'] /= 1000
Explanation: We see that the municipalities of the only plants lacking NUTS codes do not appear in the official translation tables, so there was no way to assign them their NUTS classification codes.
Transform electrical_capacity from kW to MW
End of explanation
columns_to_keep = ['project_name', 'energy_source_level_2','energy_source_level_3', 'technology',
'electrical_capacity', 'production', 'tariff', 'commissioning_date', 'contract_period_end',
'address', 'municipality_code', 'municipality', 'nuts_1_region', 'nuts_2_region',
'nuts_3_region', 'canton', 'company', 'title', 'surname', 'first_name', 'data_source',
'energy_source_level_1', 'lat', 'lon', 'postcode']
CH_re_df = CH_re_df.loc[:, columns_to_keep]
CH_re_df.reset_index(drop=True, inplace=True)
Explanation: Select columns to keep
End of explanation
visualize_points(CH_re_df['lat'],
CH_re_df['lon'],
'Switzerland',
categories=CH_re_df['energy_source_level_2']
)
Explanation: Visualize
End of explanation
CH_re_df.to_pickle('intermediate/CH_renewables.pickle')
del CH_re_df
Explanation: Save
End of explanation
# Download the data and get the local paths to the corresponding files
filepaths = downloader.download_data_for_country('UK')
UK_re_filepath = filepaths['BEIS']
UK_geo_filepath = filepaths['Geonames']
UK_postcode2nuts_filepath = filepaths['Eurostat']
# Read the renewable powerplants data into a dataframe
UK_re_df = pd.read_csv(UK_re_filepath,
header=2,
encoding='latin1',
parse_dates=['Record Last Updated (dd/mm/yyyy)','Operational'],
infer_datetime_format=True,
thousands=','
)
# Drop empty columns and rows
UK_re_df.dropna(axis='index', how='all', inplace=True)
UK_re_df.dropna(axis='columns', how='all', inplace=True)
Explanation: Check and validation of the renewable power plants list as well as the creation of CSV/XLSX/SQLite files can be found in Part 2 of this script. It also generates a daily time series of cumulated installed capacities by energy source.
United Kingdom UK
The data for the UK are provided by the following sources:
UK Government Department of Business, Energy & Industrial Strategy (BEIS) - the data contain information on the UK renewable energy sources and are updated at the end of each quarter.
geonames.org - the data about latitudes and longitudes of the UK postcodes.
Download and Read
End of explanation
# Keep only operational facilities in the dataset
UK_re_df = UK_re_df.loc[UK_re_df["Development Status"] == "Operational"]
UK_re_df.reset_index(inplace=True, drop=True)
# Standardize string columns
strip_and_lower = ['CHP Enabled']
strip_only = ['Country', 'County', 'Operator (or Applicant)', 'Mounting Type for Solar']
for column in strip_and_lower:
util.helper.standardize_column(UK_re_df, column, lower=True)
for column in strip_only:
util.helper.standardize_column(UK_re_df, column, lower=False)
# Drop Flywheels, Battery and Liquid Air Energy Storage
UK_re_df = UK_re_df[~UK_re_df['Technology Type'].isin(['Flywheels', 'Battery', 'Liquid Air Energy Storage'])]
UK_re_df.reset_index(drop=True, inplace=True)
# Copy the column "Technology Type" to a new column named "technology"
UK_re_df['technology'] = UK_re_df['Technology Type']
Explanation: Clean the data
The downloaded dataset has to be cleaned:
- Both operational and nonoperational facilities are present in the set. However, only operational facilities are of interest, so the dataset has to be filtered on this condition.
- Some columns don't have standardized values. For example, CHP Enabled contains five different strings: "No", "Yes", "no", "yes", and "No " with a trailing white space, even though they represent only two distinct values. So, we have to ensure a 1-to-1 mapping between the true values of a feature and their representations for all the features present in the set.
- The technologies Battery, Flywheels and Liquid Air Energy Storage are of no interest, so the facilities using them should be omitted.
End of explanation
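The exact implementation of the standardization helper lives in util.helper; as a rough sketch of the idea (an assumption about the helper, not its actual code), a per-column normalization could look like this:
def standardize_column_sketch(df, column, lower=False):
    # Strip surrounding whitespace so that e.g. 'No ' and 'No' collapse into one value,
    # and optionally lowercase so that 'Yes' and 'yes' do as well.
    cleaned = df[column].astype(str).str.strip()
    if lower:
        cleaned = cleaned.str.lower()
    # Keep the original missing values instead of the string 'nan' produced by astype(str)
    df[column] = cleaned.where(df[column].notnull(), df[column])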
# Choose the translation terms for the UK and create the translation dictionary
idx_UK = columnnames[columnnames['country'] == 'UK'].index
column_dict_UK = columnnames.loc[idx_UK].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
column_dict_UK
# Translate column names
UK_re_df.rename(columns=column_dict_UK, inplace=True)
Explanation: Translate column names
End of explanation
UK_re_df['data_source'] = 'BEIS'
Explanation: Add data source
End of explanation
# Create dictionnary in order to assign energy_source_level_2 to its subtype
idx_UK = valuenames[valuenames['country'] == 'UK'].index
energy_source_dict_UK = valuenames.loc[idx_UK].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
UK_re_df['energy_source_level_2'] = UK_re_df['energy_source_level_3'].replace(energy_source_dict_UK)
# Translate values in order to standardize energy_source_level_3
value_dict_UK = valuenames.loc[idx_UK].set_index('original_name')['opsd_name'].to_dict()
UK_re_df['energy_source_level_3'].replace(value_dict_UK, inplace=True)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Biomass (co-firing)': 'Other or unspecified technology',
'Biomass (dedicated)': 'Other or unspecified technology',
'Advanced Conversion Technologies': 'Other or unspecified technology',
'Anaerobic Digestion': 'Other or unspecified technology',
'EfW Incineration': np.nan,
'Large Hydro': 'Other or unspecified technology',
'Small Hydro': 'Other or unspecified technology',
'Landfill Gas': np.nan,
'Solar Photovoltaics': 'Photovoltaics',
'Sewage Sludge Digestion': np.nan,
'Tidal Barrage and Tidal Stream': np.nan,
'Shoreline Wave': np.nan,
'Wind Offshore': 'Offshore',
'Wind Onshore': 'Onshore',
'Pumped Storage Hydroelectricity': 'Pumped storage'
}
UK_re_df['technology'].replace(technology_translation_dictionary, inplace=True)
# Add energy_source_level_1
UK_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
UK_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
Explanation: Translate values and harmonise energy source levels 1-3 and technology
End of explanation
# Define a wrapper for bng_to_latlon for handling None values
def to_lat_lon(easting, northing):
if pd.isnull(easting) or pd.isnull(northing):
return (None, None)
else:
return bng_to_latlon.OSGB36toWGS84(easting, northing)
# Convert easting and northing columns to numbers
UK_re_df['X-coordinate'] = pd.to_numeric(
UK_re_df['X-coordinate'].astype(str).str.replace(',', ''),
errors='coerce'
)
UK_re_df['Y-coordinate'] = pd.to_numeric(
UK_re_df['Y-coordinate'].astype(str).str.replace(',', ''),
errors='coerce'
)
# Convert easting and northing coordinates to standard latitude and longitude
latlon = UK_re_df.apply(lambda row: to_lat_lon(row["X-coordinate"], row["Y-coordinate"]),
axis=1
)
# Split a column of (latitude, longitude) pairs into two separate coordinate columns
latitude = latlon.apply(lambda x: x[0])
longitude = latlon.apply(lambda x: x[1])
# Add them to the dataframe
UK_re_df['latitude'] = latitude
UK_re_df['longitude'] = longitude
Explanation: Georeferencing
The facilities' location details comprise the address, county, region, country (England, Scotland, Wales, Northern Ireland), post code, and the Easting (X) and Northing (Y) coordinates of each facility in the OSGB georeferencing system. To convert the easting and northing coordinates to standard WGS84 latitude and longitude, we use the package bng_latlon.
End of explanation
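As a quick sanity check, the converter can be called on a single easting/northing pair (using the module already imported for the cell above); the coordinates below are the Ordnance Survey textbook example point, and the printed values are approximate:
# OSGB36 grid coordinates of the Ordnance Survey worked example (TG 51409 13177).
lat_check, lon_check = bng_to_latlon.OSGB36toWGS84(651409.903, 313177.270)
print(lat_check, lon_check)  # roughly (52.66, 1.72) in WGS84 degrees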
# Get geo-information
zip_UK_geo = zipfile.ZipFile(UK_geo_filepath)
# Read generated postcode/location file
UK_geo = pd.read_csv(zip_UK_geo.open('GB_full.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
UK_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
UK_geo.drop_duplicates('postcode', keep='last', inplace=True)
UK_geo['postcode'] = UK_geo['postcode'].astype(str)
# Find the rows where latitude and longitude are unknown
missing_latlon_mask = UK_re_df['latitude'].isna() | UK_re_df['longitude'].isna()
missing_latlon = UK_re_df[missing_latlon_mask]
# Add longitude/latitude infomation assigned by post code
updated_latlon = pd.merge(missing_latlon,
UK_geo[['lat', 'lon', 'postcode']],
left_on='postcode',
right_on='postcode',
how='left'
)
# Return the updated rows to the original frame
UK_re_df = pd.merge(UK_re_df,
updated_latlon[['uk_beis_id', 'lat', 'lon']],
on='uk_beis_id',
how='left'
)
# Use the bng_to_latlon coordinates (columns: 'latitude' and 'longitude') if present,
# otherwise, use those obtained with UK_geo (columns: 'lat' and 'lon').
UK_re_df['longitude'] = UK_re_df.apply(lambda row: row['longitude'] if not pd.isnull(row['longitude'])
else row['lon'],
axis=1
)
UK_re_df['latitude'] = UK_re_df.apply(lambda row: row['latitude'] if not pd.isnull(row['latitude'])
else row['lat'],
axis=1
)
# Drop the UK_geo columns (lat/lon)
# as the information was moved to the 'latitude' and 'longitude' columns.
UK_re_df.drop(['lat', 'lon'], axis='columns', inplace=True)
zip_UK_geo.close()
Explanation: Cases with unknown Easting and Northing coordinates
If the Easting and Northing coordinates of a facility are not provided, its latitude and longitude cannot be determined. For such sources, we look up the WGS84 coordinates in the geodataset provided by geonames.org, where the UK postcodes are paired with their latitudes and longitudes.
End of explanation
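The row-wise apply calls above implement a simple "keep the converted value if present, otherwise fall back to the postcode-based one" rule; the same pattern can also be written with Series.combine_first, illustrated here on a toy frame rather than the actual data:
import numpy as np
import pandas as pd

toy = pd.DataFrame({'latitude': [51.5, np.nan], 'lat': [np.nan, 53.4]})
# combine_first keeps 'latitude' where it is known and falls back to 'lat' where it is missing.
toy['latitude'] = toy['latitude'].combine_first(toy['lat'])
print(toy['latitude'].tolist())  # [51.5, 53.4]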
# Find the rows where latitude and longitude are unknown
missing_latlon_mask = UK_re_df['latitude'].isna() | UK_re_df['longitude'].isna()
missing_latlon = UK_re_df[missing_latlon_mask].copy()
missing_latlon = missing_latlon.reset_index()
# Determine their post code prefixes
prefixes = missing_latlon.apply(lambda row: str(row['postcode']).split(' ')[0],
axis=1
)
missing_latlon['Prefix'] = prefixes
# Determine the centroids of the areas covered by the prefixes
grouped_UK_geo = UK_geo.groupby(by=lambda i: str(UK_geo['postcode'].loc[i]).split(' ')[0])
# Assign the centroid coordinates to the facilities with unknown coordinates
updated_latlon = pd.merge(missing_latlon,
grouped_UK_geo.mean(),
left_on="Prefix",
right_index=True,
how="left"
)
# Return the updated rows to the original frame
UK_re_df = pd.merge(UK_re_df,
updated_latlon[['uk_beis_id', 'lat', 'lon']],
on='uk_beis_id',
how='left'
)
# Keep the already known coordinates (columns: 'latitude' and 'longitude') if present,
# otherwise, use those obtained by approximation (columns: 'lat' and 'lon').
UK_re_df['longitude'] = UK_re_df.apply(lambda row: row['longitude'] if not pd.isnull(row['longitude'])
else row['lon'],
axis=1
)
UK_re_df['latitude'] = UK_re_df.apply(lambda row: row['latitude'] if not pd.isnull(row['latitude'])
else row['lat'],
axis=1
)
# Drop the UK_geo columns (lat/lon)
# as the information was moved to the 'latitude' and 'longitude' columns.
UK_re_df.drop(['lat', 'lon'], axis='columns', inplace=True)
Explanation: Cases for approximation
In the cases where the full post code was not present in geonames.org, use its prefix to find the latitude / longitude pairs of locations covered by that prefix. Then, approximate those facilities' locations by the centroids of their prefix areas.
End of explanation
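The approximation above amounts to grouping the geonames rows by postcode prefix and averaging their coordinates; a self-contained toy version of the idea, with made-up postcodes, looks like this:
import pandas as pd

geo_toy = pd.DataFrame({'postcode': ['AB1 1AA', 'AB1 2BB', 'CD2 3CC'],
                        'lat': [57.10, 57.20, 51.50],
                        'lon': [-2.10, -2.20, -0.10]})
prefix = geo_toy['postcode'].str.split(' ').str[0]
centroids = geo_toy.groupby(prefix)[['lat', 'lon']].mean()
print(centroids.loc['AB1'])  # lat 57.15, lon -2.15: the centroid used for any 'AB1 ...' postcode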
UK_postcode2nuts_filepath = filepaths['Eurostat']
UK_re_df = nuts_converter.add_nuts_information(UK_re_df, 'UK', UK_postcode2nuts_filepath,
latitude_column='latitude',
longitude_column='longitude', closest_approximation=True,
lau_name_type='NATIONAL', how=['latlon', 'municipality'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = UK_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', UK_re_df.shape[0], 'facilities in UK.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = UK_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', UK_re_df.shape[0], 'facilities in UK.')
Explanation: Add NUTS information
End of explanation
UK_re_df[UK_re_df['nuts_1_region'].isnull()]
Explanation: Let us see the facilities for which the NUTS codes could not be determined.
End of explanation
visualize_points(UK_re_df['latitude'],
UK_re_df['longitude'],
'United Kingdom',
categories=UK_re_df['energy_source_level_2']
)
Explanation: There are only two such rows. The latitude and longitude coordinates, as well as the municipality codes, are missing from the data set, so the NUTS codes could not be determined.
Visualize the data
End of explanation
max_X = UK_re_df['X-coordinate'].max()
min_X = UK_re_df['X-coordinate'].min()
max_Y = UK_re_df['Y-coordinate'].max()
min_Y = UK_re_df['Y-coordinate'].min()
figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
ax = plt.axes(projection=ccrs.OSGB())
ax.coastlines('10m')
ax.scatter(UK_re_df['X-coordinate'], UK_re_df['Y-coordinate'],s=0.5)
plt.show()
Explanation: We see that some facilities appear to be located in the sea. Let us plot the original OSGB coordinates to see if translation to the standard longitude and latitude coordinates failed for some locations.
End of explanation
# Rename 'longitude' and 'latitude' to 'lon' and 'lat' to conform to the naming convention
# used for other countries.
UK_re_df.rename(columns={'longitude': 'lon', 'latitude': 'lat'}, inplace=True)
# Define the columns to keep
columns_of_interest = ['commissioning_date', 'uk_beis_id', 'operator', 'site_name',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology',
'electrical_capacity', 'chp', 'support_robranding', 'support_fit', 'support_cfd',
'capacity_individual_turbine', 'number_of_turbines', 'solar_mounting_type',
'status', 'address', 'municipality', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'region', 'country', 'postcode', 'lon', 'lat', 'data_source'
]
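# Sanity check: print any expected column that is missing from the dataframe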
for col in columns_of_interest:
if col not in UK_re_df.columns:
print(col)
# Clean the dataframe from columns other than those specified above
UK_re_df = UK_re_df.loc[:, columns_of_interest]
UK_re_df.reset_index(drop=True, inplace=True)
UK_re_df.columns
Explanation: As we can see, the maps are basically the same, which confirms that translation to the longitude and latitude coordinates is done correctly and that they reflect the positions specified by the original X and Y OSGB coordinates.
Keep only the columns of interest
End of explanation
UK_re_df.to_pickle('intermediate/UK_renewables.pickle')
Explanation: Save
End of explanation
# Download the data and get the local paths to the corresponding files
filepaths = downloader.download_data_for_country('SE')
print(filepaths)
SE_re_filepath = filepaths['Vindbrukskollen']
SE_geo_filepath = filepaths['Geonames']
SE_postcode2nuts_filepath = filepaths['Eurostat']
Explanation: Sweden
The data for Sweden are provided by the following sources:
Vindbrukskollen - Wind farms in Sweden.
End of explanation
# Define the function for converting the column "Senast sparads" to date type
#def from_int_to_date(int_date):
# print(int_date)
# str_date =str(int_date)
# year = str_date[:4]
# month = str_date[4:6]
# day = str_date[6:8]
# str_date = '{}/{}/{}'.format(year, month, day)
# return pd.to_datetime(str_date, format='%Y/%m/%d')
# Read the data
SE_re_df = pd.read_excel(SE_re_filepath,
sheet_name='Vindkraftverk',
na_values='-',
parse_dates=['Uppfört', 'Senast sparad'],
infer_datetime_format=True,
#converters={'Senast sparad' : from_int_to_date}
)
# Show 5 rows from the beginning
SE_re_df.head(5)
Explanation: Load the data
End of explanation
# Drop empty rows and columns
SE_re_df.dropna(axis='index', how='all', inplace=True)
SE_re_df.dropna(axis='columns', how='all', inplace=True)
# Make sure that the column Uppfört is of the date type and correctly formatted
SE_re_df['Uppfört'] = pd.to_datetime(SE_re_df['Uppfört'], format='%Y-%m-%d')
# Keep only operational wind farms
subset_mask = SE_re_df['Status'].isin(['Beviljat', 'Uppfört'])
SE_re_df.drop(SE_re_df[~subset_mask].index, axis='index', inplace=True)
# Remove the farms whose capacity is not known.
subset_mask = SE_re_df['Maxeffekt (MW)'].isna()
SE_re_df.drop(SE_re_df[subset_mask].index, axis='index', inplace=True)
# Standardize string columns
string_columns = ['Modell', 'Fabrikat', 'Elområde', 'Kommun', 'Län', 'Handlingstyp', 'Placering']
for col in string_columns:
util.helper.standardize_column(SE_re_df, col, lower=False)
Explanation: Clean the data
Drop empty rows and columns.
Make sure that the column Uppfört is of the date type.
Keep only operational wind farms (Status is Beviljat (permission granted) or Uppfört (the farm exists)).
Remove the farms whose capacity is not known.
Standardize string columns.
End of explanation
# Choose the translation terms for Sweden (SE) and create the translation dictionary
idx_SE = columnnames[columnnames['country'] == 'SE'].index
column_dict_SE = columnnames.loc[idx_SE].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
display(column_dict_SE)
# Translate column names
SE_re_df.rename(columns=column_dict_SE, inplace=True)
Explanation: Translate column names
End of explanation
SE_re_df.loc[(SE_re_df['commissioning_date'].dt.year == 1900), 'commissioning_date'] = np.nan
Explanation: Correct the dates
Some wind farms are declared to be commissioned in the year 1900. We set those dates to np.nan.
End of explanation
SE_re_df['data_source'] = 'Vindbrukskollen'
Explanation: Add source
End of explanation
# Choose the translation terms for Sweden
idx_SE = valuenames[valuenames['country'] == 'SE'].index
value_dict_SE = valuenames.loc[idx_SE].set_index('original_name')['opsd_name'].to_dict()
value_dict_SE
# Replace all original value names by the OPSD value names
SE_re_df.replace(value_dict_SE, inplace=True)
# Set nans in the technology column to 'Unknown or unspecified technology'
SE_re_df['technology'].fillna('Unknown or unspecified technology', inplace=True)
# Add energy level 2
SE_re_df['energy_source_level_2'] = 'Wind'
# Add energy_source_level_1
SE_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
SE_re_df[['energy_source_level_1', 'energy_source_level_2', 'technology']].drop_duplicates()
Explanation: Translate values and harmonize energy source levels
End of explanation
# Get latitude and longitude columns
lat, lon = util.helper.sweref99tm_latlon_transform(SE_re_df['sweref99tm_north'], SE_re_df['sweref99tm_east'])
# Include them in the dataframe
SE_re_df['lat'] = lat
SE_re_df['lon'] = lon
Explanation: Georeferencing
The coordinates in the columns sweref99tm_north and sweref99tm_east are specified in the SWEREF 99 TM coordinate system, used in Sweden. To convert those coordinates to the usual WGS84 latitudes and longitudes, we use the function sweref99tm_latlon_transform from the module util.helper, provided by Jon Olauson.
End of explanation
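The conversion above relies on the helper shipped with the project; as a cross-check, and assuming the optional pyproj library is available, the same transformation can be expressed through the EPSG codes for SWEREF 99 TM (EPSG:3006) and WGS84 (EPSG:4326):
from pyproj import Transformer

# always_xy=True: input is (easting, northing), output is (longitude, latitude).
sweref_to_wgs84 = Transformer.from_crs('EPSG:3006', 'EPSG:4326', always_xy=True)
lon_check, lat_check = sweref_to_wgs84.transform(674000, 6580000)  # illustrative coordinates, roughly the Stockholm area
print(lat_check, lon_check)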
SE_postcode2nuts_filepath = filepaths['Eurostat']
SE_re_df = nuts_converter.add_nuts_information(SE_re_df, 'SE', SE_postcode2nuts_filepath,
lau_name_type='NATIONAL', how=['municipality', 'latlon'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = SE_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', SE_re_df.shape[0], 'facilities in SE.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = SE_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', SE_re_df.shape[0], 'facilities in SE.')
Explanation: Assigning NUTS codes
End of explanation
# Define which columns should be kept
columns_to_keep = ['municipality', 'county', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region', 'lat', 'lon',
'energy_source_level_1', 'energy_source_level_2', 'technology', 'se_vindbrukskollen_id',
'site_name', 'manufacturer',
'electrical_capacity', 'commissioning_date', 'data_source']
# Keep only the selected columns
SE_re_df = SE_re_df.loc[:, columns_to_keep]
Explanation: Select the columns to keep
End of explanation
visualize_points(SE_re_df['lat'],
SE_re_df['lon'],
'Sweden',
categories=SE_re_df['technology']
)
Explanation: Visualize
End of explanation
SE_re_df.reset_index(inplace=True, drop=True)
SE_re_df.to_pickle('intermediate/SE_renewables.pickle')
del SE_re_df
Explanation: Save
End of explanation
# Download the data and get the local paths to the corresponding files
print('Start:', datetime.datetime.now())
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
filepaths = downloader.download_data_for_country('CZ')
print('End:', datetime.datetime.now())
CZ_re_filepath = filepaths['ERU']
CZ_geo_filepath = filepaths['Geonames']
CZ_postcode2nuts_filepath = filepaths['Eurostat']
# Define a converter for CZ postcode strings
def to_cz_postcode_format(postcode_str):
return postcode_str[:3] + ' ' + postcode_str[3:]
# Read the data from the csv file
CZ_re_df = pd.read_csv(CZ_re_filepath,
escapechar='\\',
dtype = {
'number_of_sources' : int,
},
parse_dates=['licence_approval_date'],
infer_datetime_format=True,
converters = {
'site_postcode' : to_cz_postcode_format,
'holder_postcode' : to_cz_postcode_format
}
)
# Show a few rows
CZ_re_df.head(5)
Explanation: Czech Republic
The data for Czech Republic are provided by the following source:
- ERU (Energetický regulační úřad, Energy Regulatory Office) - Administrative authority responsible for regulation in the energy sector. Provides the data on renewable energy plants in Czech Republic.
Download and read the data
Downloading the data from the original source may take 1-2 hours because it's done by scraping the information from HTML pages.
If downloading fails because of the ERU's server refusing connections:
- pause and wait for some time;
- delete the file eru.csv in the CZ input directory;
- try downloading again.
Alternatively, you can download the data from the OPSD server.
End of explanation
CZ_re_df.dtypes
Explanation: Let's inspect the dataframe's columns:
End of explanation
mwe_columns = [col for col in CZ_re_df.columns if 'megawatts_electric' in col and col != 'megawatts_electric_total']
mwt_columns = [col for col in CZ_re_df.columns if 'megawatts_thermal' in col and col != 'megawatts_thermal_total']
def count_types(row):
global mwe_columns
different_types = sum([row[col] > 0 for col in mwe_columns])
return different_types
CZ_re_df.apply(count_types, axis=1).value_counts()
Explanation: It contains 30 columns:
- site_name, site_region, site_postcode, site_locality, site_district give us basic information on the site;
- megawatts_electric_total shows us the total electric capacity of the site;
- Since each site can use different types of energy, megawatts_electric_hydro, megawatts_electric_solar, megawatts_electric_biogas_and_biomass, megawatts_electric_wind and megawatts_electric_unspecified show us how the total capacity breaks down into those renewable types from the OPSD energy hierarchy;
- The columns beginning with megawatts_thermal_ represent the amount of input energy required (and will be equal to zero in most cases);
- watercourse and watercourse_length_km represent the name and length of the watercourse used by the site (if any);
- holder_name, holder_region, holder_address, holder_postcode, holder_locality, holder_district, holder_representative give us basic information on the site's owner;
- licence_number and licence_approval_date show us the licence number given to the holder and its approval date.
- link points to the ERU page with the site's data in HTML.
Since some sites use conventional types of energy, it is possible that megawatts_electric_total > megawatts_electric_hydro + megawatts_electric_solar + megawatts_electric_biogas_and_biomass + megawatts_electric_wind + megawatts_electric_unspecified. If the sum of renewable-energy capacities is equal to zero, that means that the corresponding row actually represents a conventional power plant, so it should be excluded.
Let us now check how many sites use how many types of renewable energy sources.
End of explanation
# Drop empty columns and rows
CZ_re_df.dropna(axis='index', how='all', inplace=True)
CZ_re_df.dropna(axis='columns', how='all', inplace=True)
# Drop rows with no data on electrical capacity and the rows where total electrical capacity is 0
empty_mask = (CZ_re_df['megawatts_electric_total'] == 0) | (CZ_re_df['megawatts_electric_total'].isnull())
CZ_re_df = CZ_re_df.loc[~empty_mask]
CZ_re_df.reset_index(inplace=True, drop=True)
# Replace NANs with zeroes in mwe and mwt columns
replacement_dict = {col : 0 for col in mwe_columns + mwt_columns}
CZ_re_df.fillna(replacement_dict, inplace=True)
# Drop the rows where renewable-energy share of the total capacity is equal to zero
conventional_mask = (CZ_re_df['megawatts_electric_hydro'] +
CZ_re_df['megawatts_electric_solar'] +
CZ_re_df['megawatts_electric_biogas_and_biomass'] +
CZ_re_df['megawatts_electric_wind'] +
CZ_re_df['megawatts_electric_unspecified']) == 0
CZ_re_df = CZ_re_df.loc[~conventional_mask]
CZ_re_df.reset_index(inplace=True, drop=True)
Explanation: As of April 2020, as we can see in the output above, there are only 4 sites which use more than one type of renewable energy, and there are 193 sites which do not use renewable energy at all.
Clean the data
End of explanation
# Define the function which will extract the data about the type of energy specified by the given column
# and return it as a dataframe in the "long format"
def select_and_reformat(df, column):
# Use the mwe and mwt columns defined above
global mwe_columns
global mwt_columns
# Declare the given column and its mwt counterpart as exceptions
mwt_exception = column.replace('electric', 'thermal')
exceptions = [column, mwt_exception]
# Exclude all the mwe and mwt columns which do not correspond to the given energy type
columns_to_skip = [col for col in mwe_columns + mwt_columns if col not in exceptions]
# Keep all the other columns
columns_to_keep = [col for col in df.columns if col not in columns_to_skip]
# Find the stations which use the given type of energy
selection_mask = (df[column] > 0)
# Keep them and select the columns we decided to keep
selection_df = df[selection_mask][columns_to_keep]
# Create a new column which will indicate the energy type
selection_df['energy_type'] = " ".join(column.split('_')[2:])
# Remove the energy type name from the columns representing electrical capacity
# and megawatts thermal
selection_df.rename(columns = {column : 'electrical_capacity',
mwt_exception : 'megawatts_thermal'},
inplace=True)
selection_df.drop(columns=['megawatts_electric_total', 'megawatts_thermal_total'],
inplace=True)
# Ensure the rows are properly indexed as 0,1,2,...
selection_df.reset_index(inplace=True, drop=True)
return selection_df
# Create a dataframe for each energy type
dataframes = []
for column in mwe_columns:
selection = select_and_reformat(CZ_re_df, column)
energy_type = selection['energy_type'].unique()[0]
dataframes.append(selection)
# Concatenate the dataframes
CZ_re_df = pd.concat(dataframes, ignore_index=False)
CZ_re_df.reset_index(inplace=True, drop=True)
Explanation: Reformat the data
There are sites which use more than one type of renewable source to produce electric energy. Those are the sites where at least two of the following columns are not equal to zero: megawatts_electric_hydro, megawatts_electric_solar, megawatts_electric_biogas_and_biomass, megawatts_electric_wind, megawatts_electric_unspecified. Data in this shape are said to be in the so-called wide format. For the purpose of our later processing, it is more convenient to have data where each row is associated with one and only one type of energy (the so-called long format). Therefore, we must first restructure our data from the wide format to the long format.
End of explanation
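The reshaping is written by hand above so that each megawatts_electric_* column can carry its megawatts_thermal_* counterpart along; for the basic one-measure case, the same wide-to-long step is what pandas.melt provides, as this toy example with made-up values shows:
import pandas as pd

wide = pd.DataFrame({'site_name': ['A', 'B'],
                     'megawatts_electric_solar': [1.0, 0.0],
                     'megawatts_electric_wind': [0.0, 2.5]})
long_format = wide.melt(id_vars='site_name', var_name='energy_type', value_name='electrical_capacity')
long_format = long_format[long_format['electrical_capacity'] > 0]  # keep only the energy types each site actually uses
print(long_format)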
CZ_re_df
Explanation: Let us see what this restructured dataframe looks like.
End of explanation
# Choose the translation terms for CZ and create the translation dictionary
idx_CZ = columnnames[columnnames['country'] == 'CZ'].index
column_dict_CZ = columnnames.loc[idx_CZ].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
column_dict_CZ
# Translate column names
CZ_re_df.rename(columns=column_dict_CZ, inplace=True)
Explanation: The number of columns has been reduced as we have transformed the data to the long format. The rows representning conventional power plants have been excluded. Since only few sites use multiple types of energy, the total number of rows has not increased.
Translate column names
End of explanation
# Choose the translation terms for Czech Republic
idx_CZ = valuenames[valuenames['country'] == 'CZ'].index
# Choose the translation terms for energy source level 3
energy3_dict_CZ = valuenames.loc[idx_CZ].set_index('original_name')['opsd_name'].to_dict()
energy3_dict_CZ
# Add energy source level 3
CZ_re_df['energy_source_level_3'] = CZ_re_df['technology'].replace(energy3_dict_CZ)
# Choose the terms for energy source level 2
energy2_dict_CZ = valuenames.loc[idx_CZ].set_index('original_name')['energy_source_level_2'].to_dict()
CZ_re_df['energy_source_level_2'] = CZ_re_df['technology'].replace(energy2_dict_CZ)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_dict = {
'biogas and biomass' : np.nan,
'wind' : 'Onshore',
'solar' : 'Other or unspecified technology',
'hydro' : 'Run-of-river',
'unspecified' : np.nan
}
CZ_re_df['technology'] = CZ_re_df['technology'].replace(technology_dict)
# Add energy_source_level_1
CZ_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
CZ_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
Explanation: Translate values and harmonize energy levels
End of explanation
CZ_re_df['data_source'] = 'ERU'
Explanation: Add data source
End of explanation
# Get geo-information
zip_CZ_geo = zipfile.ZipFile(CZ_geo_filepath)
# Read generated postcode/location file
CZ_geo = pd.read_csv(zip_CZ_geo.open('CZ.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
CZ_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
CZ_geo.drop_duplicates('postcode', keep='last', inplace=True)
# Add longitude/latitude information assigned by postcode
CZ_re_df = pd.merge(CZ_re_df,
CZ_geo[['lat', 'lon', 'postcode']],
left_on='postcode',
right_on='postcode',
how='left'
)
Explanation: Georeferencing
End of explanation
CZ_postcode2nuts_filepath = filepaths['Eurostat']
CZ_re_df = nuts_converter.add_nuts_information(CZ_re_df, 'CZ', CZ_postcode2nuts_filepath, how=['postcode'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = CZ_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', CZ_re_df.shape[0], 'facilities in CZ.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = CZ_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', CZ_re_df.shape[0], 'facilities in CZ.')
Explanation: Assign NUTS codes
End of explanation
# Define which columns should be kept
columns_to_keep = ['site_name', 'region', 'municipality', 'locality', 'postcode',
'nuts_1_region', 'nuts_2_region', 'nuts_3_region', 'lat', 'lon',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology',
'owner', 'electrical_capacity', 'data_source']
# Keep only the selected columns
CZ_re_df = CZ_re_df.loc[:, columns_to_keep]
Explanation: Select the columns to keep
End of explanation
CZ_re_df.drop_duplicates(inplace=True)
CZ_re_df.reset_index(drop=True, inplace=True)
Explanation: Drop duplicates
End of explanation
visualize_points(CZ_re_df['lat'],
CZ_re_df['lon'],
'Czechia',
categories=CZ_re_df['energy_source_level_2']
)
Explanation: Visualize
End of explanation
CZ_re_df.reset_index(inplace=True, drop=True)
CZ_re_df.to_pickle('intermediate/CZ_renewables.pickle')
del CZ_re_df
Explanation: Save
End of explanation
zip_archive = zipfile.ZipFile(input_directory_path + '.zip', 'w', zipfile.ZIP_DEFLATED)
print("Zipping the raw files...")
for filename in os.listdir(input_directory_path):
print("Adding", filename, "to the zip.")
filepath = os.path.join(input_directory_path, filename)
zip_archive.write(filepath)
zip_archive.close()
print("Done!")
#shutil.rmtree(input_directory_path)
Explanation: Zip the raw data
End of explanation |
13,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PoS tagging in Spanish
In this first exercise we are going to play with one of the Spanish corpora available from NLTK
Step1: Note that the tags used in the Spanish treebank are different from the tags we saw for English. To begin with, Spanish is a language with a richer morphology
Step2: The morphological tags we have just seen are fairly complex, since they encode the inflectional features of Spanish. Fortunately, NLTK can load tagged corpora with a simplified, universal tagset (all the details are in the paper) by passing the option tagset='universal'. To do so, make sure you have stored, inside your nltk resources directory, the mapping from the corpus's original tags to their simplified version. The file is called universal_tagset-ES.map and you can find it in the data folder of the repository. It is advisable to rename it, for example
Step3: Then run the next cell and note how we have loaded a list of sentences tagged with this new version of the tags.
Step5: These tags are simpler, right? Basically we have DET for determiner, NOUN for noun, VERB for verb, ADJ for adjective, ADP for preposition, etc.
We are going to use this corpus to train several n-gram-based taggers, just as we did in class and as explained in the nltk-pos slides.
Build four taggers incrementally:
a default tagger that assumes every unknown word is a masculine singular common noun and assigns the corresponding tag to all words.
a unigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff.
a bigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff.
a trigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff. | Python Code:
import nltk
from nltk.corpus import cess_esp
cess_esp = cess_esp.tagged_sents()
print(cess_esp[0])
Explanation: PoS tagging in Spanish
In this first exercise we are going to play with one of the Spanish corpora available from NLTK: CESS_ESP, a treebank annotated from a collection of news articles in Spanish.
This corpus is currently included in a larger resource, the AnCora corpus developed at the Universitat de Barcelona. For more information, you can read the paper by M. Taulé, M. A. Martí and M. Recasens "AnCora: Multilevel Annotated Corpora for Catalan and Spanish". Proceedings of 6th International Conference on Language Resources and Evaluation (LREC 2008). 2008. Marrakesh (Morocco).
Before anything else, run the following cell to access the corpus and the other tools we will use in this exercise.
End of explanation
# write your code here
Explanation: Note that the tags used in the Spanish treebank are different from the tags we saw for English. To begin with, Spanish is a language with a richer morphology: if we want to capture the gender and number of adjectives, for example, a plain JJ tag is not enough.
Take a look at the morphological tags and try to work out their meaning. Among these first 50 words we find:
da0ms0: article determiner, masculine singular
ncms000: common noun, masculine singular
aq0cs0: qualifying adjective, common gender, singular
np00000: proper noun
sps00: preposition
vmis3s0: main verb, indicative, simple past, 3rd person singular
Here you have the explanation of the tags and the complete catalogue of features used for tagging Spanish in this corpus. Based on what you learn from that link:
Print on screen only the words tagged as 3rd person plural verb forms of the simple past indicative.
Compute what percentage of the whole CESS_ESP corpus is made up of words tagged as 3rd person plural verb forms of the simple past indicative.
End of explanation
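One possible sketch for this exercise (assuming, from the tag catalogue, that the 3rd person plural simple past indicative tag is vmis3p0):
# Flatten the tagged sentences into (word, tag) pairs
tagged_words = [(word, tag) for sent in cess_esp for (word, tag) in sent]
target = [word for (word, tag) in tagged_words if tag == 'vmis3p0']
print(target[:20])
print(100.0 * len(target) / len(tagged_words), '% of the corpus')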
!cp ../data/universal_tagset-ES.map ~/nltk_data/taggers/universal_tagset/es-ancora.map
Explanation: The morphological tags we have just seen are fairly complex, since they encode the inflectional features of Spanish. Fortunately, NLTK can load tagged corpora with a simplified, universal tagset (all the details are in the paper) by passing the option tagset='universal'. To do so, make sure you have stored, inside your nltk resources directory, the mapping from the corpus's original tags to their simplified version. The file is called universal_tagset-ES.map and you can find it in the data folder of the repository. It is advisable to rename it, for example:
End of explanation
from nltk.corpus import cess_esp
cess_esp._tagset = 'es-ancora'
oraciones = cess_esp.tagged_sents(tagset='universal')
print(oraciones[0])
Explanation: Then run the next cell and note how we have loaded a list of sentences tagged with this new version of the tags.
End of explanation
# write your code here
# try your trigram-based tagger on the following sentences, which
# almost certainly do not appear in the corpus
print(trigramTagger.tag("Este banco está ocupado por un padre y por un hijo. El padre se llama Juan y el hijo ya te lo he dicho".split()))
print(trigramTagger.tag("El presidente del gobierno por fin ha dado la cara para anunciar aumentos de presupuesto en Educación y Sanidad a costa de dejar de subvencionar las empresas de los amigotes.".split()))
print(trigramTagger.tag("El cacique corrupto y la tonadillera se comerán el turrón en prisión.".split()))
Explanation: These tags are simpler, right? Basically we have DET for determiner, NOUN for noun, VERB for verb, ADJ for adjective, ADP for preposition, etc.
We are going to use this corpus to train several n-gram-based taggers, just as we did in class and as explained in the nltk-pos slides.
Build four taggers incrementally:
a default tagger that assumes every unknown word is a masculine singular common noun and assigns the corresponding tag to all words.
a unigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff.
a bigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff.
a trigram-based tagger that learns from the list oraciones and uses the previous tagger as backoff.
End of explanation |
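A possible sketch of the incremental tagger chain (with the universal tagset the default tag would be 'NOUN'; with the original tagset it would be 'ncms000' for a masculine singular common noun):
from nltk.tag import DefaultTagger, UnigramTagger, BigramTagger, TrigramTagger
defaultTagger = DefaultTagger('NOUN')
unigramTagger = UnigramTagger(oraciones, backoff=defaultTagger)
bigramTagger = BigramTagger(oraciones, backoff=unigramTagger)
trigramTagger = TrigramTagger(oraciones, backoff=bigramTagger)
print(trigramTagger.tag('El gobierno anunció nuevas medidas'.split()))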
13,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy
The best part about Numpy is that we not only get massive speedups, because NumPy performs many of its operations at the C level, but the vectorized API also makes the code simpler and (to some extent) more Pythonic. The only "downside" is that we have to learn to write our code using NumPy idioms rather than plain Python idioms.
Step1: We load in the position and box information created in the intro notebook. If you haven't run that notebook, this line will not work! (You don't have to read the wall of text, just run the cells...)
Step2: Round 1
Step3: We can plot the potential energy again just to make sure this function behaves as expected.
Step4: Runtime profiling!
Step5: Memory profiling!
Step6: Round 2
Step7: Memory profiling! | Python Code:
%load_ext memory_profiler
%load_ext snakeviz
%load_ext cython
import holoviews as hv
hv.extension('bokeh','matplotlib')
from IPython.core import debugger
ist = debugger.set_trace
Explanation: Numpy
The best part about Numpy is that we not only get massive speedups, because NumPy performs many of its operations at the C level, but the vectorized API also makes the code simpler and (to some extent) more Pythonic. The only "downside" is that we have to learn to write our code using NumPy idioms rather than plain Python idioms.
End of explanation
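As a tiny illustration of that idiom shift (not part of the original notebook): the same reduction written as a Python loop and as a single vectorized NumPy expression.
import numpy as np
values = np.random.rand(1000)
total_loop = 0.0
for v in values:                   # Python idiom: explicit loop over elements
    total_loop += v * v
total_vec = np.sum(values ** 2)    # NumPy idiom: one vectorized expression evaluated at C speed
assert np.isclose(total_loop, total_vec)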
import numpy as np
pos = np.loadtxt('data/positions.dat')
box = np.loadtxt('data/box.dat')
print('Read {:d} positions.'.format(pos.shape[0]))
print('x min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[0],pos.max(0)[0]))
print('y min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[1],pos.max(0)[1]))
print('z min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[2],pos.max(0)[2]))
Explanation: We load in the position and box information created in the intro notebook. If you haven't run that notebook, this line will not work! (You don't have to read the wall of text, just run the cells...)
End of explanation
import numpy as np
def potentialEnergyFunk(r,width=1.0,height=10.0):
'''
Calculates the (soft) potential energy between two atoms
Parameters
----------
r: ndarray (float)
separation distances between two atoms
height: float
strength/height of the potential
width: float
breadth of the potential i.e. where the potential goes to zero
'''
U = np.zeros_like(r)
mask = (r<width) #only do calculation below the cutoff width
U[mask] = 0.5 * height * (1 + np.cos(np.pi*r[mask]/width))
return U
Explanation: Round 1: Vectorized Operations
We need to re-implement the potential energy function in numpy.
End of explanation
%%opts Curve [width=600,show_grid=True,height=350]
dr = 0.05 # spacing of r points
rmax = 10.0 # maximum r value
pts = int(rmax/dr) # number of r points
r = np.arange(dr,rmax,dr)
def plotFunk(width,height,label='dynamic'):
U = potentialEnergyFunk(r,width,height)
return hv.Curve((r,U),kdims=['Separation Distance'],vdims=['Potential Energy'],label=label)
dmap = hv.DynamicMap(plotFunk,kdims=['width','height'])
dmap = dmap.redim.range(width=((1.0,10.0)),height=((1.0,5.0)))
dmap*plotFunk(10.0,5.0,label='width: 10., height: 5.')*plotFunk(1.0,1.0,label='width: 1., height: 1.')
from math import sqrt
def calcTotalEnergy1(pos,box):
'''
Parameters
----------
pos: ndarray, size (N,3), (float)
array of cartesian coordinate positions
box: ndarray, size (3), (float)
simulation box dimensions
'''
#sanity check
assert box.shape[0] == 3
# This next line is rather unpythonic but essentially it convinces
# numpy to perform a subtraction between the full Cartesian Product
# of the positions array
dr = np.abs(pos - pos[:,np.newaxis,:])
#still need to apply periodic boundary conditions
dr = np.where(dr>box/2.0,dr-box,dr)
dist = np.sqrt(np.sum(np.square(dr),axis=-1))
# calculate the full N x N distance matrix
U = potentialEnergyFunk(dist)
# extract the upper triangle from U
U = np.triu(U,k=1)
return U.sum()
Explanation: We can plot the potential energy again just to make sure this function behaves as expected.
End of explanation
%%prun -D prof/numpy1.prof
energy = calcTotalEnergy1(pos,box)
with open('energy/numpy1.dat','w') as f:
f.write('{}\n'.format(energy))
Explanation: Runtime profiling!
End of explanation
memprof = %memit -o calcTotalEnergy1(pos,box)
usage = memprof.mem_usage[0]
incr = memprof.mem_usage[0] - memprof.baseline
with open('prof/numpy1.memprof','w') as f:
f.write('{}\n{}\n'.format(usage,incr))
Explanation: Memory profiling!
End of explanation
from math import sqrt
def calcTotalEnergy2(pos,box):
'''
Parameters
----------
pos: ndarray, size (N,3), (float)
array of cartesian coordinate positions
box: ndarray, size (3), (float)
simulation box dimensions
'''
#sanity check
assert box.shape[0] == 3
# This next line is rather unpythonic but essentially it convinces
# numpy to perform a subtraction between the full Cartesian Product
# of the positions array
dr = np.abs(pos - pos[:,np.newaxis,:])
#extract out upper triangle
dr = dr[np.triu_indices(dr.shape[0],k=1)] #<<<<<<<
#still need to apply periodic boundary conditions
dr = np.where(dr>box/2.0,dr-box,dr)
dist = np.sqrt(np.sum(np.square(dr),axis=-1))
# evaluate the pair potential for each unique pair distance (now a 1D array, not an N x N matrix)
U = potentialEnergyFunk(dist)
return U.sum()
%%prun -D prof/numpy2.prof
energy = calcTotalEnergy2(pos,box)
with open('energy/numpy2.dat','w') as f:
f.write('{}\n'.format(energy))
Explanation: Round 2: Less is More
This is good, but can we do better? With this implementation, we are actually calculating twice as many potential energies as we need to! Let's reimplement the above to see if we can speed up this function (and possibly reduce the memory usage).
End of explanation
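To see exactly which pairs the np.triu_indices trick selects, here is a small standalone demo (illustrative only):
import numpy as np
n = 4
i, j = np.triu_indices(n, k=1)
print(list(zip(i, j)))        # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)] -- each pair once, i < j
dr = np.zeros((n, n, 3))      # pretend displacement vectors
print(dr[i, j].shape)         # (6, 3): n*(n-1)/2 rows instead of the full n x n matrix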
memprof = %memit -o calcTotalEnergy2(pos,box)
usage = memprof.mem_usage[0]
incr = memprof.mem_usage[0] - memprof.baseline
with open('prof/numpy2.memprof','w') as f:
f.write('{}\n{}\n'.format(usage,incr))
Explanation: Memory profiling!
End of explanation |
13,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
token_dictionary = {
'.' : "||Period||",
',' : "||Comma||",
'"' : "||Quotation_Mark||",
';' : "||Semicolon||",
'!' : "||Exclamation_Mark||",
'?' : "||Question_Mark||",
'(' : "||Left_Parentheses||",
')' : "||Right_Parentheses||",
'--' : "||Dash||",
'\n' : "||Return||",
}
return token_dictionary
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
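To make the "add the delimiter (space) around it" step concrete, here is a small sketch of how the lookup could be applied by hand (the actual preprocessing is done inside helper.preprocess_and_save_data):
sample = "Moe_Szyslak: Hey, what can I get you?"
tokenized = sample
for key, token in token_lookup().items():
    tokenized = tokenized.replace(key, ' {} '.format(token))
print(tokenized.lower().split())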
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None,None], "input")
targets = tf.placeholder(tf.int32, [None,None], "targets")
learning_rate = tf.placeholder(tf.float32, None, "learning_rate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), "initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(state, "final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, 200)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None, weights_initializer=tf.random_normal_initializer(mean=0.0, stddev=0.05, seed=None, dtype=tf.float32))
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
int_text_array = np.array(int_text)
slice_size = batch_size * seq_length
n_batches = int(len(int_text_array)/slice_size)
# Drop the last few characters to make only full batches
x_data = int_text_array[: n_batches*slice_size]
y_data = int_text_array[1: n_batches*slice_size + 1]
x = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return (np.asarray(list(zip(x, y))))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# Number of Epochs
num_epochs = 150
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor = loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return int_to_vocab[np.random.choice(np.arange(len(int_to_vocab)),p=probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Read Data
Preprocessed and tokenized questions.
Step4: Pretrained word vector database.
Step5: Build Features
Step6: Save features | Python Code:
from pygoose import *
from gensim.models.wrappers.fasttext import FastText
from scipy.spatial.distance import cosine, euclidean, cityblock
Explanation: Feature: Phrase Embedding Distances
Based on the pre-trained word embeddings, we'll calculate the mean embedding vector of each question (as well as the unit-length normalized sum of word embeddings), and compute vector distances between these aggregate vectors.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
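As a tiny illustration of the two aggregate vectors this feature set is built on (the mean vector and the unit-length normalized sum of word embeddings), for a toy pair of word vectors:
import numpy as np
vectors = np.array([[1.0, 0.0], [0.0, 1.0]])       # two toy word embeddings
mean_vec = vectors.mean(axis=0)                    # [0.5, 0.5]
summed = vectors.sum(axis=0)
norm_vec = summed / np.sqrt((summed ** 2).sum())   # unit length: [0.707..., 0.707...]
print(mean_vec, norm_vec)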
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'phrase_embedding'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')
tokens = tokens_train + tokens_test
Explanation: Read Data
Preprocessed and tokenized questions.
End of explanation
embedding_model = FastText.load_word2vec_format(project.aux_dir + 'fasttext_vocab.vec')
Explanation: Pretrained word vector database.
End of explanation
def get_phrase_embedding_distances(pair):
q1_vectors = [embedding_model[token] for token in pair[0] if token in embedding_model]
q2_vectors = [embedding_model[token] for token in pair[1] if token in embedding_model]
if len(q1_vectors) == 0:
q1_vectors.append(np.zeros(word_vector_dim))
if len(q2_vectors) == 0:
q2_vectors.append(np.zeros(word_vector_dim))
q1_mean = np.mean(q1_vectors, axis=0)
q2_mean = np.mean(q2_vectors, axis=0)
q1_sum = np.sum(q1_vectors, axis=0)
q2_sum = np.sum(q2_vectors, axis=0)
q1_norm = q1_sum / np.sqrt((q1_sum ** 2).sum())
q2_norm = q2_sum / np.sqrt((q2_sum ** 2).sum())
return [
cosine(q1_mean, q2_mean),
np.log(cityblock(q1_mean, q2_mean) + 1),
euclidean(q1_mean, q2_mean),
cosine(q1_norm, q2_norm),
np.log(cityblock(q1_norm, q2_norm) + 1),
euclidean(q1_norm, q2_norm),
]
distances = kg.jobs.map_batch_parallel(
tokens,
item_mapper=get_phrase_embedding_distances,
batch_size=1000,
)
distances = np.array(distances)
X_train = distances[:len(tokens_train)]
X_test = distances[len(tokens_train):]
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
Explanation: Build Features
End of explanation
feature_names = [
'phrase_emb_mean_cosine',
'phrase_emb_mean_cityblock_log',
'phrase_emb_mean_euclidean',
'phrase_emb_normsum_cosine',
'phrase_emb_normsum_cityblock_log',
'phrase_emb_normsum_euclidean',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
13,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Software Developer career satisfaction detection - DEMO
Created by Judit Acs
Step1: Load data
We use pandas for data loading and preprocessing.
Data source (kaggle.com)
Step2: Most answers are categorical
Step3: The table has 154 columns but many values are empty. These columns have the most non-empty values
Step4: Feature extraction
We will use a few columns as features and CareerSatisfaction as the target variable
Step5: We filter all rows that do not define every feature column.
Step6: CareerSatisfaction values are distributed unevenly, so we may want to include fewer samples from very large classes. Uncomment the second to last line to filter these these samples
Step7: Convert categorical features to one-hot vectors
Categorical features need to be encoded as one-hot vectors instead of a single integer value. LabelEncoder does this automatically
Step8: Scale labels to [0, 1]
Step9: Shuffle data
Step10: Define the model
Step11: Train the model
Step12: Predict labels
Step13: Compute loss
We compute it manually.
Step14: What would be the loss if we guessed 0.5 every time?
It is always a good idea to perform sanity checks. Did our model learn anything more useful than a trivial solution or a random generator?
Step15: Histogram of predictions vs. gold labels
Step16: Plot labels for 20 random samples
Step17: How similar are the answers for JobSafisfaction and CareerSatisfaction?
Step18: They are very similar
Try training the model with and without this feature. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.callbacks import EarlyStopping
Explanation: Software Developer career satisfaction detection - DEMO
Created by Judit Acs
End of explanation
df = pd.read_csv("data/stackoverflow/survey_results_public.csv")
df.head()
Explanation: Load data
We use pandas for data loading and preprocessing.
Data source (kaggle.com)
End of explanation
df.groupby("ProgramHobby").size()
Explanation: Most answers are categorical:
End of explanation
df.count().sort_values(ascending=False)[:20]
Explanation: The table has 154 columns but many values are empty. These columns have the most non-empty values:
End of explanation
feature_cols = ["Professional", "EmploymentStatus", "FormalEducation", "ProgramHobby", "HomeRemote",
"IDE", "MajorUndergrad"]
# I do not include JobSatisfaction because it's too similar to the target variable
# Uncomment this line to include it in the features
# feature_cols.append("JobSatisfaction")
target_col = "CareerSatisfaction"
Explanation: Feature extraction
We will use a few columns as features and CareerSatisfaction as the target variable:
End of explanation
condition = (df[target_col].notnull())
for c in feature_cols:
condition &= (df[c].notnull())
df = df[condition]
len(df)
Explanation: We filter all rows that do not define every feature column.
End of explanation
minval = df.groupby(target_col).size().min() * 2
filt = None
for grouper, group in df.groupby(target_col):
size = min(minval, len(group))
if filt is None:
filt = group.sample(size)
else:
filt = pd.concat((filt, group.sample(size)), axis=0)
#df = filt
len(df)
Explanation: CareerSatisfaction values are distributed unevenly, so we may want to include fewer samples from very large classes. Uncomment the second to last line to filter these these samples:
End of explanation
X = None
for col in feature_cols:
mtx = LabelEncoder().fit_transform(df[col])
maxval = np.max(mtx)
feat_mtx = np.zeros((mtx.shape[0], maxval+1))
feat_mtx[np.arange(feat_mtx.shape[0]), mtx] = 1
if X is None:
X = feat_mtx
else:
X = np.concatenate((X, feat_mtx), axis=1)
Explanation: Convert categorical features to one-hot vectors
Categorical features need to be encoded as one-hot vectors rather than as single integer values. LabelEncoder maps each category to an integer, and the loop below then expands those integers into one-hot columns:
End of explanation
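For reference, pandas can build the same kind of one-hot matrix in a single call (a sketch only; the rest of the notebook keeps the manual loop):
X_alt = pd.get_dummies(df[feature_cols]).values   # one-hot encode every feature column at once
print(X_alt.shape)                                # column order may differ from the loop above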
y = df[target_col].as_matrix() / 10
Explanation: Scale labels to [0, 1]
End of explanation
rand_mtx = np.random.permutation(X.shape[0])
train_split = int(X.shape[0] * 0.9)
train_indices = rand_mtx[:train_split]
test_indices = rand_mtx[train_split:]
X_train = X[train_indices]
X_test = X[test_indices]
y_train = y[train_indices]
y_test = y[test_indices]
Explanation: Shuffle data
End of explanation
input_layer = Input(batch_shape=(None, X.shape[1]))
layer = Dense(100, activation="sigmoid")(input_layer)
layer = Dropout(.2)(layer)
layer = Dense(100, activation="sigmoid")(input_layer)
layer = Dropout(.2)(layer)
layer = Dense(100, activation="sigmoid")(input_layer)
layer = Dropout(.2)(layer)
layer = Dense(1, activation="sigmoid")(layer)
model = Model(inputs=input_layer, outputs=layer)
model.compile("rmsprop", loss="mse")
Explanation: Define the model
End of explanation
ea = EarlyStopping(patience=2)
model.fit(X_train, y_train, epochs=100, batch_size=512,
validation_split=.2, callbacks=[ea])
Explanation: Train the model
End of explanation
pred = model.predict(X_test)
Explanation: Predict labels
End of explanation
np.sqrt(np.mean((pred[:, 0] - y_test) ** 2))  # RMSE: mean of the squared errors, then the square root
Explanation: Compute loss
We compute it manually.
End of explanation
np.sqrt(np.mean((0.5 * np.ones(y_test.shape[0]) - y_test) ** 2))
Explanation: What would be the loss if we guessed 0.5 every time?
It is always a good idea to perform sanity checks. Did our model learn anything more useful than a trivial solution or a random generator?
End of explanation
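Another cheap baseline (not in the original notebook) is to always predict the mean of the training labels:
mean_pred = np.full(y_test.shape[0], y_train.mean())
print(np.sqrt(np.mean((mean_pred - y_test) ** 2)))   # RMSE of the constant-mean guess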
prediction = pd.DataFrame({'gold': y_test, 'prediction': pred[:, 0]})
prediction['diff'] = prediction.gold - prediction.prediction
prediction.hist(['gold', 'prediction'], bins=11)
Explanation: Histogram of predictions vs. gold labels
End of explanation
prediction.sample(20).plot(y=['gold', 'prediction'], kind='bar')
Explanation: Plot labels for 20 random samples
End of explanation
df[['JobSatisfaction', 'CareerSatisfaction']].sample(20).plot(kind='bar')
Explanation: How similar are the answers for JobSatisfaction and CareerSatisfaction?
End of explanation
(df['JobSatisfaction'] - df['CareerSatisfaction']).hist(bins=20)
Explanation: They are very similar
Try training the model with and without this feature.
End of explanation |
13,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
File system operations
pathlib
The pathlib module was introduced in Python 3.4
It provides more flexible functionality than plain string paths
cheat sheet
Step1: useful functions
.read_text()
Step2: .name
Step3: Find the Last Modified File
Step4: Create a Unique File Name
Step5: dir exist and then glob with multiple extensions
Step6: shutil
Step7: collections Counter | Python Code:
from pathlib import Path
import pathlib
save_dir = "./test_dir"
Path(save_dir).mkdir(parents=True, exist_ok=True)
### get current directory
print(Path.cwd())
print(Path.home())
print(pathlib.Path.home().joinpath('python', 'scripts', 'test.py'))
Explanation: File system operations
pathlib
The pathlib module was introduced in Python 3.4
It provides more flexible functionality than plain string paths
cheat sheet: https://github.com/chris1610/pbpython/blob/master/extras/Pathlib-Cheatsheet.pdf
shutil
Directory operations
Create a directory with pathlib if it does not exist
End of explanation
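A small illustration of why Path objects are handier than plain strings (hypothetical file names):
import os.path
from pathlib import Path
str_path = os.path.join('test_dir', 'logs', 'run1.txt')   # string-based joining
p = Path('test_dir') / 'logs' / 'run1.txt'                # the / operator composes paths
print(str_path, p.suffix, p.parent)                       # and the result keeps useful methods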
# Reading and Writing Files
path = pathlib.Path.cwd() / 'test.txt'
with open(path, mode='r') as fid:
headers = [line.strip() for line in fid if line.startswith('#')]
print('\n'.join(headers))
print('full text', path.read_text())
print(path.resolve().parent == pathlib.Path.cwd())
Explanation: useful functions
.read_text(): open the path in text mode and return the contents as a string.
.read_bytes(): open the path in binary/bytes mode and return the contents as a bytestring.
.write_text(): open the path and write string data to it.
.write_bytes(): open the path in binary/bytes mode and write data to it.
.resolve() method will find the full path.
End of explanation
print('path', path)
print('stem', path.stem)
print('suffix', path.suffix)
print('parent', path.parent)
print('parent of parent', path.parent.parent)
print('anchor', path.anchor)
# move or replace file
path.with_suffix('.py')
path.replace(path.with_suffix('.md')) # change the file suffix
path.with_suffix('.md').replace(path.with_suffix('.txt'))
# Display a Directory Tree
def tree(directory):
print(f'+ {directory}')
for path in sorted(directory.rglob('*')):
depth = len(path.relative_to(directory).parts)
spacer = ' ' * depth
print(f'{spacer}+ {path.name}')
tree(pathlib.Path.cwd())
Explanation: .name: the file name without any directory
.parent: the directory containing the file, or the parent directory if path is a directory
.stem: the file name without the suffix
.suffix: the file extension
.anchor: the part of the path before the directories
End of explanation
from datetime import datetime
directory = pathlib.Path.cwd()
time, file_path = max((f.stat().st_mtime, f) for f in directory.iterdir())
print(datetime.fromtimestamp(time), file_path)
directory = pathlib.Path.home()
file_list = list(directory.glob('*.*'))
print(file_list)
Explanation: Find the Last Modified File
End of explanation
def unique_path(directory, name_pattern):
counter = 0
while True:
counter += 1
path = directory / name_pattern.format(counter)
if not path.exists():
return path
path = unique_path(pathlib.Path.cwd(), 'test{:03d}.txt')
print(path)
Explanation: Create a Unique File Name
End of explanation
input_path = Path("/mnt/d/code/image/hedian-demo/data/test/220425")
file_list = []
if input_path.exists():
if input_path.is_dir():
# for a in input_path.glob("*"):
# print(a)
file_list = [p.resolve() for p in input_path.glob("*") if
p.suffix in {".png", ".jpg", ".JPG", ".PNG"}]
print(len(file_list), file_list)
else:
print(p)
# PosixPath as str: str(p.resolve())
Explanation: Check that the directory exists, then glob with multiple extensions
End of explanation
# move all .txt files to the archive folder
import glob
import os
import shutil
for file_name in glob.glob('*.txt'): # return a list of
new_path = os.path.join('archive', file_name)
shutil.move(file_name, new_path)
Explanation: shutil
End of explanation
# counting files
import collections
print(collections.Counter(p.suffix for p in pathlib.Path.cwd().iterdir()))
print('漂亮', collections.Counter(p.suffix for p in pathlib.Path.cwd().glob('*.t*')))
Explanation: collections Counter
End of explanation |
13,379 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I performed feature selection using ExtraTreesClassifier and SelectFromModel on a data set loaded as a DataFrame; however, I want to save the selected features while keeping their column names as well. So is there a way to get the selected column names from the SelectFromModel method? Note that the output is a numpy array holding the important features as whole columns, not the column headers. Please help me with the code below.
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
import numpy as np
X, y = load_data()
clf = ExtraTreesClassifier(random_state=42)
clf = clf.fit(X, y)
model = SelectFromModel(clf, prefit=True)
column_names = X.columns[model.get_support()] |
13,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Structured data prediction using BigQuery ML </h1>
This notebook illustrates
Step1: Restart the kernel so that the new packages are picked up.
Step2: Create BigQuery output dataset
If necessary, create a BigQuery dataset to store the trained model and artifacts of training.
(you can also do this from the GCP web console)
Step5: Create BigQuery training dataset
Please see this notebook for more context on this problem and how the features were chosen.
Step6: Note a few things about the query
Step8: Create ML model training query
This is the query to train the model
Step9: Note a few things about the above query
Step10: Once the above job is complete, you can look at the training loss
Step12: Evaluate the model on an independent dataset
Let's look at overall RMSE (notice the use of ML.EVALUATE)
Step14: We can write a more sophisticated evaluation that computes the mean absolute percent error (MAPE) and group it by the taxifare to see how the errors vary with amount (notice the use of ML.PREDICT)
Step17: Note that the error is quadratic -- it decreases and then increases with fare amount
Feature engineering
Let's create some features that will improve our prediction result
Step18: Train
Step19: Evaluate the model once it is trained.
Step20: Notice that, with the feature crosses and spatial functions, we have gotten a lower RMSE and somewhat addressed the problem of errors increasing with fare amount.
More data?
What if we train on more data? Note the sample=100 to use 10 million rows. This will take <b> 10-15 min </b>
Step21: It's better (\$4.80~ vs~ \$4.96, which is promising). We have to experiment with changing the resolution of the feature cross also -- because we have more data, it is possible that we could use more feature crosses
Step22: Geo visualization
Instead of grouping by the total amount, we can group by a spatial thing. Let's look at how the taxifare error varies depending on the dropoff point, by running the following query in the BigQuery Geo Viz | Python Code:
%pip install google-cloud-bigquery seaborn
Explanation: <h1> Structured data prediction using BigQuery ML </h1>
This notebook illustrates:
<ol>
<li> Training Machine Learning models using BQML
<li> Predicting with model
<li> Using spatial queries in BigQuery
<li> Building a linear regression model with feature crosses
</ol>
The goal is to predict taxifare given the starting and ending points.
Set up notebook environment
End of explanation
# change these to try this notebook out
PROJECT = 'cloud-training-demos'
import os
os.environ['PROJECT'] = PROJECT
%%bash
gcloud config set project $PROJECT
Explanation: Restart the kernel so that the new packages are picked up.
End of explanation
%%bash
#bq mk demos
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
Explanation: Create BigQuery output dataset
If necessary, create a BigQuery dataset to store the trained model and artifacts of training.
(you can also do this from the GCP web console)
End of explanation
import seaborn as sns
import pandas as pd
import numpy as np
import shutil
def create_input_dataset(split, sample=1000):
split is TRAIN or EVAL
sample=1000 pulls 1/1000 of full dataset
query=
WITH params AS (
SELECT
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
taxitrips AS (
SELECT
(tolls_amount + fare_amount) AS total_fare,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers
FROM
`nyc-tlc.yellow.trips`, daynames, params
WHERE
trip_distance > 0 AND fare_amount > 0 AND fare_amount < 100
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {0})) = params.{1}
)
SELECT *
FROM taxitrips
.format(sample, split)
return query
Explanation: Create BigQuery training dataset
Please see this notebook for more context on this problem and how the features were chosen.
End of explanation
query = create_input_dataset('TRAIN')
print(query)
trips = bq.query(query + " LIMIT 1000", project=PROJECT).to_dataframe()
trips.head()
trips.describe()
Explanation: Note a few things about the query:
* The main part of the query is at the bottom: (SELECT * from taxitrips).
* taxitrips does the bulk of the extraction for the NYC dataset, with the SELECT containing my training features and label.
* The WHERE removes data that I don't want to train on.
* The WHERE also includes a sampling clause to pick up only 1/1000th of the data
* I define a variable called TRAIN so that I can quickly build an independent EVAL set.
End of explanation
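To make the hash-based split concrete, here is a small illustrative sketch (not part of the original notebook): Python's hashlib.md5 stands in for BigQuery's FARM_FINGERPRINT purely for illustration, but the bucketing mirrors the query's MOD(..., sample) = params.TRAIN clause, so the same pickup_datetime always lands in the same split.
import hashlib
def which_split(pickup_datetime_str, sample=1000):
    # Deterministic bucket in [0, sample); bucket 1 -> TRAIN, bucket 2 -> EVAL, everything else unused.
    bucket = int(hashlib.md5(pickup_datetime_str.encode()).hexdigest(), 16) % sample
    if bucket == 1:
        return 'TRAIN'
    if bucket == 2:
        return 'EVAL'
    return 'UNUSED'
print(which_split('2014-03-01 08:15:00'))  # same timestamp -> same split on every run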
def create_train_query(dataset_query, model_name):
query = """
CREATE or REPLACE MODEL {0}
OPTIONS
(model_type='linear_reg', labels=['total_fare'], min_rel_progress=0.005, l2_reg=0.1) AS
{1}
""".format(model_name, dataset_query)
return query
train_query = create_train_query( create_input_dataset('TRAIN'), 'demos.taxifare_model' )
print(train_query)
Explanation: Create ML model training query
This is the query to train the model
End of explanation
bq.query(train_query, project=PROJECT).result()
Explanation: Note a few things about the above query:
* CREATE model is a safe way to ensure that you don't overwrite existing models. CREATE or REPLACE will … replace existing models.
* I specify my model type. Use linear_reg for regression problems and logistic_reg for classification problems.
* I specify that the total_fare column is the label.
* I ask that model training stop when the improvement is < 0.5%
* I specify an L2 regularization of 0.1 (this is optional, but shows you how to specify any optional parameters).
Train the ML model
This will take <b>5-10 min</b>. Wait for a message of the form "Job xyz completed".
End of explanation
def show_training_loss(model_name):
query = "SELECT iteration, loss from ML.TRAINING_INFO(MODEL {})".format(model_name)
print(query)
loss_df = bq.query(query, project=PROJECT).to_dataframe()
loss_df['loss'] = np.sqrt(loss_df['loss']) # mean square error to RMSE
if len(loss_df) > 1:
# Sometimes, BigQuery can compute a closed form solution.
# See: https://medium.com/google-cloud/bigquery-ml-gets-faster-by-computing-a-closed-form-solution-sometimes-1baa5a838eb6
loss_df.plot(x='iteration', y='loss');
else:
print(loss_df)
show_training_loss('demos.taxifare_model');
Explanation: Once the above job is complete, you can look at the training loss:
End of explanation
def create_eval_query(dataset_query, model_name):
query = """
SELECT
*,
SQRT( mean_squared_error ) AS rmse
FROM
ML.EVALUATE(MODEL {0},
(
{1}
))
""".format(model_name, dataset_query)
return query
eval_query = create_eval_query( create_input_dataset('EVAL'), 'demos.taxifare_model' )
print(eval_query)
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
eval_df
Explanation: Evaluate the model on an independent dataset
Let's look at overall RMSE (notice the use of ML.EVALUATE)
End of explanation
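Since the query above already aliases SQRT(mean_squared_error) as rmse, a quick way to read it off the returned dataframe (a small convenience line added here, not in the original notebook) is:
print('RMSE: ${:.2f}'.format(eval_df['rmse'].iloc[0]))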
def create_faceted_eval_query(dataset_query, model_name):
query = """
WITH predictions AS (
SELECT
total_fare,
ABS(total_fare - predicted_total_fare)/total_fare AS error,
ROUND(total_fare) AS dollars
FROM
ML.PREDICT(MODEL {0},
(
{1}
)))
SELECT
dollars,
-- mean absolute percent error
AVG(100 * error) AS MAPE
FROM predictions
GROUP BY dollars
ORDER BY
dollars
""".format(model_name, dataset_query)
return query
eval_query = create_faceted_eval_query( create_input_dataset('EVAL'), 'demos.taxifare_model')
print(eval_query)
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
ax = eval_df.plot(x='dollars', y='MAPE');
ax.set_xlim(5, 25)
ax.set_ylim(0,100)
Explanation: We can write a more sophisticated evaluation that computes the mean absolute percent error (MAPE) and group it by the taxifare to see how the errors vary with amount (notice the use of ML.PREDICT):
End of explanation
def create_input_dataset_fc(split, sample=1000):
"""split is TRAIN or EVAL
sample=1000 pulls 1/1000 of full dataset
"""
query = """
WITH params AS (
SELECT
0.1 AS RES,
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
taxitrips AS (
SELECT
(tolls_amount + fare_amount) AS total_fare,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
ST_GeogPoint(pickup_longitude, pickup_latitude) AS pickup,
ST_GeogPoint(dropoff_longitude, dropoff_latitude) AS dropoff,
passenger_count AS passengers
FROM
`nyc-tlc.yellow.trips`, daynames, params
WHERE
trip_distance > 0 AND fare_amount > 0 AND fare_amount < 100
and fare_amount >= 2.5 and pickup_longitude > -78 and pickup_longitude < -70
and dropoff_longitude > -78 and dropoff_longitude < -70 and pickup_latitude > 37
and pickup_latitude < 45 and dropoff_latitude > 37 and dropoff_latitude < 45
and passenger_count > 0
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {0})) = params.{1}
),
feateng AS (
SELECT
total_fare,
ST_Distance(pickup, dropoff) AS euclidean,
CONCAT(dayofweek, CAST(hourofday AS STRING)) AS dayhr_fc,
CONCAT(ST_AsText(ST_SnapToGrid(pickup, params.RES)),
ST_AsText(ST_SnapToGrid(dropoff, params.RES))) AS loc_fc
FROM
taxitrips, params
)
SELECT *
FROM feateng
""".format(sample, split)
return query
Explanation: Note that the error is quadratic -- it decreases and then increases with fare amount
Feature engineering
Let's create some features that will improve our prediction result:
<ol>
<li> Compute distance between pickup and dropoff points as ST_Distance
<li> Do a feature cross of day-hour combination to learn traffic
<li> Do a feature cross of pickup-dropoff points to learn tolls
</ol>
End of explanation
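To build intuition for the loc_fc feature cross before running the query, here is a rough Python illustration; this is an assumption-laden sketch in which plain rounding stands in for ST_SnapToGrid at RES=0.1 and the coordinates are made up.
def gridpt(lon, lat, res=0.1):
    # Snap a point to a res x res grid cell, similar in spirit to ST_SnapToGrid.
    return 'POINT({:.1f} {:.1f})'.format(round(lon / res) * res, round(lat / res) * res)
pickup_cell = gridpt(-73.982, 40.767)
dropoff_cell = gridpt(-73.871, 40.774)
loc_fc = pickup_cell + dropoff_cell  # one categorical key per (pickup cell, dropoff cell) pair
print(loc_fc)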
train_query = create_train_query( create_input_dataset_fc('TRAIN'), 'demos.taxifare_model_fc' )
print(train_query)
bq.query(train_query, project=PROJECT).result()
Explanation: Train: this will take about <b> 5-10 minutes </b>
End of explanation
show_training_loss('demos.taxifare_model_fc')
eval_query = create_eval_query( create_input_dataset_fc('EVAL'), 'demos.taxifare_model_fc' )
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
eval_df
eval_query = create_faceted_eval_query( create_input_dataset_fc('EVAL'), 'demos.taxifare_model_fc')
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
ax = eval_df.plot(x='dollars', y='MAPE');
ax.set_xlim(5, 25)
ax.set_ylim(0,100)
Explanation: Evaluate the model once it is trained.
End of explanation
train_query = create_train_query( create_input_dataset_fc('TRAIN', sample=100), 'demos.taxifare_model_fc10' )
bq.query(train_query, project=PROJECT).result()
show_training_loss('demos.taxifare_model_fc10')
eval_query = create_eval_query( create_input_dataset_fc('EVAL'), 'demos.taxifare_model_fc10' )
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
eval_df
Explanation: Notice that, with the feature crosses and spatial functions, we have gotten a lower RMSE and somewhat addressed the problem of errors increasing with fare amount.
More data?
What if we train on more data? Note the sample=100 to use 10 million rows. This will take <b> 10-15 min </b>
End of explanation
eval_query = create_faceted_eval_query( create_input_dataset_fc('EVAL'), 'demos.taxifare_model_fc10')
eval_df = bq.query(eval_query, project=PROJECT).to_dataframe()
ax = eval_df.plot(x='dollars', y='MAPE');
ax.set_xlim(5, 25)
ax.set_ylim(0,100)
Explanation: It's better (\$4.80 vs. \$4.96), which is promising. We have to experiment with changing the resolution of the feature cross as well -- because we have more data, it is possible that we could use finer-grained feature crosses
End of explanation
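As a back-of-the-envelope check on that idea (an illustrative calculation, not from the original notebook): the query's bounding box spans roughly 8 degrees of longitude and 8 degrees of latitude, so shrinking RES multiplies the number of grid cells, and the pickup x dropoff cross grows with the square of that count.
for res in [0.1, 0.05, 0.01]:
    cells = round(8 / res) ** 2  # 2-D grid cells per endpoint over the ~8 x 8 degree box
    print('RES={}: ~{} cells per endpoint, ~{} possible pickup x dropoff crosses'.format(res, cells, cells ** 2))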
%pip install google-cloud-bigquery seaborn
Explanation: Geo visualization
Instead of grouping by the total amount, we can group by a spatial thing. Let's look at how the taxifare error varies depending on the dropoff point, by running the following query in the BigQuery Geo Viz:
<pre>
WITH predictions AS (
SELECT
ABS(total_fare - predicted_total_fare)/total_fare AS error,
total_fare, pickup_gridpt, dropoff_gridpt
FROM
ML.PREDICT(MODEL demos.taxifare_model_fc,
(
WITH params AS (
SELECT
0.1 AS RES,
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
taxitrips AS (
SELECT
(tolls_amount + fare_amount) AS total_fare,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
ST_GeogPoint(pickup_longitude, pickup_latitude) AS pickup,
ST_GeogPoint(dropoff_longitude, dropoff_latitude) AS dropoff,
passenger_count AS passengers
FROM
`nyc-tlc.yellow.trips`, daynames, params
WHERE
trip_distance > 0 AND fare_amount > 0 AND fare_amount < 100
and fare_amount >= 2.5 and pickup_longitude > -78 and pickup_longitude < -70
and dropoff_longitude > -78 and dropoff_longitude < -70 and pickup_latitude > 37
and pickup_latitude < 45 and dropoff_latitude > 37 and dropoff_latitude < 45
and passenger_count > 0
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = params.EVAL
),
feateng AS (
SELECT
total_fare,
ST_Distance(pickup, dropoff) AS euclidean,
CONCAT(dayofweek, CAST(hourofday AS STRING)) AS dayhr_fc,
CONCAT(ST_AsText(ST_SnapToGrid(pickup, params.RES)),
ST_AsText(ST_SnapToGrid(dropoff, params.RES))) AS loc_fc,
ST_AsText(ST_SnapToGrid(pickup, params.RES)) AS pickup_gridpt,
ST_AsText(ST_SnapToGrid(dropoff, params.RES)) AS dropoff_gridpt
FROM
taxitrips, params
)
SELECT *
FROM feateng
)))
SELECT
dropoff_gridpt,
ST_GeogFromText(dropoff_gridpt) AS geom,
COUNT(error) AS numpts,
-- mean absolute percent error
AVG(100 * error) AS MAPE
FROM predictions
GROUP BY dropoff_gridpt
HAVING numpts > 100
</pre>
Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
End of explanation |
13,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1><span style="color
Step1: Simulate a gene tree with 14 tips and MRCA of 1M generations
Step2: Simulate sequences on single gene tree and write to NEXUS
When Ne is greater the gene tree is more likely to deviate from the species tree topology and branch lengths.
Step3: View an example locus
This shows the 2 haploid samples simulated for each tip in the species tree.
Step4: (1) Infer a tree under a relaxed molecular clock model
Step5: (2) Concatenated sequences from a species tree
Here we use concatenated sequence data from 100 loci where each represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of estimated edge lengths will decrease since we are now adequately modeling the genealogical variation when using concatenation.
Step6: To see the NEXUS file (data, parameters, priors)
Step7: (3) Tree inference (not fixed topology) and plotting support values
Here we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space. | Python Code:
# conda install ipyrad toytree mrbayes -c conda-forge -c bioconda
import toytree
import ipcoal
import ipyrad.analysis as ipa
Explanation: <h1><span style="color:gray">ipyrad-analysis toolkit:</span> mrbayes</h1>
In these analyses our interest is primarily in inferring accurate branch lengths under a relaxed molecular clock model. This means that tips are forced to line up at the present (time) but that rates of substitutions are allowed to vary among branches to best explain the variation in the sequence data.
There is a huge range of models that can be employed using mrbayes by employing different combinations of parameter settings, model definitions, and prior settings. The ipyrad-analysis tool here is intended to make it easy to run such jobs many times (e.g., distributed in parallel) once you have decided on your settings. In addition, we provide a number of pre-set models (e.g., clock_model=2) that may be useful for simple scenarios.
Here we use simulations to demonstrate the accuracy of branch length estimation when sequences come from a single versus multiple distinct genealogies (e.g., gene tree vs species tree), and show an option to fix the topology to speed up analyses when your only interest is to estimate branch lengths.
End of explanation
TREE = toytree.rtree.bdtree(ntips=8, b=0.8, d=0.2, seed=123)
TREE = TREE.mod.node_scale_root_height(1e6)
TREE.draw(ts='o', layout='d', scalebar=True);
Explanation: Simulate a species tree with 8 tips and MRCA of 1M generations
End of explanation
# init simulator
model = ipcoal.Model(TREE, Ne=2e4, nsamples=2, recomb=0)
# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=1, nsites=20000)
# write results to database file
model.write_concat_to_nexus(name="mbtest-1", outdir='/tmp', diploid=True)
# the simulated genealogy of haploid alleles
gene = model.df.genealogy[0]
# draw the genealogy
toytree.tree(gene).draw(ts='o', layout='d', scalebar=True);
Explanation: Simulate sequences on single gene tree and write to NEXUS
When Ne is greater, the gene tree is more likely to deviate from the species tree topology and branch lengths.
End of explanation
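A rough rule of thumb helps explain the Ne effect described above; as an assumption for illustration, treat Ne as the diploid effective population size, so two gene copies are expected to coalesce in about 2 * Ne generations, which can then be compared with the 1M-generation depth of the species tree.
tree_depth = 1e6
for Ne in [2e4, 1e5]:
    expected_coal = 2 * Ne  # expected pairwise coalescent time in generations (assumption: diploid Ne)
    print('Ne={:.0e}: ~{:.0f} generations to coalesce, {:.0%} of the tree depth'.format(Ne, expected_coal, expected_coal / tree_depth))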
model.draw_seqview(idx=0, start=0, end=50);
Explanation: View an example locus
This shows the 2 haploid samples simulated for each tip in the species tree.
End of explanation
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-1.nex",
name="itest-1",
workdir="/tmp",
clock_model=2,
constraints=TREE,
ngen=int(1e6),
nruns=2,
)
# summary of priors/params
print(mb.params)
# start the run
mb.run(force=True)
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-1.nex.con.tre", 10)
# scale root node to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);
# draw TRUE tree in orange on the same axes
TREE.draw(
axes=a,
ts='o', layout='d', scalebar=True,
edge_colors="darkorange",
node_sizes=0,
);
# check convergence statistics
mb.convergence_stats
Explanation: (1) Infer a tree under a relaxed molecular clock model
End of explanation
# init simulator
model = ipcoal.Model(TREE, Ne=1e5, nsamples=2, recomb=0)
# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=100, nsites=200)
# write results to database file
model.write_concat_to_nexus(name="mbtest-2", outdir='/tmp', diploid=True)
# the simulated genealogies of haploid alleles
genes = model.df.genealogy[:4]
# draw the genealogies of the first four loci
toytree.mtree(genes).draw_tree_grid(ts='o', layout='r');
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-2.nex",
workdir="/tmp",
name="itest-2",
clock_model=2,
constraints=TREE,
ngen=int(1e6),
nruns=2,
)
# summary of priors/params
print(mb.params)
# start the run
mb.run(force=True)
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-2.nex.con.tre", 10)
# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);
# draw true tree in orange on the same axes
TREE.draw(
axes=a,
ts='o', layout='d', scalebar=True,
edge_colors="darkorange",
node_sizes=0,
);
mb.convergence_stats
Explanation: (2) Concatenated sequences from a species tree
Here we use concatenated sequence data from 100 loci, where each locus represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of the estimated edge lengths to decrease, since concatenation does not adequately model this genealogical variation.
End of explanation
mb.print_nexus_string()
Explanation: To see the NEXUS file (data, parameters, priors):
End of explanation
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-2.nex",
name="itest-3",
workdir="/tmp",
clock_model=2,
ngen=int(2e6),
nruns=2,
)
# summary of priors/params
print(mb.params)
# start run
mb.run(force=True)
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-3.nex.con.tre", 10)
# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(
#ts='s',
layout='d',
scalebar=True,
node_sizes=18,
node_labels="prob{percent}",
);
Explanation: (3) Tree inference (not fixed topology) and plotting support values
Here we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space.
End of explanation |
13,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Early Reinforcement Learning
With the advances of modern computing power, the study of Reinforcement Learning is having a heyday. Machines are now able to learn complex tasks once thought to be solely in the domain of humans, from controlling the heating and cooling in massive data centers to beating grandmasters at Starcraft. As magnificent as it may seem today, it had humble roots many decades ago. Seeing how far it's come, it's a wonder to see how far it will go!
Let's take a step back in time to see how these early algorithms developed. Many of these algorithms make sense given the context of when they were created. Challenge yourself and see if you can come up with the same strategies given the right problem. Ok! Time to cozy up for a story.
<img src="images/hero.jpg" width="488" height="172">
This is the hero of our story, the gumdrop emoji. It was enjoying a cool winter day building a snowman when suddenly, it slipped and fell on a frozen lake of death.
<img src="images/lake.jpg" width="900" height="680">
The lake can be thought of as a 4 x 4 grid where the gumdrop can move left (0), down (1), right (2) and up (3). Unfortunately, this frozen lake of death has holes of death where if the gumdrop enters that square, it will fall in and meet an untimely demise. To make matters worse, the lake is surrounded by icy boulders that if the gumdrop attempts to climb, will have it slip back into its original position. Thankfully, at the bottom right of the lake is a safe ramp that leads to a nice warm cup of hot cocoa.
Set Up
We can try and save the gumdrop ourselves! This is a common game people begin their Reinforcement Learning journey with, and is included in the OpenAI's python package Gym and is aptly named FrozenLake-v0 (code). No time to waste, let's get the environment up and running. Run the below to install the needed libraries if they are not installed already.
Step1: NOTE
Step2: There are four methods from Gym that are going to be useful to us in order to save the gumdrop.
* make allows us to build the environment or game that we can pass actions to
* reset will reset an environment to it's starting configuration and return the state of the player
* render displays the environment for human eyes
* step takes an action and returns the player's next state.
Let's make, reset, and render the game. The output is an ANSI string with the following characters
Step3: If we print the state we'll get 0. This is telling us which square we're in. Each square is labeled from 0 to 15 from left to right, top to bottom, like this
Step4: We can make a simple print function to let us know whether it's game won, game over, or game on.
Step5: We can control the gumdrop ourselves with the step method. Run the below cell over and over again trying to move from the starting position to the goal. Good luck!
Step6: Were you able to reach the hot chocolate? If so, great job! There are multiple paths through the maze. One solution is [1, 1, 2, 2, 1, 2]. Let's loop through our actions in order to get used to interacting with the environment programmatically.
Step7: Nice, so we know how to get through the maze, but how do we teach that to the gumdrop? It's just some bytes in an android phone. It doesn't have our human insight.
We could give it our list of actions directly, but then it would be copying us and not really learning. This was a tricky one to the mathematicians and computer scientists originally trying to solve this problem. How do we teach a machine to do this without human insight?
Value Iteration
Let's turn the clock back on our time machines to 1957 to meet Mr. Richard Bellman. Bellman started his academic career in mathematics, but due to World War II, left his postgraduate studies at John Hopkins to teach electronics as part of the war effort (as chronicled by J. J. O'Connor and E. F. Robertson here). When the war was over, and it came time for him to focus on his next area of research, he became fascinated with Dynamic Programming
Step8: The Gym environment class has a handy property for finding the number of states in an environment called observation_space. In our case, there a 16 integer states, so it will label it as "Discrete". Similarly, action_space will tell us how many actions are available to the agent.
Let's take advantage of these to make our code portable between different lakes sizes.
Step10: We'll need some sort of function to figure out what the best neighboring cell is. The below function take's a cell of the lake, and looks at the current value mapping (to be called with current_values, and see's what the value of the adjacent state is corresponding to the given action.
Step12: But this doesn't find the best action, and the gumdrop is going to need that if it wants to greedily get off the lake. The get_max_neighbor function we've defined below takes a number corresponding to a cell as state_number and the same value mapping as get_neighbor_value.
Step14: Now, let's write our value iteration code. We'll write a function that comes out one step of the iteration by checking each state and finding its maximum neighbor. The values will be reshaped so that it's in the form of the lake, but the policy will stay as a list of ints. This way, when Gym returns a state, all we need to do is look at the corresponding index in the policy list to tell our agent where to go.
Step15: This is what our values look like after one step. Right now, it just looks like the lake. That's because we started with an array of zeros for current_values, and the terminal states of the lake were loaded in.
Step16: And this is what our policy looks like reshaped into the form of the lake. The -1's are terminal states. Right now, the agent will move left in any non-terminal state, because it sees all of those states as equal. Remember, if the gumdrop is along the leftmost side of the lake, and tries to move left, it will slip on a boulder and return to the same position.
Step17: There's one last step to apply the Bellman Equation, the discount! We'll multiply our next states by the discount and set that to our current_values. One loop done!
Step18: Run the below cell over and over again to see how our values change with each iteration. It should be complete after six iterations when the values no longer change. The policy will also change as the values are updated.
Step19: Have a completed policy? Let's see it in action! We'll update our play_game function to instead take our list of policies. That way, we can start in a random position and still get to the end.
Step20: Phew! Good job, team! The gumdrop made it out alive. So what became of our gumdrop hero? Well, the next day, it was making another snowman and fell onto an even more slippery and deadly lake. Doh! Turns out this story is part of a trilogy. Feel free to move onto the next section after your own sip of cocoa, coffee, tea, or poison of choice.
Policy Iteration
You may have noticed that the first lake was built with the parameter is_slippery=False. This time, we're going to switch it to True.
Step21: Hmm, looks the same as before. Let's try applying our old policy and see what happens.
Step23: Was there a game over? There's a small chance that the gumdrop made it to the end, but it's much more likely that it accidentally slipped and fell into a hole. Oh no! We can try repeatedly testing the above code cell over and over again, but it might take a while. In fact, this is a similar roadblock Bellman and his colleagues faced.
How efficient is Value Iteration? On our modern machines, this algorithm ran fairly quickly, but back in 1960, that wasn't the case. Let's say our lake is a long straight line like this
Step25: After we've calculated our new values, then we'll update the policy (and not the values) based on the maximum neighbor. If there's no change in the policy, then we're done. The below is very similar to our get_max_neighbor function. Can you see the differences?
Step27: To complete the Policy Iteration algorithm, we'll combine the two functions above. Conceptually, we'll be alternating between updating our value function and updating our policy function.
Step28: Next, let's modify the get_neighbor_value function to now include the slippery ice. Remember the P in the Bellman Equation above? It stands for the probability of ending up in a new state given the current state and action taken. That is, we'll take a weighted sum of the values of all possible states based on our chances to be in those states.
How does the physics of the slippery ice work? For this lake, whenever the gumdrop tries to move in a particular direction, there are three possible positions that it could end up with. It could move where it was intending to go, but it could also end up to the left or right of the direction it was facing. For instance, if it wanted to move right, it could end up on the square above or below it! This is depicted below, with the yellow squares being potential positions after attempting to move right.
<img src="images/slipping.jpg" width="360" height="270">
Each of these has an equal probability chance of happening. So since there are three outcomes, they each have about a 33% chance to happen. What happens if we slip in the direction of a boulder? No problem, we'll just end up not moving anywhere. Let's make a function to find what our possible locations could be given a policy and state coordinates.
Step30: Then, we can add it to get_neighbor_value to find the weighted value of all the possible states the gumdrop can end up in.
Step31: For Policy Iteration, we'll start off with a random policy if only because the Gumdrop doesn't know any better yet. We'll reset our current values while we're at it.
Step32: As before with Value Iteration, run the cell below multiple until the policy no longer changes. It should only take 2-3 clicks compared to Value Iteration's 6.
Step33: Hmm, does this work? Let's see! Run the cell below to watch the gumdrop slip its way to victory.
Step34: So what was the learned strategy here? The gumdrop learned to hug the left wall of boulders until it was down far enough to make a break for the exit. Instead of heading directly for it though, it took advantage of actions that did not have a hole of death in them. Patience is a virtue!
We promised this story was a trilogy, and yes, the next day, the gumdrop fell upon a frozen lake yet again.
Q Learning
Value Iteration and Policy Iteration are great techniques, but what if we don't know how big the lake is? With real world problems, not knowing how many potential states are can be a definite possibility.
Enter Chris Watkins. Inspired by how animals learn with delayed rewards, he came up with the idea of Q Learning as an evolution of Richard Sutton's Temporal Difference Learning. Watkins noticed that animals learn from positive and negative rewards, and that they often make mistakes in order to optimize a skill.
From this emerged the idea of a Q table. In the lake case, it would look something like this.
| |Left|Down|Right|Up|
|-|-|-|-|-|
|0| | | | |
|1| | | | |
|...| | | | |
Here's the strategy
Step36: Our new get_action function will help us read the q_table and find the best action.
First, we'll give the agent the ability to act randomly as opposed to choosing the best known action. This gives it the ability to explore and find new situations. This is done with a random chance to act randomly. So random!
When the Gumdrop chooses not to act randomly, it will instead act based on the best action recorded in the q_table. Numpy's argwhere is used to find the indexes with the maximum value in the q-table row corresponding to our current state. Since numpy is often used with higher dimensional data, each index is returned as a list of ints. Our indexes are really one dimensional since we're just looking within a single row, so we'll use np.squeeze to remove the extra brackets. To randomly select from the indexes, we'll use np.random.choice.
Step38: Here, we'll define how the q_table gets updated. We'll apply the Bellman Equation as before, but since there is so much luck involved between slipping and random actions, we'll update our q_table as a weighted average between the old_value we're updating and the future_value based on the best action in the next state. That way, there's a little bit of memory between old and new experiences.
Step39: We'll update our play_game function to take our table and mapping, and at the end, we'll return any updates to them. Once we observe new states, we'll check our mapping and add then to the table if space isn't allocated for them already.
Finally, for every state - action - new-state transition, we'll update the cell in q_table that corresponds to the state and action with the Bellman Equation.
There's a little secret to solving this lake problem, and that's to have a small negative reward when moving between states. Otherwise, the gumdrop will become too afraid of slipping in a death hole to explore out of what is thought to be safe positions.
Step40: Ok, time to shine, gumdrop emoji! Let's do one simulation and see what happens.
Step41: Unless the gumdrop was incredibly lucky, chances were, it fell in some death water. Q-learning is markedly different from Value Iteration or Policy Iteration in that it attempts to simulate how an animal learns in unknown situations. Since the layout of the lake is unknown to the Gumdrop, it doesn't know which states are death holes, and which ones are safe. Because of this, it's going to make many mistakes before it can start making successes.
Feel free to run the above cell multiple times to see how the gumdrop steps through trial and error. When you're ready, run the below cell to have the gumdrop play 1000 times.
Step42: Cats have nine lives, our Gumdrop lived a thousand! Moment of truth. Can it get out of the lake now that it matters? | Python Code:
# Ensure the right version of Tensorflow is installed.
!pip install tensorflow==2.6 --user
Explanation: Early Reinforcement Learning
With the advances of modern computing power, the study of Reinforcement Learning is having a heyday. Machines are now able to learn complex tasks once thought to be solely in the domain of humans, from controlling the heating and cooling in massive data centers to beating grandmasters at Starcraft. As magnificent as it may seem today, it had humble roots many decades ago. Seeing how far it's come, it's a wonder to see how far it will go!
Let's take a step back in time to see how these early algorithms developed. Many of these algorithms make sense given the context of when they were created. Challenge yourself and see if you can come up with the same strategies given the right problem. Ok! Time to cozy up for a story.
<img src="images/hero.jpg" width="488" height="172">
This is the hero of our story, the gumdrop emoji. It was enjoying a cool winter day building a snowman when suddenly, it slipped and fell on a frozen lake of death.
<img src="images/lake.jpg" width="900" height="680">
The lake can be thought of as a 4 x 4 grid where the gumdrop can move left (0), down (1), right (2) and up (3). Unfortunately, this frozen lake of death has holes of death where if the gumdrop enters that square, it will fall in and meet an untimely demise. To make matters worse, the lake is surrounded by icy boulders that if the gumdrop attempts to climb, will have it slip back into its original position. Thankfully, at the bottom right of the lake is a safe ramp that leads to a nice warm cup of hot cocoa.
Set Up
We can try and save the gumdrop ourselves! This is a common game people begin their Reinforcement Learning journey with, and is included in the OpenAI's python package Gym and is aptly named FrozenLake-v0 (code). No time to waste, let's get the environment up and running. Run the below to install the needed libraries if they are not installed already.
End of explanation
!pip install gym==0.12.5 --user
Explanation: NOTE: In the output of the above cell you may ignore any WARNINGS or ERRORS related to the dependency resolver.
If you get any related errors mentioned above please rerun the above cell.
End of explanation
import gym
import numpy as np
import random
env = gym.make('FrozenLake-v0', is_slippery=False)
state = env.reset()
env.render()
Explanation: There are four methods from Gym that are going to be useful to us in order to save the gumdrop.
* make allows us to build the environment or game that we can pass actions to
* reset will reset an environment to it's starting configuration and return the state of the player
* render displays the environment for human eyes
* step takes an action and returns the player's next state.
Let's make, reset, and render the game. The output is an ANSI string with the following characters:
* S for starting point
* F for frozen
* H for hole
* G for goal
* A red square indicates the current position
Note: Restart the kernel if the above libraries needed to be installed
End of explanation
print(state)
Explanation: If we print the state we'll get 0. This is telling us which square we're in. Each square is labeled from 0 to 15 from left to right, top to bottom, like this:
| | | | |
|-|-|-|-|
|0|1|2|3|
|4|5|6|7|
|8|9|10|11|
|12|13|14|15|
End of explanation
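A quick way to convince yourself of this numbering (a small check that is not in the original notebook) is to convert a few state numbers back to (row, column) pairs with divmod:
for s in [0, 5, 10, 15]:
    print(s, '->', divmod(s, 4))  # (row, column) on the 4 x 4 grid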
def print_state(state, done):
statement = "Still Alive!"
if done:
statement = "Cocoa Time!" if state == 15 else "Game Over!"
print(state, "-", statement)
Explanation: We can make a simple print function to let us know whether it's game won, game over, or game on.
End of explanation
#0 left
#1 down
#2 right
#3 up
# Uncomment to reset the game
#env.reset()
action = 2 # Change me, please!
state, _, done, _ = env.step(action)
env.render()
print_state(state, done)
Explanation: We can control the gumdrop ourselves with the step method. Run the below cell over and over again trying to move from the starting position to the goal. Good luck!
End of explanation
def play_game(actions):
state = env.reset()
step = 0
done = False
while not done and step < len(actions):
action = actions[step]
state, _, done, _ = env.step(action)
env.render()
step += 1
print_state(state, done)
actions = [1, 1, 2, 2, 1, 2] # Replace with your favorite path.
play_game(actions)
Explanation: Were you able to reach the hot chocolate? If so, great job! There are multiple paths through the maze. One solution is [1, 1, 2, 2, 1, 2]. Let's loop through our actions in order to get used to interacting with the environment programmatically.
End of explanation
LAKE = np.array([[0, 0, 0, 0],
[0, -1, 0, -1],
[0, 0, 0, -1],
[-1, 0, 0, 1]])
LAKE_WIDTH = len(LAKE[0])
LAKE_HEIGHT = len(LAKE)
DISCOUNT = .9 # Change me to be a value between 0 and 1.
current_values = np.zeros_like(LAKE)
Explanation: Nice, so we know how to get through the maze, but how do we teach that to the gumdrop? It's just some bytes in an android phone. It doesn't have our human insight.
We could give it our list of actions directly, but then it would be copying us and not really learning. This was a tricky one to the mathematicians and computer scientists originally trying to solve this problem. How do we teach a machine to do this without human insight?
Value Iteration
Let's turn the clock back on our time machines to 1957 to meet Mr. Richard Bellman. Bellman started his academic career in mathematics, but due to World War II, left his postgraduate studies at John Hopkins to teach electronics as part of the war effort (as chronicled by J. J. O'Connor and E. F. Robertson here). When the war was over, and it came time for him to focus on his next area of research, he became fascinated with Dynamic Programming: the idea of breaking a problem down into sub-problems and using recursion to solve the larger problem.
Eventually, his research landed him on Markov Decision Processes. These processes are a graphical way of representing how to make a decision based on a current state. States are connected to other states with positive and negative rewards that can be picked up along the way.
Sound familiar at all? Perhaps our Frozen Lake?
In the lake case, each cell is a state. The Hs and the G are a special type of state called a "Terminal State", meaning they can be entered, but they have no leaving connections. What of rewards? Let's say the value of losing our life is the negative opposite of getting to the goal and staying alive. Thus, we can assign the reward of entering a death hole as -1, and the reward of escaping as +1.
Bellman's first breakthrough with this type of problem is now known as Value Iteration (his original paper). He introduced a variable, gamma (γ), to represent discounted future rewards. He also introduced a function of policy (π) that takes a state (s), and outputs corresponding suggested action (a). The goal is to find the value of a state (V), given the rewards that occur when following an action in a particular state (R).
Gamma, the discount, is the key ingredient here. If my time steps were in days, and my gamma was .9, $100 would be worth $100 to me today, $90 tomorrow, $81 the day after, and so on. Putting this all together, we get the Bellman Equation
<img src="images/bellman_equation.jpg" width="500">
source: Wikipedia
In other words, the value of our current state, current_values, is equal to the discount times the value of the next state, next_values, given the policy the agent will follow. For now, we'll have our agent assume a greedy policy: it will move towards the state with the highest calculated value. If you're wondering what P is, don't worry, we'll get to that later.
Let's program it out and see it in action! We'll set up an array representing the lake with -1 as the holes, and 1 as the goal. Then, we'll set up an array of zeros to start our iteration.
End of explanation
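Before coding the full iteration, here is a tiny worked example of the discount (illustrative only): a reward of 1 that is k steps away is worth DISCOUNT**k from where you stand, which is exactly why the values computed below shrink with distance from the goal.
for k in range(1, 7):
    print('{} steps away -> worth {:.3f} today'.format(k, DISCOUNT ** k))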
print("env.observation_space -", env.observation_space)
print("env.observation_space.n -", env.observation_space.n)
print("env.action_space -", env.action_space)
print("env.action_space.n -", env.action_space.n)
STATE_SPACE = env.observation_space.n
ACTION_SPACE = env.action_space.n
STATE_RANGE = range(STATE_SPACE)
ACTION_RANGE = range(ACTION_SPACE)
Explanation: The Gym environment class has a handy property for finding the number of states in an environment called observation_space. In our case, there a 16 integer states, so it will label it as "Discrete". Similarly, action_space will tell us how many actions are available to the agent.
Let's take advantage of these to make our code portable between different lake sizes.
End of explanation
def get_neighbor_value(state_x, state_y, values, action):
"""Returns the value of a state's neighbor.
Args:
state_x (int): The state's horizontal position, 0 is the lake's left.
state_y (int): The state's vertical position, 0 is the lake's top.
values (float array): The current iteration's state values.
action (int): Which action to check the value for.
Returns:
The corresponding action's value.
"""
left = [state_y, state_x-1]
down = [state_y+1, state_x]
right = [state_y, state_x+1]
up = [state_y-1, state_x]
actions = [left, down, right, up]
direction = actions[action]
check_x = direction[1]
check_y = direction[0]
is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \
or check_x < 0 or check_x >= LAKE_WIDTH
value = values[state_y, state_x]
if not is_boulder:
value = values[check_y, check_x]
return value
Explanation: We'll need some sort of function to figure out what the best neighboring cell is. The below function take's a cell of the lake, and looks at the current value mapping (to be called with current_values, and see's what the value of the adjacent state is corresponding to the given action.
End of explanation
def get_state_coordinates(state_number):
state_x = state_number % LAKE_WIDTH
state_y = state_number // LAKE_HEIGHT
return state_x, state_y
def get_max_neighbor(state_number, values):
"""Finds the maximum valued neighbor for a given state.
Args:
state_number (int): the state to find the max neighbor for.
values (float array): the respective value of each state for
each cell of the lake.
Returns:
max_value (float): the value of the maximum neighbor.
policy (int): the action to take to move towards the maximum neighbor.
"""
state_x, state_y = get_state_coordinates(state_number)
# No policy or best value yet
best_policy = -1
max_value = -np.inf
# If the cell has something other than 0, it's a terminal state.
if LAKE[state_y, state_x]:
return LAKE[state_y, state_x], best_policy
for action in ACTION_RANGE:
neighbor_value = get_neighbor_value(state_x, state_y, values, action)
if neighbor_value > max_value:
max_value = neighbor_value
best_policy = action
return max_value, best_policy
Explanation: But this doesn't find the best action, and the gumdrop is going to need that if it wants to greedily get off the lake. The get_max_neighbor function we've defined below takes a number corresponding to a cell as state_number and the same value mapping as get_neighbor_value.
End of explanation
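As a quick sanity check of get_max_neighbor (an illustrative call, not part of the original notebook): state 14 sits directly left of the goal, so using the raw LAKE grid as the value map, its best neighbor should be the goal's value of 1, reached with action 2 (right).
print(get_max_neighbor(14, LAKE))  # expected output: (1, 2)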
def iterate_value(current_values):
"""Finds the future state values for an array of current states.
Args:
current_values (int array): the value of current states.
Returns:
next_values (int array): The value of states based on future states.
next_policies (int array): The recommended action to take in a state.
"""
next_values = []
next_policies = []
for state in STATE_RANGE:
value, policy = get_max_neighbor(state, current_values)
next_values.append(value)
next_policies.append(policy)
next_values = np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH))
return next_values, next_policies
next_values, next_policies = iterate_value(current_values)
Explanation: Now, let's write our value iteration code. We'll write a function that carries out one step of the iteration by checking each state and finding its maximum neighbor. The values will be reshaped so that they're in the form of the lake, but the policy will stay as a list of ints. This way, when Gym returns a state, all we need to do is look at the corresponding index in the policy list to tell our agent where to go.
End of explanation
next_values
Explanation: This is what our values look like after one step. Right now, it just looks like the lake. That's because we started with an array of zeros for current_values, and the terminal states of the lake were loaded in.
End of explanation
np.array(next_policies).reshape((LAKE_HEIGHT ,LAKE_WIDTH))
Explanation: And this is what our policy looks like reshaped into the form of the lake. The -1's are terminal states. Right now, the agent will move left in any non-terminal state, because it sees all of those states as equal. Remember, if the gumdrop is along the leftmost side of the lake, and tries to move left, it will slip on a boulder and return to the same position.
End of explanation
current_values = DISCOUNT * next_values
current_values
Explanation: There's one last step to apply the Bellman Equation, the discount! We'll multiply our next states by the discount and set that to our current_values. One loop done!
End of explanation
next_values, next_policies = iterate_value(current_values)
print("Value")
print(next_values)
print("Policy")
print(np.array(next_policies).reshape((4,4)))
current_values = DISCOUNT * next_values
Explanation: Run the below cell over and over again to see how our values change with each iteration. It should be complete after six iterations when the values no longer change. The policy will also change as the values are updated.
End of explanation
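If you would rather not re-run the cell by hand, here is an optional convenience loop (not in the original notebook) that repeats the same update until the values stop changing:
current_values = np.zeros_like(LAKE)
for i in range(20):
    next_values, next_policies = iterate_value(current_values)
    new_values = DISCOUNT * next_values
    if np.array_equal(new_values, current_values):
        print('Values stopped changing after iteration', i)
        break
    current_values = new_values
print(np.array(next_policies).reshape((LAKE_HEIGHT, LAKE_WIDTH)))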
def play_game(policy):
state = env.reset()
step = 0
done = False
while not done:
action = policy[state] # This line is new.
state, _, done, _ = env.step(action)
env.render()
step += 1
print_state(state, done)
play_game(next_policies)
Explanation: Have a completed policy? Let's see it in action! We'll update our play_game function to instead take our list of policies. That way, we can start in a random position and still get to the end.
End of explanation
env = gym.make('FrozenLake-v0', is_slippery=True)
state = env.reset()
env.render()
Explanation: Phew! Good job, team! The gumdrop made it out alive. So what became of our gumdrop hero? Well, the next day, it was making another snowman and fell onto an even more slippery and deadly lake. Doh! Turns out this story is part of a trilogy. Feel free to move onto the next section after your own sip of cocoa, coffee, tea, or poison of choice.
Policy Iteration
You may have noticed that the first lake was built with the parameter is_slippery=False. This time, we're going to switch it to True.
End of explanation
play_game(next_policies)
Explanation: Hmm, looks the same as before. Let's try applying our old policy and see what happens.
End of explanation
def find_future_values(current_values, current_policies):
"""Finds the next set of future values based on the current policy."""
next_values = []
for state in STATE_RANGE:
current_policy = current_policies[state]
state_x, state_y = get_state_coordinates(state)
# If the cell has something other than 0, it's a terminal state.
value = LAKE[state_y, state_x]
if not value:
value = get_neighbor_value(
state_x, state_y, current_values, current_policy)
next_values.append(value)
return np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH))
Explanation: Was there a game over? There's a small chance that the gumdrop made it to the end, but it's much more likely that it accidentally slipped and fell into a hole. Oh no! We can try repeatedly testing the above code cell over and over again, but it might take a while. In fact, this is a similar roadblock Bellman and his colleagues faced.
How efficient is Value Iteration? On our modern machines, this algorithm ran fairly quickly, but back in 1960, that wasn't the case. Let's say our lake is a long straight line like this:
| | | | | | | |
|-|-|-|-|-|-|-|
|S|F|F|F|F|F|H|
This is the worst case scenario for value iteration. In each iteration, we look at every state (s) and each action per state (a), so one step of value iteration is O(s*a). In the case of our lake line, each iteration only updates one cell. In other words, the value iteration step needs to be run s times. In total, that's O(s<sup>2</sup>a).
Back in 1960, that was computationally heavy, and so Ronald Howard developed an alteration of Value Iteration that mildly sacrificed mathematical accuracy for speed.
Here's the strategy: it was observed that the optimal policy often converged before value iteration was complete. To take advantage of this, we'll start with random policy. When we iterate over our values, we'll use this policy instead of trying to find the maximum neighbor. This has been coded out in find_future_values below.
End of explanation
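To put that O(s^2 * a) figure in perspective, here is a quick illustrative calculation (not from the original notebook):
a = 4  # actions per state
for s in [7, 16, 100, 1000]:
    print('{} states -> on the order of {} state-action checks for value iteration'.format(s, s * s * a))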
def find_best_policy(next_values):
"""Finds the best policy given a value mapping."""
next_policies = []
for state in STATE_RANGE:
state_x, state_y = get_state_coordinates(state)
# No policy or best value yet
max_value = -np.inf
best_policy = -1
if not LAKE[state_y, state_x]:
for policy in ACTION_RANGE:
neighbor_value = get_neighbor_value(
state_x, state_y, next_values, policy)
if neighbor_value > max_value:
max_value = neighbor_value
best_policy = policy
next_policies.append(best_policy)
return next_policies
Explanation: After we've calculated our new values, then we'll update the policy (and not the values) based on the maximum neighbor. If there's no change in the policy, then we're done. The below is very similar to our get_max_neighbor function. Can you see the differences?
End of explanation
def iterate_policy(current_values, current_policies):
"""Finds the future state values for an array of current states.
Args:
current_values (int array): the value of current states.
current_policies (int array): a list where each cell is the recommended
action for the state matching its index.
Returns:
next_values (int array): The value of states based on future states.
next_policies (int array): The recommended action to take in a state.
"""
next_values = find_future_values(current_values, current_policies)
next_policies = find_best_policy(next_values)
return next_values, next_policies
Explanation: To complete the Policy Iteration algorithm, we'll combine the two functions above. Conceptually, we'll be alternating between updating our value function and updating our policy function.
End of explanation
def get_locations(state_x, state_y, policy):
left = [state_y, state_x-1]
down = [state_y+1, state_x]
right = [state_y, state_x+1]
up = [state_y-1, state_x]
directions = [left, down, right, up]
num_actions = len(directions)
gumdrop_right = (policy - 1) % num_actions
gumdrop_left = (policy + 1) % num_actions
locations = [gumdrop_left, policy, gumdrop_right]
return [directions[location] for location in locations]
Explanation: Next, let's modify the get_neighbor_value function to now include the slippery ice. Remember the P in the Bellman Equation above? It stands for the probability of ending up in a new state given the current state and action taken. That is, we'll take a weighted sum of the values of all possible states based on our chances to be in those states.
How does the physics of the slippery ice work? For this lake, whenever the gumdrop tries to move in a particular direction, there are three possible positions that it could end up with. It could move where it was intending to go, but it could also end up to the left or right of the direction it was facing. For instance, if it wanted to move right, it could end up on the square above or below it! This is depicted below, with the yellow squares being potential positions after attempting to move right.
<img src="images/slipping.jpg" width="360" height="270">
Each of these has an equal probability chance of happening. So since there are three outcomes, they each have about a 33% chance to happen. What happens if we slip in the direction of a boulder? No problem, we'll just end up not moving anywhere. Let's make a function to find what our possible locations could be given a policy and state coordinates.
End of explanation
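A quick illustrative call (not in the original notebook): trying to move right (policy 2) from the top-left corner at x=0, y=0 can also land the gumdrop on the squares above or below it; the out-of-bounds square is handled later in get_neighbor_value by treating it as staying put.
print(get_locations(0, 0, 2))  # candidate squares [up, right, down]: [[-1, 0], [0, 1], [1, 0]]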
def get_neighbor_value(state_x, state_y, values, policy):
"""Returns the value of a state's neighbor.
Args:
state_x (int): The state's horizontal position, 0 is the lake's left.
state_y (int): The state's vertical position, 0 is the lake's top.
values (float array): The current iteration's state values.
policy (int): Which action to check the value for.
Returns:
The corresponding action's value.
"""
locations = get_locations(state_x, state_y, policy)
location_chance = 1.0 / len(locations)
total_value = 0
for location in locations:
check_x = location[1]
check_y = location[0]
is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \
or check_x < 0 or check_x >= LAKE_WIDTH
value = values[state_y, state_x]
if not is_boulder:
value = values[check_y, check_x]
total_value += location_chance * value
return total_value
Explanation: Then, we can add it to get_neighbor_value to find the weighted value of all the possible states the gumdrop can end up in.
End of explanation
current_values = np.zeros_like(LAKE)
policies = np.random.choice(ACTION_RANGE, size=STATE_SPACE)
np.array(policies).reshape((4,4))
Explanation: For Policy Iteration, we'll start off with a random policy if only because the Gumdrop doesn't know any better yet. We'll reset our current values while we're at it.
End of explanation
next_values, policies = iterate_policy(current_values, policies)
print("Value")
print(next_values)
print("Policy")
print(np.array(policies).reshape((4,4)))
current_values = DISCOUNT * next_values
Explanation: As before with Value Iteration, run the cell below multiple times until the policy no longer changes. It should only take 2-3 clicks compared to Value Iteration's 6.
End of explanation
play_game(policies)
Explanation: Hmm, does this work? Let's see! Run the cell below to watch the gumdrop slip its way to victory.
End of explanation
new_row = np.zeros((1, env.action_space.n))
q_table = np.copy(new_row)
q_map = {0: 0}
def print_q(q_table, q_map):
print("mapping")
print(q_map)
print("q_table")
print(q_table)
print_q(q_table, q_map)
Explanation: So what was the learned strategy here? The gumdrop learned to hug the left wall of boulders until it was down far enough to make a break for the exit. Instead of heading directly for it though, it took advantage of actions that did not have a hole of death in them. Patience is a virtue!
We promised this story was a trilogy, and yes, the next day, the gumdrop fell upon a frozen lake yet again.
Q Learning
Value Iteration and Policy Iteration are great techniques, but what if we don't know how big the lake is? With real world problems, not knowing how many potential states are can be a definite possibility.
Enter Chris Watkins. Inspired by how animals learn with delayed rewards, he came up with the idea of Q Learning as an evolution of Richard Sutton's Temporal Difference Learning. Watkins noticed that animals learn from positive and negative rewards, and that they often make mistakes in order to optimize a skill.
From this emerged the idea of a Q table. In the lake case, it would look something like this.
| |Left|Down|Right|Up|
|-|-|-|-|-|
|0| | | | |
|1| | | | |
|...| | | | |
Here's the strategy: our agent will explore the environment. As the agent observes new states, we'll add more rows to our table. Whenever it moves from one state to the next, we'll update the cell corresponding to the old state based on the Bellman Equation. The agent doesn't need to know what the probabilities are between transitions. It'll learn the value of these as it experiments.
For Q learning, this works by looking at the row that corresponds to the agent's current state. Then, we'll select the action with the highest value. There are multiple ways to initialize the Q-table, but for us, we'll start with all zeros. In that case, when selecting the best action, we'll randomly select between tied max values. If we don't, the agent will favor certain actions which will limit its exploration.
To be able to handle an unknown number of states, we'll initialize our q_table as one row to represent our initial state. Then, we'll make a dictionary to map new states to rows in the table.
End of explanation
def get_action(q_map, q_table, state_row, random_rate):
"""Find max-valued actions and randomly select from them."""
if random.random() < random_rate:
return random.randint(0, ACTION_SPACE-1)
action_values = q_table[state_row]
max_indexes = np.argwhere(action_values == action_values.max())
max_indexes = np.squeeze(max_indexes, axis=-1)
action = np.random.choice(max_indexes)
return action
Explanation: Our new get_action function will help us read the q_table and find the best action.
First, we'll give the agent the ability to act randomly as opposed to choosing the best known action. This gives it the ability to explore and find new situations. This is done with a random chance to act randomly. So random!
When the Gumdrop chooses not to act randomly, it will instead act based on the best action recorded in the q_table. Numpy's argwhere is used to find the indexes with the maximum value in the q-table row corresponding to our current state. Since numpy is often used with higher dimensional data, each index is returned as a list of ints. Our indexes are really one dimensional since we're just looking within a single row, so we'll use np.squeeze to remove the extra brackets. To randomly select from the indexes, we'll use np.random.choice.
End of explanation
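Here is a tiny demonstration of the tie-breaking described above (illustrative, not in the original notebook): with the q_table still all zeros and random_rate=0, every action in row 0 is tied, so repeated calls still spread across all four actions.
print([get_action(q_map, q_table, 0, random_rate=0) for _ in range(10)])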
def update_q(q_table, new_state_row, reward, old_value):
"""Returns an updated Q-value based on the Bellman Equation."""
learning_rate = .1 # Change to be between 0 and 1.
future_value = reward + DISCOUNT * np.max(q_table[new_state_row])
return old_value + learning_rate * (future_value - old_value)
Explanation: Here, we'll define how the q_table gets updated. We'll apply the Bellman Equation as before, but since there is so much luck involved between slipping and random actions, we'll update our q_table as a weighted average between the old_value we're updating and the future_value based on the best action in the next state. That way, there's a little bit of memory between old and new experiences.
End of explanation
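# (Added worked example; DISCOUNT = .9 is assumed purely for illustration.)
# With old_value = 0.0, reward = -.01 and a best next-state value of 0.5,
# the target is -.01 + .9 * .5 = .44, and the update moves one tenth of the
# way toward it:
print(0.0 + 0.1 * ((-0.01 + 0.9 * 0.5) - 0.0))  # ~0.044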
def play_game(q_table, q_map, random_rate, render=False):
state = env.reset()
step = 0
done = False
while not done:
state_row = q_map[state]
action = get_action(q_map, q_table, state_row, random_rate)
new_state, _, done, _ = env.step(action)
#Add new state to table and mapping if it isn't there already.
if new_state not in q_map:
q_map[new_state] = len(q_table)
q_table = np.append(q_table, new_row, axis=0)
new_state_row = q_map[new_state]
reward = -.01 #Encourage exploration.
if done:
reward = 1 if new_state == 15 else -1
current_q = q_table[state_row, action]
q_table[state_row, action] = update_q(
q_table, new_state_row, reward, current_q)
step += 1
if render:
env.render()
print_state(new_state, done)
state = new_state
return q_table, q_map
Explanation: We'll update our play_game function to take our table and mapping, and at the end, we'll return any updates to them. Once we observe new states, we'll check our mapping and add them to the table if space isn't allocated for them already.
Finally, for every state - action - new-state transition, we'll update the cell in q_table that corresponds to the state and action with the Bellman Equation.
There's a little secret to solving this lake problem, and that's to have a small negative reward when moving between states. Otherwise, the gumdrop will become too afraid of slipping in a death hole to explore out of what is thought to be safe positions.
End of explanation
# Run to refresh the q_table.
random_rate = 1
q_table = np.copy(new_row)
q_map = {0: 0}
q_table, q_map = play_game(q_table, q_map, random_rate, render=True)
print_q(q_table, q_map)
Explanation: Ok, time to shine, gumdrop emoji! Let's do one simulation and see what happens.
End of explanation
for _ in range(1000):
q_table, q_map = play_game(q_table, q_map, random_rate)
random_rate = random_rate * .99
print_q(q_table, q_map)
random_rate
Explanation: Unless the gumdrop was incredibly lucky, chances were, it fell in some death water. Q-learning is markedly different from Value Iteration or Policy Iteration in that it attempts to simulate how an animal learns in unknown situations. Since the layout of the lake is unknown to the Gumdrop, it doesn't know which states are death holes, and which ones are safe. Because of this, it's going to make many mistakes before it can start making successes.
Feel free to run the above cell multiple times to see how the gumdrop steps through trial and error. When you're ready, run the below cell to have the gumdrop play 1000 times.
End of explanation
q_table, q_map = play_game(q_table, q_map, 0, render=True)
Explanation: Cats have nine lives, our Gumdrop lived a thousand! Moment of truth. Can it get out of the lake now that it matters?
End of explanation |
13,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
String processing
Let us start by having a look at some of the functionality that is built into Python strings.
The Python string object
The Python string object has many useful features built into it. Let us look at some of these.
Step1: Note that the text starts and ends with some whitespace characters. One often wants to get rid of these.
Step2: It is possible to do this only for the left/right hand side of the string.
Step3: Another common scenario is to check whether or not a string starts with a particular word.
Step4: Note that the above returns False as the words have different capitalisation. One can work around this by forcing the input text to be lower case.
Step5: There is also an endswith() method that can be useful for checking file extensions.
Step6: Let us search for particular words within our string.
Step7: The find argument returns the index of the first letter of the search term.
Step8: If the search term is not found the find() method returns -1.
Step9: If we search for something that exists more than once we get the index of the first instance.
Step10: We can find the next instance by specifying the index to start the search from.
Step11: String objects also have functionality for enabling substitutions.
Step12: It is possible to specify the number of substitutions that one wishes to make.
Step13: One of the most useful features of string objects is the ability to split them based on a separator.
Step14: Note the extra items at the beginning and end of the line arising from the extra white spaces. One can get around this by using the strip() function.
Step15: This function is particularly useful when dealing with csv files.
Step16: However, depending on your CSV file it may be safer to do something along the lines of the below.
Does everyone know what list comprehension is?
Step17: Furthermore, if you know that you want to deal with integers you may even want to include the string-to-integer conversion as well.
Step18: There are many variations on the string operators described above. It is useful to familiarise yourself with the
Python documentation on strings.
Regular expressions
Regular expressions can be defined as a series of characters that define a search pattern.
Regular expressions can be very powerful. However, they can be difficult to build up. Often it is a process of trial and error. This means that once they have been created, and the trial and error process has been forgotten, it can be extremely difficult to understand what the regular expression does and why it is constructed the way it is.
Use regular expressions only as a last resort!
To use regular expressions in Python we need to import the re module.
Step19: Let us search for the word "cat".
Step20: There are two things to note here
Step21: Now suppose that we wanted the first letter to be any alphanumeric character. We can achieve this using the regular expression "word" meta character \w.
Step22: It is also possible to find all matches. However, note that this returns strings as opposed to regular expression match objects.
Step23: Similarly we can use regular expressions to perform substitutions.
Step24: However, more commonly we want to extract particular pieces of information from a string. For example the accession and version from the NCBI header. (Format
Step25: Note how horrible and incomprehensible the regular expression is.
It took me a couple of attempts to get this one right as I forgot that | is a regular expression meta character that needs to be escaped using a backslash \.
However, we can now access the groups specified by the parenthesis.
Step26: Individual groups can also be accessed. Note that the first group includes everything matched by the regular expression.
Step27: Let us have a look at a common pitfall when using regular expressions in Python
Step28: Basically match() only looks for a match at the beginning of the string to be searched. For more information see the
search() vs match() section in the Python documentation.
Finally if you are using the same regular expression many times you may find it advantageous to compile the regular expression. This may speed up your program. | Python Code:
some_text = " Postman Pat has a cat named Jess. "
Explanation: String processing
Let us start by having a look at some of the functionality that is built into Python strings.
The Python string object
The Python string object has many useful features built into it. Let us look at some of these.
End of explanation
stripped_text = some_text.strip()
stripped_text
Explanation: Note that the text starts and ends with some whitespace characters. One often wants to get rid of these.
End of explanation
some_text.lstrip()
some_text.rstrip()
Explanation: It is possible to do this only for the left/right hand side of the string.
End of explanation
stripped_text.startswith("postman")
Explanation: Another common scenario is to check whether or not a string starts with a particular word.
End of explanation
stripped_text.lower()
stripped_text.lower().startswith("postman")
Explanation: Note that the above returns False as the words have different capitalisation. One can work around this by forcing the input text to be lower case.
End of explanation
"/my/great/picture.png".endswith(".png")
Explanation: There is also an endswith() method that can be useful for checking file extensions.
End of explanation
some_text.find("cat")
Explanation: Let us search for particular words within our string.
End of explanation
some_text[20:23]
Explanation: The find argument returns the index of the first letter of the search term.
End of explanation
some_text.find("dog")
Explanation: If the search term is not found the find() method returns -1.
End of explanation
some_text.find("at")
Explanation: If we search for something that exists more than once we get the index of the first instance.
End of explanation
some_text.find("at", 12)
some_text.find("at", 22)
Explanation: We can find the next instance by specifying the index to start the search from.
End of explanation
"One cat, two cats, three cats.".replace("cat", "dog")
Explanation: String objects also have functionality for enabling substitutions.
End of explanation
"One cat, two cats, three cats.".replace("cat", "dog", 2)
Explanation: It is possible to specify the number of substitutions that one wishes to make.
End of explanation
some_text.split(" ")
Explanation: One of the most useful features of string objects is the ability to split them based on a separator.
End of explanation
some_text.strip().split(" ")
Explanation: Note the extra items at the beginning and end of the line arising from the extra white spaces. One can get around this by using the strip() function.
End of explanation
"1,2,3,4".split(",")
Explanation: This function is particularly useful when dealing with csv files.
End of explanation
[s.strip() for s in "1, 2,3, 4".split(",")]
Explanation: However, depending on your CSV file it may be safer to do something along the lines of the below.
Does everyone know what list comprehension is?
End of explanation
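# (Added sketch.) For real CSV files the standard-library csv module handles
# quoting and embedded commas more robustly than str.split(","):
import csv
from io import StringIO
print(next(csv.reader(StringIO('1, "2,5", 3\n'), skipinitialspace=True)))  # ['1', '2,5', '3']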
[int(s.strip()) for s in "1, 2,3, 4".split(",")]
Explanation: Furthermore, if you know that you want to deal with integers you may even want to include the string-to-integer conversion as well.
End of explanation
import re
some_text = " Postman Pat has a cat named Jess. "
Explanation: There are many variations on the string operators described above. It is useful to familiarise yourself with the
Python documentation on strings.
Regular expressions
Regular expressions can be defined as a series of characters that define a search pattern.
Regular expressions can be very powerful. However, they can be difficult to build up. Often it is a process of trial and error. This means that once they have been created, and the trial and error process has been forgotten, it can be extremely difficult to understand what the regular expression does and why it is constructed the way it is.
Use regular expressions only as a last resort!
To use regular expressions in Python we need to import the re module.
End of explanation
re.search(r"cat", some_text)
Explanation: Let us search for the word "cat".
End of explanation
match = re.search(r"cat", some_text)
if match:
print(some_text[match.start():match.end()])
Explanation: There are two things to note here:
We use a raw string to represent our regular expression
The regular expression search() method returns a match object (or None if no match is found)
The index of the first and last matched characters can be accessed as using the match object's start() and end() methods.
End of explanation
match = re.search(r"\wat", some_text)
if match:
print(match.string[match.start():match.end()])
Explanation: Now suppose that we wanted the first letter to be any alphanumeric character. We can achieve this using the regular expression "word" meta character \w.
End of explanation
matches = re.findall(r"\wat", some_text)
for m in matches:
print(m)
Explanation: It is also possible to find all matches. However, note that this returns strings as opposed to regular expression match objects.
End of explanation
re.sub(r"\wat", "dog", some_text)
Explanation: Similarly we can use regular expressions to perform substitutions.
End of explanation
ncbi_header = ">gi|568336023|gb|CM000663.2| Homo sapiens chromosome 1, GRCh38 reference primary assembly."
match = re.search(r">gi\|[0-9]*\|\w*\|(\w*).([0-9])*\|.*", ncbi_header)
Explanation: However, more commonly we want to extract particular pieces of information from a string. For example the accession and version from the NCBI header. (Format: ">gi|xx|dbsrc|accession.version|description".)
End of explanation
match.groups()
Explanation: Note how horrible and incomprehensible the regular expression is.
It took me a couple of attempts to get this one right as I forgot that | is a regular expression meta character that needs to be escaped using a backslash \.
However, we can now access the groups specified by the parenthesis.
End of explanation
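# (Added sketch, not part of the original material.) Named groups can make such
# expressions easier to read; this reuses the re module and the ncbi_header
# variable defined above.
m = re.search(r">gi\|\d+\|\w+\|(?P<accession>\w+)\.(?P<version>\d+)\|", ncbi_header)
if m:
    print(m.group("accession"), m.group("version"))  # CM000663 2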
match.group(0)
match.group(1)
match.group(2)
Explanation: Individual groups can also be accessed. Note that the first group includes everything matched by the regular expression.
End of explanation
re.search(r"cat", "my cat has a hat")
print( re.match(r"cat", "my cat has a hat") )
re.match(r"my", "my cat has a hat")
Explanation: Let us have a look at a common pitfall when using regular expressions in Python: the difference between the methods
search() and match().
End of explanation
cat_regex = re.compile(r"cat")
cat_regex.search("my cat has a hat")
Explanation: Basically match() only looks for a match at the beginning of the string to be searched. For more information see the
search() vs match() section in the Python documentation.
Finally if you are using the same regular expression many times you may find it advantageous to compile the regular expression. This may speed up your program.
End of explanation |
13,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IAF neurons singularity
This notebook describes how NEST handles the singularities appearing in the ODE's of integrate-and-fire model neurons with alpha- or exponentially-shaped current, when the membrane and the synaptic time-constants are identical.
Step1: For alpha-shaped currents we have
Step2: Non-singular case ($\tau_m\neq \tau_s$)
The propagator is
Step3: Note that the entry in the third line and the second column $A_{32}$ would also appear in the propagator matrix in case of an exponentially shaped current
Singular case ($\tau_m = \tau_s$)
We have
Step4: The propagator is
Step5: Numeric stability of propagator elements
In the limit $\tau_s\rightarrow\tau_m$ the entry $PA_{32}$ becomes numerically unstable, since both the numerator and the denominator go to zero.
1. We show that $PAs_{32}$ is the limit of $PA_{32}(\tau_s)$ for $\tau_s\rightarrow\tau_m$.
Step6: 2. The Taylor-series up to the second order of the function $PA_{32}(\tau_s)$ is
Step7: Therefore we have
$T(PA_{32}(\tau_s,\tau_m))=PAs_{32}+PA_{32}^{lin}+O(2)$ where $PA_{32}^{lin}=h^2(-\tau_m + \tau_s)*exp(-h/\tau_m)/(2C\tau_m^2)$
3. We define
$dev
Step8: Neuron, simulation and plotting parameters
Step9: Loop through epsilon array
Step10: Show maximum values of voltage traces | Python Code:
import sympy as sp
sp.init_printing(use_latex=True)
from sympy.matrices import zeros
tau_m, tau_s, C, h = sp.symbols('tau_m, tau_s, C, h')
Explanation: IAF neurons singularity
This notebook describes how NEST handles the singularities appearing in the ODE's of integrate-and-fire model neurons with alpha- or exponentially-shaped current, when the membrane and the synaptic time-constants are identical.
End of explanation
A = sp.Matrix([[-1/tau_s,0,0],[1,-1/tau_s,0],[0,1/C,-1/tau_m]])
Explanation: For alpha-shaped currents we have:
End of explanation
PA = sp.simplify(sp.exp(A*h))
PA
Explanation: Non-singular case ($\tau_m\neq \tau_s$)
The propagator is:
End of explanation
As = sp.Matrix([[-1/tau_m,0,0],[1,-1/tau_m,0],[0,1/C,-1/tau_m]])
As
Explanation: Note that the entry in the third line and the second column $A_{32}$ would also appear in the propagator matrix in case of an exponentially shaped current
Singular case ($\tau_m = \tau_s$)
We have
End of explanation
PAs = sp.simplify(sp.exp(As*h))
PAs
Explanation: The propagator is
End of explanation
PA_32 = PA.row(2).col(1)[0]
sp.limit(PA_32, tau_s, tau_m)
Explanation: Numeric stability of propagator elements
In the limit $\tau_s\rightarrow\tau_m$ the entry $PA_{32}$ becomes numerically unstable, since both the numerator and the denominator go to zero.
1. We show that $PAs_{32}$ is the limit of $PA_{32}(\tau_s)$ for $\tau_s\rightarrow\tau_m$:
End of explanation
PA_32_series = PA_32.series(x=tau_s,x0=tau_m,n=2)
PA_32_series
Explanation: 2. The Taylor-series up to the second order of the function $PA_{32}(\tau_s)$ is:
End of explanation
import nest
import numpy as np
import pylab as pl
Explanation: Therefore we have
$T(PA_{32}(\tau_s,\tau_m))=PAs_{32}+PA_{32}^{lin}+O(2)$ where $PA_{32}^{lin}=h^2(-\tau_m + \tau_s)*exp(-h/\tau_m)/(2C\tau_m^2)$
3. We define
$dev:=|PA_{32}-PAs_{32}|$
We also define $PA_{32}^{real}$, which is the correct value of $PA_{32}$ without miscalculation (instability).
In the following we assume $0<|\tau_s-\tau_m|<0.1$. We consider two different cases
a) When $dev \geq 2|PA_{32}^{lin}|$ we do not trust the numeric evaluation of $PA_{32}$, since it strongly deviates from the first order correction. In this case the error we make is
$|PAs_{32}-PA_{32}^{real}|\approx |P_{32}^{lin}|$
b) When $dev \le |2PA_{32}^{lin}|$ we trust the numeric evaluation of $PA_{32}$. In this case the maximal error occurs when $dev\approx 2 PA_{32}^{lin}$ due to numeric instabilities. The order of the error is again
$|PAs_{32}-PA_{32}^{real}|\approx |P_{32}^{lin}|$
The entry $A_{31}$ is numerically unstable too, and we treat it analogously.
Tests and examples
We will now show that the stability criterion explained above leads to a reasonable behavior for $\tau_s\rightarrow\tau_m$
End of explanation
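# (Added sketch, not from the original notebook.) The criterion above written as
# a small helper: fall back to the singular propagator entry whenever the generic
# expression deviates too much from its first-order Taylor correction. The
# arguments stand for numerical evaluations of the expressions derived above.
def stable_P32(P32, P32_singular, P32_linear):
    dev = abs(P32 - P32_singular)
    if dev >= 2.0 * abs(P32_linear):
        return P32_singular  # case a): do not trust the unstable evaluation
    return P32               # case b): the generic expression is acceptable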
taum = 10.
C_m = 250.
# array of distances between tau_m and tau_ex
epsilon_array = np.hstack(([0.],10.**(np.arange(-6.,1.,1.))))[::-1]
dt = 0.1
fig = pl.figure(1)
NUM_COLORS = len(epsilon_array)
cmap = pl.get_cmap('gist_ncar')
maxVs = []
Explanation: Neuron, simulation and plotting parameters
End of explanation
for i,epsilon in enumerate(epsilon_array):
nest.ResetKernel() # reset simulation kernel
nest.resolution = dt
# Current based alpha neuron
neuron = nest.Create('iaf_psc_alpha')
neuron.set(C_m=C_m, tau_m=taum, t_ref=0., V_reset=-70., V_th=1e32,
tau_syn_ex=taum+epsilon, tau_syn_in=taum+epsilon, I_e=0.)
# create a spike generator
spikegenerator_ex = nest.Create('spike_generator')
spikegenerator_ex.spike_times = [50.]
# create a voltmeter
vm = nest.Create('voltmeter', params={'interval':dt})
## connect spike generator and voltmeter to the neuron
nest.Connect(spikegenerator_ex, neuron, 'all_to_all', {'weight':100.})
nest.Connect(vm, neuron)
# run simulation for 200ms
nest.Simulate(200.)
# read out recording time and voltage from voltmeter
times = vm.get('events','times')
voltage = vm.get('events', 'V_m')
# store maximum value of voltage trace in array
maxVs.append(np.max(voltage))
# plot voltage trace
if epsilon == 0.:
pl.plot(times,voltage,'--',color='black',label='singular')
else:
pl.plot(times,voltage,color = cmap(1.*i/NUM_COLORS),label=str(epsilon))
pl.legend()
pl.xlabel('time t (ms)')
pl.ylabel('voltage V (mV)')
Explanation: Loop through epsilon array
End of explanation
fig = pl.figure(2)
pl.semilogx(epsilon_array,maxVs,color='red',label='maxV')
#show singular solution as horizontal line
pl.semilogx(epsilon_array,np.ones(len(epsilon_array))*maxVs[-1],color='black',label='singular')
pl.xlabel('epsilon')
pl.ylabel('max(voltage V) (mV)')
pl.legend()
pl.show()
Explanation: Show maximum values of voltage traces
End of explanation |
13,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference
Step1: imports for Python, Pandas
Step2: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source
Step3: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source
Step4: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
Read json file using read_json command
Step5: Find top 10 countries according to most number of projects
Step6: Find the top 10 major project themes (using column 'mjtheme_namecode')
Step7: Create a dataframe with the missing names filled in. | Python Code:
import pandas as pd
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
data source: http://jsonstudio.com/resources/
End of explanation
import json
from pandas.io.json import json_normalize
Explanation: imports for Python, Pandas
End of explanation
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization
End of explanation
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation
json_df = pd.read_json('data/world_bank_projects.json')
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
Read json file using read_json command
End of explanation
json_df.countryshortname.value_counts().sort_values(ascending=False).head(10)
Explanation: Find top 10 countries according to most number of projects
End of explanation
dfnorm = json_normalize([json_df.mjtheme_namecode[i][0] for i in range(len(json_df))])
dfnorm.name.value_counts()\
.sort_values(ascending=False).head(10)
Explanation: Find the top 10 major project themes (using column 'mjtheme_namecode')
End of explanation
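# (Added sketch, an extension of the exercise.) The line above only looks at the
# first theme of each project; counting every listed theme instead:
from itertools import chain
all_themes = json_normalize(list(chain.from_iterable(json_df.mjtheme_namecode)))
all_themes[all_themes.name != ''].name.value_counts().head(10)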
# create a new dataframe with unique project code and name
dfUniq = dfnorm[(dfnorm.name != '')].drop_duplicates('code')
# create index on column code
dfnorm = dfnorm.set_index('code')
dfUniq = dfUniq.set_index('code')
# construct the final dataframe by supplementing missing names from unique codes dataframe
finaldf = dfnorm.loc[:, dfnorm.columns.union(dfUniq.columns)]
finaldf.update(dfUniq)
#redo the query in #2 with supplemented data on project names
finaldf['name'].value_counts().sort_values(ascending=False).head(10)
Explanation: Create a dataframe with the missing names filled in.
End of explanation |
13,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
bfscraper
a notebook for scraping data from bringfido. Note: mechanize only works with Python 2.x.
Step1: Searching for Hotels in a Given City
In this section I will search for hotels in a user-specified city. The city I will use for this quick test is New Haven, CT.
Step2: Reading in form-returned data with requests
Unfortunately, the hotel information is not in the source returned from urlopen.read(). This is where requests comes to the rescue!
Step3: Conclusion
That's great, but it only returns 3 out of the 5 hotels from bringfido.com. Now to try scrapy and see if the results are any better...
Starting a scrapy project
Step4: Reading in a Hotel Page
In this section, I will figure out the code for reading in the comments for a single hotel. | Python Code:
from bs4 import BeautifulSoup
#import urllib.request
import requests
Explanation: bfscraper
a notebook for scraping data from bringfido. Note: mechanize only works with Python 2.x.
End of explanation
url="http://www.bringfido.com/lodging/city/new_haven_ct_us/"
try:
from urllib.request import Request, urlopen # Python 3
except:
from urllib2 import Request, urlopen # Python 2
q = Request(url)
q.add_header('User-Agent', 'Mozilla/5.0')
page = urlopen(q).read()
print(page)
soup = BeautifulSoup(page)
soup
Explanation: Searching for Hotels in a Given City
In this section I will search for hotels in a user-specified city. The city I will use for this quick test is New Haven, CT.
End of explanation
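# (Added note; an assumption rather than part of the original notebook.) Some
# sites block the default requests user agent, so the same header trick used
# with urlopen above can be applied here too:
html_ua = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})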
html = requests.get(url)
soup = BeautifulSoup(html.content, 'html.parser')
q = soup.findAll('div', id='results_list')
for i in q:
print(i)
soup.select('results_list')
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("http://www.bringfido.com/lodging/city/new_haven_ct_us/")
x = browser.find_element_by_id('results_list')
len(x.text)
url = "http://www.bringfido.com/lodging/service/search/?q=new%20haven"
hotel_json_text = requests.get(url).text
hotel_json_text
import json
hotels = json.loads(hotel_json_text)
for i in hotels['mappable_objects']:
print(i['name'])
print(' ')
Explanation: Reading in form-returned data with requests
Unfortunately, the hotel information is not in the source returned from urlopen.read(). This is where requests comes to the rescue!
End of explanation
import scrapy
from scrapy.cmdline import execute as scrapy_execute
scrapy_execute(argv=['/Applications/anaconda/bin/scrapy', 'startproject', 'bfscraps'])
Explanation: Conclusion
That's great, but it only returns 3 out of the 5 hotels from bringfido.com. Now to try scrapy and see if the results are any better...
Starting a scrapy project
End of explanation
url_quinta = "http://www.bringfido.com/lodging/70449/?cid=14745&ar=&dt=&rm=1&ad=1&ch=0&dg=1&rt=75.01"
q = Request(url_quinta)
q.add_header('User-Agent', 'Mozilla/5.0')
page = urlopen(q).read()
soup = BeautifulSoup(page)
soup
Explanation: Reading in a Hotel Page
In this section, I will figure out the code for reading in the comments for a single hotel.
End of explanation |
13,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. How many men and how many women were aboard the ship? Give two numbers separated by a space as the answer
Step1: 2. What fraction of the passengers survived? Compute the share of surviving passengers. Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
Step2: 3. What share of all passengers travelled in first class? Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
Step3: 4. How old were the passengers? Compute the mean and the median of the passengers' ages. Give two numbers separated by a space as the answer.
Step4: 5. Does the number of siblings correlate with the number of parents/children? Compute the Pearson correlation between the SibSp and Parch features.
Step5: 6. What is the most popular female first name on the ship? Extract the passenger's first name from the full name (the Name column). This task is a typical example of what a data analyst faces: the data are heterogeneous and noisy, yet the required information has to be extracted from them. Try parsing a few values of the Name column by hand and work out a rule for extracting first names and for separating them into female and male. | Python Code:
sex_counts = df['Sex'].value_counts()
print('{} {}'.format(sex_counts['male'], sex_counts['female']))
Explanation: 1. How many men and how many women were aboard the ship? Give two numbers separated by a space as the answer
End of explanation
survived_df = df['Survived']
count_of_survived = survived_df.value_counts()[1]
survived_percentage = 100.0 * count_of_survived / survived_df.value_counts().sum()
print("{:0.2f}".format(survived_percentage))
Explanation: 2. What fraction of the passengers survived? Compute the share of surviving passengers. Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
End of explanation
pclass_df = df['Pclass']
count_of_first_class_passengers = pclass_df.value_counts()[1]
first_class_percentage = 100.0 * count_of_first_class_passengers / survived_df.value_counts().sum()
print("{:0.2f}".format(first_class_percentage))
Explanation: 3. What share of all passengers travelled in first class? Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
End of explanation
ages = df['Age'].dropna()
print("{:0.2f} {:0.2f}".format(ages.mean(), ages.median()))
Explanation: 4. How old were the passengers? Compute the mean and the median of the passengers' ages. Give two numbers separated by a space as the answer.
End of explanation
correlation = df['SibSp'].corr(df['Parch'])
print("{:0.2f}".format(correlation))
Explanation: 5. Does the number of siblings correlate with the number of parents/children? Compute the Pearson correlation between the SibSp and Parch features.
End of explanation
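# (Added sketch.) An alternative way to pull first names with pandas string
# methods, useful as a sanity check of the rule-based approach below: for
# married women the first name usually appears inside parentheses, e.g.
# "Cumings, Mrs. John Bradley (Florence Briggs Thayer)".
in_braces = df[df['Sex'] == 'female']['Name'].str.extract(r'\(([^)]+)\)', expand=False)
in_braces.dropna().str.split().str[0].value_counts().head()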
def clean_name(name):
# First word before comma is a surname
s = re.search('^[^,]+, (.*)', name)
if s:
name = s.group(1)
# get name from braces (if in braces)
s = re.search('\(([^)]+)\)', name)
if s:
name = s.group(1)
# Remove the honorific (Miss./Mrs./Ms.)
name = re.sub('(Miss\. |Mrs\. |Ms\. )', '', name)
# Take the first word on the left and strip quotes
name = name.split(' ')[0].replace('"', '')
return name
names = df[df['Sex'] == 'female']['Name'].map(clean_name)
name_counts = names.value_counts()
name_counts.head()
print(name_counts.head(1).index.values[0])
Explanation: 6. What is the most popular female first name on the ship? Extract the passenger's first name from the full name (the Name column). This task is a typical example of what a data analyst faces: the data are heterogeneous and noisy, yet the required information has to be extracted from them. Try parsing a few values of the Name column by hand and work out a rule for extracting first names and for separating them into female and male.
End of explanation |
13,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Minimal word count
The following example is the "Hello, World!" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word.
There are many scenarios where all the data does not fit in memory. Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments.
Step2: Word count with comments
Below is mostly the same code as above, but with comments explaining every line in more detail. | Python Code:
# Run and print a shell command.
def run(cmd):
print('>> {}'.format(cmd))
!{cmd}
print('')
# Install apache-beam.
run('pip install --quiet apache-beam')
# Copy the input file into the local file system.
run('mkdir -p data')
run('gsutil cp gs://dataflow-samples/shakespeare/kinglear.txt data/')
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/get-started/try-apache-beam-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Try Apache Beam - Python
In this notebook, we set up your development environment and work through a simple example using the DirectRunner. You can explore other runners with the Beam Capability Matrix.
To navigate through different sections, use the table of contents. From View drop-down list, select Table of contents.
To run a code cell, you can click the Run cell button at the top left of the cell, or select it and press Shift+Enter. Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see Welcome to Colaboratory!.
Setup
First, you need to set up your environment, which includes installing apache-beam and downloading a text file from Cloud Storage to your local file system. We are using this file to test your pipeline.
End of explanation
import apache_beam as beam
import re
inputs_pattern = 'data/*'
outputs_prefix = 'outputs/part'
# Running locally in the DirectRunner.
with beam.Pipeline() as pipeline:
(
pipeline
| 'Read lines' >> beam.io.ReadFromText(inputs_pattern)
| 'Find words' >> beam.FlatMap(lambda line: re.findall(r"[a-zA-Z']+", line))
| 'Pair words with 1' >> beam.Map(lambda word: (word, 1))
| 'Group and sum' >> beam.CombinePerKey(sum)
| 'Format results' >> beam.Map(lambda word_count: str(word_count))
| 'Write results' >> beam.io.WriteToText(outputs_prefix)
)
# Sample the first 20 results, remember there are no ordering guarantees.
run('head -n 20 {}-00000-of-*'.format(outputs_prefix))
Explanation: Minimal word count
The following example is the "Hello, World!" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word.
There are many scenarios where all the data does not fit in memory. Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments.
End of explanation
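# (Added sketch, an assumption on top of the original example.) The same
# transforms can be unit-tested on a tiny in-memory input with Beam's testing
# utilities before running them over real files:
import re
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

with TestPipeline() as p:
    counts = (
        p
        | beam.Create(['to be or not to be'])
        | beam.FlatMap(lambda line: re.findall(r"[a-zA-Z']+", line))
        | beam.Map(lambda word: (word, 1))
        | beam.CombinePerKey(sum)
    )
    assert_that(counts, equal_to([('to', 2), ('be', 2), ('or', 1), ('not', 1)]))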
import apache_beam as beam
import re
inputs_pattern = 'data/*'
outputs_prefix = 'outputs/part'
# Running locally in the DirectRunner.
with beam.Pipeline() as pipeline:
# Store the word counts in a PCollection.
# Each element is a tuple of (word, count) of types (str, int).
word_counts = (
# The input PCollection is an empty pipeline.
pipeline
# Read lines from a text file.
| 'Read lines' >> beam.io.ReadFromText(inputs_pattern)
# Element type: str - text line
# Use a regular expression to iterate over all words in the line.
# FlatMap will yield an element for every element in an iterable.
| 'Find words' >> beam.FlatMap(lambda line: re.findall(r"[a-zA-Z']+", line))
# Element type: str - word
# Create key-value pairs where the value is 1, this way we can group by
# the same word while adding those 1s and get the counts for every word.
| 'Pair words with 1' >> beam.Map(lambda word: (word, 1))
# Element type: (str, int) - key: word, value: 1
# Group by key while combining the value using the sum() function.
| 'Group and sum' >> beam.CombinePerKey(sum)
# Element type: (str, int) - key: word, value: counts
)
# We can process a PCollection through other pipelines too.
(
# The input PCollection is the word_counts created from the previous step.
word_counts
# Format the results into a string so we can write them to a file.
| 'Format results' >> beam.Map(lambda word_count: str(word_count))
# Element type: str - text line
# Finally, write the results to a file.
| 'Write results' >> beam.io.WriteToText(outputs_prefix)
)
# Sample the first 20 results, remember there are no ordering guarantees.
run('head -n 20 {}-00000-of-*'.format(outputs_prefix))
Explanation: Word count with comments
Below is mostly the same code as above, but with comments explaining every line in more detail.
End of explanation |
13,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Safely refactoring ACLs and firewall rules
Changing ACLs or firewall rules (or filters) is one of the riskiest updates to a network. Even a small error can block connectivity for a large set of critical services or open up sensitive resources to the world at large. Earlier notebooks showed how to analyze filters for what they do and do not allow and how to make specific changes in a provably safe manner.
This notebook shows how to refactor complex filters in a way that the full impact of refactoring can be understood and analyzed for correctness before refactored filters are pushed to the network.
Original ACL
We will use the following ACL as a running example in this notebook. The ACL can be read as a few separate sections
Step3: Compressed ACL
Now, assume that we want to compress this ACL to make it more manageable. We do the following operations
Step4: The challenge for us is to find out if and how this compressed ACL differs from the original. That is, is there is traffic that is treated differently by the two ACLs, and if so, which lines are responsible for the difference.
This task is difficult to get right through manual reasoning alone, which is why we developed the compareFilters question in Batfish.
Comparing filters
We can compare the two ACLs above as follows. To initialize snapshots, we will use Batfish's init_snapshot_from_text function which creates a snapshot with a single device who configuration is the provided text. The analysis shown below can be done even when the filters are embedded within bigger device configurations.
Step6: The compareFilters question compares two filters and returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical. The analysis is exhaustive and considers all possible flows.
As we can see from the output above, our compressed ACL is not the same as the original one. In particular, line 210 of the compressed ACL will deny some flows that were being permitted by line 510 of the original; and line 510 of the compressed ACL will permit some flows that were being denied by line 220 of the original ACL. Because the permit statements correspond to ICMP traffic, we can tell that the traffic treated by the two filters is ICMP. To narrow learn specific source and destination IPs that are impacted, one may run the searchFilters question, as shown here.
By looking at the output above, we can immediately understand the difference
Step7: Given the split ACLs above, one analysis may be to figure out if each untrusted source subnet was included in a smaller ACL. Otherwise, we have lost protection that was present in the original ACL. We can accomplish this analysis via the findMatchingFilterLines question, as shown below.
Once we are satisfied with analysis of filters, for an end-to-end safety guarantee, we should also analyze if there are new flows that the network will allow (or disallow) after the change. Such an analysis can be done via the differentialReachability question, as shown here. | Python Code:
# The ACL before refactoring
original_acl =
ip access-list acl
10 deny icmp any any redirect
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3784
30 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3785
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
80 permit tcp 205.248.58.190/32 205.248.58.188/32 eq bgp
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
170 deny ip 11.125.64.0/19 any
180 permit ip 166.146.58.184/32 any
190 deny ip 218.66.57.0/24 any
200 deny ip 218.66.56.0/24 any
210 deny ip 218.67.71.0/24 any
220 deny ip 218.67.72.0/24 any
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
250 deny ip 54.203.159.1/32 any
260 permit ip 218.8.104.0/25 any
270 permit ip 218.8.104.128/25 any
280 permit ip 218.8.103.0/24 any
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
330 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
470 deny ip any 11.125.89.32/30
480 deny ip any 11.125.89.36/30
490 deny ip any 11.125.89.40/30
500 deny ip any 11.125.89.44/30
510 permit icmp any any echo-reply
520 deny ip any 11.36.199.216/30
530 deny ip any 11.36.199.36/30
540 deny ip any 11.36.199.2/30
550 deny ip any 11.36.199.52/30
560 deny ip any 11.36.199.20/30
570 deny ip any 11.125.82.216/30
580 deny ip any 11.125.82.220/32
590 deny ip any 11.125.82.36/30
600 deny ip any 11.125.82.12/30
610 deny ip any 11.125.80.136/30
620 deny ip any 11.125.80.141/32
630 deny ip any 11.125.87.48/30
640 deny ip any 11.125.87.168/30
650 deny ip any 11.125.87.173/32
660 deny ip any 11.125.90.56/30
670 deny ip any 11.125.90.240/30
680 deny ip any 11.125.74.224/30
690 deny ip any 11.125.91.132/30
700 deny ip any 11.125.89.132/30
710 deny ip any 11.125.89.12/30
720 deny ip any 11.125.92.108/30
730 deny ip any 11.125.92.104/32
740 deny ip any 11.125.92.28/30
750 deny ip any 11.125.92.27/32
760 deny ip any 11.125.92.160/30
770 deny ip any 11.125.92.164/32
780 deny ip any 11.125.92.204/30
790 deny ip any 11.125.92.202/32
800 deny ip any 11.125.93.192/29
810 deny ip any 11.125.95.204/30
820 deny ip any 11.125.95.224/30
830 deny ip any 11.125.95.180/30
840 deny ip any 11.125.95.156/30
850 deny tcp any any
860 deny icmp any any
870 deny udp any any
880 deny ip any any
Explanation: Safely refactoring ACLs and firewall rules
Changing ACLs or firewall rules (or filters) is one of the riskiest updates to a network. Even a small error can block connectivity for a large set of critical services or open up sensitive resources to the world at large. Earlier notebooks showed how to analyze filters for what they do and do not allow and how to make specific changes in a provably safe manner.
This notebook shows how to refactor complex filters in a way that the full impact of refactoring can be understood and analyzed for correctness before refactored filters are pushed to the network.
Original ACL
We will use the following ACL as a running example in this notebook. The ACL can be read as a few separate sections:
Line 10: Deny ICMP redirects
Lines 20, 23: Permit BFD traffic on certain blocks
Lines 40-80: Permit BGP traffic
Lines 90-100: Permit DNS traffic a /24 subnet while denying it from a /32 within that
Lines 110-500: Permit or deny IP traffic from certain subnets
Line 510: Permit ICMP echo reply
Lines 520-840: Deny IP traffic to certain subnets
Lines 850-880: Deny all other types of traffic
(The IP address space in the ACL appears all over the place because it has been anonymized via Netconan. Netconan preserves the super- and sub-prefix relationships when anonymizing IP addresses and prefixes.)
End of explanation
compressed_acl =
ip access-list acl
10 deny icmp any any redirect
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785
! 30 MERGED WITH LINE ABOVE
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
! 80 DECOMMISSIONED BGP SESSION
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
170 deny ip 11.125.64.0/19 any
! 180 REMOVED UNREACHABLE LINE
190 deny ip 218.66.56.0/23 any
! 200 MERGED WITH LINE ABOVE
210 deny ip 218.67.71.0/23 any
! 220 MERGED WITH LINE ABOVE
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
! 250 REMOVED UNREACHABLE LINE
260 permit ip 218.8.104.0/24 any
! 270 MERGED WITH LINE ABOVE
280 permit ip 218.8.103.0/24 any
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
330 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
470 deny ip any 11.125.89.32/28
510 permit icmp any any echo-reply
! 520-870 REMOVED UNNECESSARY DENIES
880 deny ip any any
Explanation: Compressed ACL
Now, assume that we want to compress this ACL to make it more manageable. We do the following operations:
Merge the two BFD permit statements on lines 20-30 into one statement using the range directive.
Remove the BGP session on line 80 because it has been decommissioned
Remove lines 180 and 250 because they are shadowed by earlier lines and will never match a packet. Such lines can be found via the filterLineReachability question, as shown here.
Merge pairs of lines (190, 200), (210, 220), and (260, 270) by combining their prefixes into a less specific prefix.
Remove all deny statements on lines 520-870. They are not needed given the final deny on line 880.
The result of these actions, which halve the ACL size, is shown below. To enable easy observation of changes, we have preserved the line numbers.
End of explanation
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize a snapshot with the original ACL
original_snapshot = bf.init_snapshot_from_text(
original_acl,
platform="cisco-nx",
snapshot_name="original",
overwrite=True)
# Initialize a snapshot with the compressed ACL
compressed_snapshot = bf.init_snapshot_from_text(
compressed_acl,
platform="cisco-nx",
snapshot_name="compressed",
overwrite=True)
# Now, compare the two ACLs in the two snapshots
answer = bf.q.compareFilters().answer(snapshot=compressed_snapshot, reference_snapshot=original_snapshot)
show(answer.frame())
Explanation: The challenge for us is to find out if and how this compressed ACL differs from the original. That is, is there traffic that is treated differently by the two ACLs, and if so, which lines are responsible for the difference.
This task is difficult to get right through manual reasoning alone, which is why we developed the compareFilters question in Batfish.
Comparing filters
We can compare the two ACLs above as follows. To initialize snapshots, we will use Batfish's init_snapshot_from_text function, which creates a snapshot with a single device whose configuration is the provided text. The analysis shown below can be done even when the filters are embedded within bigger device configurations.
End of explanation
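# (Added sketch; the parameters are an assumption based on the filterLineReachability
# question mentioned earlier -- check the Batfish documentation for the exact signature.)
# The shadowed lines 180 and 250 can be listed directly from the original snapshot:
unreachable = bf.q.filterLineReachability(filters="acl").answer(snapshot=original_snapshot)
show(unreachable.frame())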
smaller_acls =
ip access-list deny-icmp-redirect
10 deny icmp any any redirect
ip access-list permit-bfd
20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785
ip access-list permit-bgp-session
40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp
50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp
60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp
70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp
ip access-list acl-dns
90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
ip access-list deny-untrusted-sources-group1
110 deny ip 54.0.0.0/8 any
120 deny ip 163.157.0.0/16 any
130 deny ip 166.144.0.0/12 any
140 deny ip 198.170.50.0/24 any
150 deny ip 198.120.0.0/16 any
160 deny ip 11.36.192.0/19 any
ip access-list deny-untrusted-sources-group2
160 deny ip 11.36.192.0/20 any
190 deny ip 218.66.56.0/23 any
210 deny ip 218.67.71.0/23 any
230 deny ip 218.67.96.0/22 any
240 deny ip 8.89.120.0/22 any
ip access-list permit-trusted-sources
260 permit ip 218.8.104.0/24 any
280 permit ip 218.8.103.0/24 any
ip access-list deny-untrusted-sources-group3
290 deny ip 144.49.45.40/32 any
300 deny ip 163.255.18.63/32 any
310 deny ip 202.45.130.141/32 any
320 deny ip 212.26.132.18/32 any
300 deny ip 218.111.16.132/32 any
340 deny ip 218.246.165.90/32 any
350 deny ip 29.228.179.210/32 any
360 deny ip 194.181.135.214/32 any
370 deny ip 10.64.90.249/32 any
380 deny ip 207.70.46.217/32 any
390 deny ip 219.185.241.117/32 any
ip access-list deny-untrusted-sources-group4
400 deny ip 2.80.3.219/32 any
410 deny ip 27.212.145.150/32 any
420 deny ip 131.159.53.215/32 any
430 deny ip 214.220.213.107/32 any
440 deny ip 196.64.84.239/32 any
450 deny ip 28.69.250.136/32 any
460 deny ip 200.45.87.238/32 any
ip access-list acl-tail
470 deny ip any 11.125.89.32/28
510 permit icmp any any echo-reply
880 deny ip any any
Explanation: The compareFilters question compares two filters and returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical. The analysis is exhaustive and considers all possible flows.
As we can see from the output above, our compressed ACL is not the same as the original one. In particular, line 210 of the compressed ACL will deny some flows that were being permitted by line 510 of the original; and line 510 of the compressed ACL will permit some flows that were being denied by line 220 of the original ACL. Because the permit statements correspond to ICMP traffic, we can tell that the traffic treated differently by the two filters is ICMP. To learn the specific source and destination IPs that are impacted, one may run the searchFilters question, as shown here.
By looking at the output above, we can immediately understand the difference:
The first line is showing that the compressed ACL is denying some traffic on line 210 (with index 16) that the original ACL was permitting via line 510, and the compressed ACL is permitting some traffic on line 510 that the original ACL was denying via line 220.
It turns out that the address space merger we did for lines 210 and 220 in the original ACL, where we combined 218.67.72.0/24 and 218.67.71.0/24 into 218.67.71.0/23, was not correct. The other similar mergers of 218.66.57.0/24 and 218.66.56.0/24 into 218.66.56.0/23 and of 218.8.104.0/25 and 218.8.104.128/25 into 218.8.104.0/24 were correct.
The third line is showing that the compressed ACL is denying some traffic at the end of the ACL that the original ACL was permitting via line 80. This is an expected change of decommissioning the BGP session on line 80.
It is not always the case that refactoring is semantics preserving. Where compareFilters helps is succinctly enumerating all differences. Engineers can look at the differences and decide if the refactored filter meets their intent.
Splitting ACLs
Compressing large ACLs is one type of refactoring engineers do; another one is splitting a large ACL into multiple smaller ACLs and composing them on the same device or spreading across multiple devices in the network. Smaller ACLs are easier to maintain and evolve. However, the split operation is risky. We may forget to include in the smaller ACLs some protections that exist in the original ACL. We show how such splits can be safely done using Batfish.
Suppose we want to split the compressed ACL above into multiple smaller ACLs that handle different concerns. So, we should have different ACLs for different types of traffic and different ACLs for different logical groups of nodes in the network. The result of such splitting is shown below. For ease of exposition, we have retained the line numbers from the original ACL and mimic a scenario in which all ACLs live on the same device.
End of explanation
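# (Added sketch; treat the parameters as an assumption and consult the searchFilters
# documentation referenced above.) For example, to look for ICMP flows that the
# compressed ACL denies, one could ask something like:
icmp_denied = bf.q.searchFilters(
    headers=HeaderConstraints(ipProtocols=["icmp"]),
    filters="acl",
    action="deny").answer(snapshot=compressed_snapshot)
show(icmp_denied.frame())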
# Initialize a snapshot with the smaller ACLs
smaller_snapshot = bf.init_snapshot_from_text(
smaller_acls,
platform="cisco-nx",
snapshot_name="smaller",
overwrite=True)
# All untrusted subnets
untrusted_source_subnets = ["54.0.0.0/8",
"163.157.0.0/16",
"166.144.0.0/12",
"198.170.50.0/24",
"198.120.0.0/16",
"11.36.192.0/19",
"11.125.64.0/19",
"218.66.56.0/24",
"218.66.57.0/24",
"218.67.71.0/23",
"218.67.96.0/22",
"8.89.120.0/22"
]
for subnet in untrusted_source_subnets:
# Find which ACLs match traffic from this source subnet
answer = bf.q.findMatchingFilterLines(
headers=HeaderConstraints(srcIps=subnet),
filters="/deny-untrusted/").answer(snapshot=smaller_snapshot)
# Each source subnet should match exactly one ACL
af = answer.frame()
if len(af) == 1:
print("{} .... OK".format(subnet))
elif len(af) == 0:
print("{} .... ABSENT".format(subnet))
else:
print("{} .... Multiply present".format(subnet))
show(af)
Explanation: Given the split ACLs above, one analysis may be to figure out if each untrusted source subnet was included in a smaller ACL. Otherwise, we have lost protection that was present in the original ACL. We can accomplish this analysis via the findMatchingFilterLines question, as shown below.
Once we are satisfied with analysis of filters, for an end-to-end safety guarantee, we should also analyze if there are new flows that the network will allow (or disallow) after the change. Such an analysis can be done via the differentialReachability question, as shown here.
End of explanation |
13,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Read Data
Preprocessed and tokenized questions.
Step4: Extract a set of unique question texts (document corpus).
Step5: Train TF-IDF vectorizer
Create a bag-of-token-unigrams vectorizer.
Step6: Vectorize train and test sets, compute distances
Step7: Save features | Python Code:
from pygoose import *
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances
Explanation: Feature: TF-IDF Distances
Create TF-IDF vectors from question texts and compute vector distances between them.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'tfidf'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')
tokens = tokens_train + tokens_test
Explanation: Read Data
Preprocessed and tokenized questions.
End of explanation
all_questions_flat = np.array(tokens).ravel()
documents = list(set(' '.join(question) for question in all_questions_flat))
del all_questions_flat
Explanation: Extract a set of unique question texts (document corpus).
End of explanation
vectorizer = TfidfVectorizer(
encoding='utf-8',
analyzer='word',
strip_accents='unicode',
ngram_range=(1, 1),
lowercase=True,
norm='l2',
use_idf=True,
smooth_idf=True,
sublinear_tf=True,
)
vectorizer.fit(documents)
model_filename = 'tfidf_vectorizer_{}_ngrams_{}_{}_penalty_{}.pickle'.format(
vectorizer.analyzer,
vectorizer.ngram_range[0],
vectorizer.ngram_range[1],
vectorizer.norm,
)
kg.io.save(vectorizer, project.trained_model_dir + model_filename)
Explanation: Train TF-IDF vectorizer
Create a bag-of-token-unigrams vectorizer.
End of explanation
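As a quick sanity check (illustrative only, not part of the original pipeline), the fitted vectorizer can be applied to a made-up pair of questions to see what a single distance value looks like:
```
toy = vectorizer.transform(['how do i learn python quickly',
                            'what is the best way to learn python'])
print(cosine_distances(toy[0], toy[1])[0][0])
```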
def compute_pair_distances(pair):
q1_doc = ' '.join(pair[0])
q2_doc = ' '.join(pair[1])
pair_dtm = vectorizer.transform([q1_doc, q2_doc])
q1_doc_vec = pair_dtm[0]
q2_doc_vec = pair_dtm[1]
return [
cosine_distances(q1_doc_vec, q2_doc_vec)[0][0],
euclidean_distances(q1_doc_vec, q2_doc_vec)[0][0],
]
features = kg.jobs.map_batch_parallel(
tokens,
item_mapper=compute_pair_distances,
batch_size=1000,
)
X_train = np.array(features[:len(tokens_train)], dtype='float64')
X_test = np.array(features[len(tokens_train):], dtype='float64')
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
Explanation: Vectorize train and test sets, compute distances
End of explanation
feature_names = [
'tfidf_cosine',
'tfidf_euclidean',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
13,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Discrete Dynamic Movement Primitives
Arne Böckmann
What are DMPs
<table>
<tr>
<td>
<ul style="list-style-type
Step1: Adding temporal scaling
Add temporal scaling factor $\color{red}\tau$
\begin{align}
\color{red}\tau\dot{z} &= \alpha_z(\beta_z(g-y)-z)\
\color{red}\tau\dot{y} &= z\\
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1}) * \Delta t}{\color{red}\tau}\
y_t &= y_{t-1} + \frac{z_{t-1} * \Delta t}{\color{red}\tau}
\end{align}
Adding temporal scaling (Demo)
Step2: Shaping the Trajectory (The Transformation System)
Introduce the forcing term $\color{red}f$
\begin{align}
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1} + \color{red}f) * \Delta t}{\tau}\
\end{align}
* 'forces' the system into a certain shape.
* could be a function of time
Step3: Restoring Goal Convergence
Influence of $f$ should diminish over time
Introducing the phase $\color{red}s$
\begin{align}
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1} + \color{red}sf) * \Delta t}{\tau}\
\end{align}
$\color{red}s$ starts at 1 and exponentially decays to 0
Restoring Goal Convergence (The Canonical System)
$s$ can be generated by the following system
Step4: Sin Forcing Term and Canonical System (Demo)
Step5: Restoring Temporal Scaling
Because $f$ is time dependent
Solution
Step6: Imitating arbitrary Trajectories
Set $f(s) = \frac{\sum_{i=1}^N \psi_i(s)w_i}{\sum_{i=1}^N \psi_i(s)}$
$\psi_i$ are fixed basis functions (e.g. radial basis functions)
$N$ is the number of basis functions/weights
$f$ can be shaped by adjusting the $w_i$
$f$ can imitate any given trajectory using locally weighted regression (not part of this talk)
Imitating arbitrary Trajectories
This is a part of B-Human's current kick trajectory
Step7: Imitating arbitrary Trajectories (Demo)
Step8: Adding Spatial Scaling
$w_i$ are learned for one specific goal position
Are too strong/weak for other goal positions
Solution
Step9: Perturbation robustness
Step10: Extension to multiple dimensions
As long as the dimensions are independent
Use one Transformation System per dimension
Use the same Canonical System for all dimensions
Extensions
Oscillating movements
Enforcing a goal velocity
Obstacle avoidance
Closed-loop perception-action control
Movement synchronization
Rotation Dmps (quaternion based and rotation matrix based)
...
Recap
Dmps are a toolbox to imitate movements and modify them on the fly
Robust against perturbations
Always converge to the goal in time
Can be executed and modified in real-time
Can be extended to multiple dimensions
Can be extended using arbitrary coupling terms to achieve any desired behavior
Recap
Parameters
$\alpha_z$ and $\beta_z$ are constant $\rightarrow$ can be ignored
$\alpha_s$ can be calculated automatically
Good values for the parameters of $f$ can be found automatically
The weights of $f$ can be learned from demonstration using locally weighted regression
Examples | Python Code:
interact(plotPD, g=(-2.0, 6.0, 0.1), y_start=(-1.0, 2.0, 0.1), yd_start=(-50.0, 50.0, 5.0))
Explanation: Introduction to Discrete Dynamic Movement Primitives
Arne Böckmann
What are DMPs
<table>
<tr>
<td>
<ul style="list-style-type:disc">
<li>Dynamical systems</li>
<li>Represent goal directed movements in joint or task space </li>
<li>Guaranteed to converge to the goal</li>
<li>Scale spatial and temporal</li>
<li>Robust against pertubations</li>
<li>Adaptable online</li>
<li>Extendable</li>
<li>Learnable from demonstration</li>
</ul>
</td>
<td><img src="dmp.png"></td>
</tr>
</table>
Basic Idea
Use well understood stable dynamical system with convenient properties
Add nonlinear terms to achieve the desired movement behavior
Damped Spring Model
\begin{align}
\dot{z} &= \alpha_z(\beta_z(g-y)-z)\
\dot{y} &= z
\end{align}
$y$ - position, $g$ - goal position
$\alpha_z$, $\beta_z$ - Dampening constants. Use $\beta_z = \alpha_z/4$ for critical dampening.
Damped Spring Model
Difference Equations
\begin{align}
z_t &= z_{t-1} + \alpha_z(\beta_z(g-y_{t-1})-z_{t-1}) * \Delta t\
y_t &= y_{t-1} + z_{t-1} * \Delta t
\end{align}
Damped Spring Model (Demo)
End of explanation
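The plotPD demo above presumably wraps something like this minimal sketch (not from the original slides): Euler integration of the difference equations, with alpha_z = 25 chosen as a typical value.
```
import numpy as np

def spring_rollout(g, y_start=0.0, yd_start=0.0, alpha_z=25.0, dt=0.01, steps=300):
    beta_z = alpha_z / 4.0            # critical damping
    y, z, ys = y_start, yd_start, []
    for _ in range(steps):
        z = z + alpha_z * (beta_z * (g - y) - z) * dt
        y = y + z * dt
        ys.append(y)
    return np.array(ys)               # converges to g without overshoot
```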
interact(plotPDT, g=(-2.0, 6.0, 0.1), T=(0.01, 1.5, 0.1))
Explanation: Adding temporal scaling
Add temporal scaling factor $\color{red}\tau$
\begin{align}
\color{red}\tau\dot{z} &= \alpha_z(\beta_z(g-y)-z)\
\color{red}\tau\dot{y} &= z\\
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1}) * \Delta t}{\color{red}\tau}\
y_t &= y_{t-1} + \frac{z_{t-1} * \Delta t}{\color{red}\tau}
\end{align}
Adding temporal scaling (Demo)
End of explanation
interact(plotSin, K=(0.1, 1000.0, 0.1))
Explanation: Shaping the Trajectory (The Transformation System)
Introduce the forcing term $\color{red}f$
\begin{align}
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1} + \color{red}f) * \Delta t}{\tau}\
\end{align}
* 'forces' the system into a certain shape.
* could be a function of time: $f(t)$
Shaping the Trajectory (Demo)
set $f(t) = K * sin(t)$
End of explanation
interact(plotCS, T=(0.1, 2.0))
Explanation: Restoring Goal Convergence
Influence of $f$ should diminish over time
Introducing the phase $\color{red}s$
\begin{align}
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1} + \color{red}sf) * \Delta t}{\tau}\
\end{align}
$\color{red}s$ starts at 1 and exponentially decays to 0
Restoring Goal Convergence (The Canonical System)
$s$ can be generated by the following system:
\begin{align}
\tau \dot{s} = -\alpha_s s\\
s_t = s_{t-1} + \frac{-\alpha_s s_{t-1} \Delta t}{\tau}
\end{align}
* $\alpha_s$ defines how fast the system decays.
Canonical System (Demo)
End of explanation
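A minimal sketch (not from the original slides) of the phase variable; it also shows how $\alpha_s$ can be chosen automatically so that $s$ has decayed to a given small value at $t = \tau$.
```
import numpy as np

def canonical_rollout(tau=1.0, dt=0.01, s_end=0.01):
    alpha_s = -np.log(s_end)               # so that s(tau) ~= s_end
    s, ss = 1.0, []
    for _ in range(int(tau / dt)):
        s = s + (-alpha_s * s) * dt / tau  # Euler step of tau*s' = -alpha_s*s
        ss.append(s)
    return np.array(ss)
```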
interact(plotSinCS, sin_scale=(-1000, 1000, 1.0), T=(0.1, 1.5, 0.1), g=(0.0, 6.0, 0.1))
Explanation: Sin Forcing Term and Canonical System (Demo)
End of explanation
interact(plotSinCS2, sin_scale=(-1000, 1000, 1.0), T=(0.1, 1.5, 0.1), g=(0.0, 6.0, 0.1))
Explanation: Restoring Temporal Scaling
Because $f$ is time dependent
Solution: Replace the time dependency with a phase dependency
$f = f(s)$
Restoring Temporal Scaling (Demo)
End of explanation
plt.plot(np.linspace(start=0.0, stop=0.01 * len(values), num = len(values)), values); plt.xlabel("Time"); plt.ylabel("X Position")
Explanation: Imitating arbitrary Trajectories
Set $f(s) = \frac{\sum_{i=1}^N \psi_i(s)w_i}{\sum_{i=1}^N \psi_i(s)}$
$\psi_i$ are fixed basis functions (e.g. radial basis functions)
$N$ is the number of basis functions/weights
$f$ can be shaped by adjusting the $w_i$
$f$ can imitate any given trajectory using locally weighted regression (not part of this talk)
Imitating arbitrary Trajectories
This is a part of B-Human's current kick trajectory
End of explanation
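A sketch (not from the original slides) of the forcing term with Gaussian basis functions placed along the decaying phase; the weights w would normally be fitted to a demonstrated trajectory with locally weighted regression, here they are left at zero.
```
import numpy as np

N = 20
centers = np.exp(-3.0 * np.linspace(0, 1, N))   # basis centers along the decaying phase
widths = np.full(N, float(N) ** 2)              # width heuristic (an assumption)
w = np.zeros(N)                                 # learned weights would go here

def forcing_term(s):
    psi = np.exp(-widths * (s - centers) ** 2)  # Gaussian basis functions psi_i(s)
    return np.dot(psi, w) / (np.sum(psi) + 1e-10)
```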
interact(plotImitateDmp, g=(-10, 200, 1), T=(0.1, 1.5, 0.1), y0=(-30, 30, 1))
Explanation: Imitating arbitrary Trajectories (Demo)
End of explanation
interact(plotImitateScaleDmp, g=(-10, 200, 1), T=(0.1, 1.5, 0.1), y0=(-30, 30, 1))
Explanation: Adding Spatial Scaling
$w_i$ are learned for one specific goal position
Are too strong/weak for other goal positions
Solution: Scale the forcing term according to the difference $\color{red}d$ between the original trajectory and the modified one
\begin{align}
\color{red}d &= (y_e - y_0)/(y_{de} - y_{d0})\
z_t &= z_{t-1} + \frac{\alpha_z(\beta_z(g-y_{t-1})-z_{t-1} + \color{red}dsf) * \Delta t}{\tau}\
\end{align}
$y_{de}$ and $y_{d0}$ - End and start position of the demonstrated trajectory
Spatial Scaling (Demo)
End of explanation
interact(plotPerturbateDmp, p_loc=(0.01, 0.5, 0.01), p_strength=(-200, 200, 1))
Explanation: Perturbation robustness
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('Ge0GduY1rtE')
from IPython.display import YouTubeVideo
YouTubeVideo('SH3bADiB7uQ')
Explanation: Extension to multiple dimensions
As long as the dimensions are independent
Use one Transformation System per dimension
Use the same Canonical System for all dimensions
Extensions
Oscillating movements
Enforcing a goal velocity
Obstacle avoidance
Closed-loop perception-action control
Movement synchronization
Rotation Dmps (quaternion based and rotation matrix based)
...
Recap
Dmps are a toolbox to imitate movements and modify them on the fly
Robust against perturbations
Always converge to the goal in time
Can be executed and modified in real-time
Can be extended to multiple dimensions
Can be extended using arbitrary coupling terms to achieve any desired behavior
Recap
Parameters
$\alpha_z$ and $\beta_z$ are constant $\rightarrow$ can be ignored
$\alpha_s$ can be calculated automatically
Good values for the parameters of $f$ can be found automatically
The weights of $f$ can be learned from demonstration using locally weighted regression
Examples
End of explanation |
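To make the multi-dimensional extension above concrete, here is a sketch (not from the original slides) of one integration step for several dimensions at once; alpha_z = 25 and alpha_s ≈ -ln(0.01) are assumed defaults.
```
import numpy as np

def dmp_step(y, z, s, g, f, tau, dt, alpha_z=25.0, alpha_s=4.6):
    # y, z, g are arrays with one entry per dimension; f(s) returns one forcing
    # value per dimension; s is the single phase shared by all dimensions.
    beta_z = alpha_z / 4.0
    z = z + alpha_z * (beta_z * (g - y) - z + s * f(s)) * dt / tau
    y = y + z * dt / tau
    s = s + (-alpha_s * s) * dt / tau
    return y, z, s
```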
13,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Features
https
Step1: Product
~~users~~
~~orders~~
~~order frequency~~
~~reorder rate~~
recency
~~mean/std add_to_cart_order~~
etc.
Step2: User
Products purchased
Orders made
frequency and recency of orders
Aisle purchased from
Department purchased from
frequency and recency of reorders
tenure
mean order size
etc.
Step3: Aisle
users
orders
order frequency
reorder rate
recency
mean add_to_cart_order
etc.
Department
users
orders
order frequency
reorder rate
recency
mean add_to_cart_order
etc.
User Product Interaction (UP)
purchases
reorders
day since last purchase
order since last purchase
etc.
Step4: User aisle interaction (UA)
purchases
reorders
day since last purchase
order since last purchase
etc.
User department interaction (UD)
purchases
reorders
day since last purchase
order since last purchase
etc.
User time interaction (UT)
user preferred day of week
user preferred time of day
similar features for products and aisles
Combine
Step5: Train
Step6: Predict
Step7: CV
https
Step8: 0.372658477911
Step9: Model for predicting the number of reordered items | Python Code:
priors = priors.join(orders, on='order_id', rsuffix='_')
priors = priors.join(products, on='product_id', rsuffix='_')
priors.drop(['product_id_', 'order_id_'], inplace=True, axis=1)
Explanation: Features
https://www.kaggle.com/c/instacart-market-basket-analysis/discussion/35468
Here are some feature ideas that can help new participants get started, and maybe you will find something you have missed:
End of explanation
prods = pd.DataFrame()
prods['orders'] = priors.groupby(priors.product_id).size().astype(np.float32)
prods['order_freq'] = prods['orders'] / len(priors.order_id.unique())
prods['users'] = priors.groupby(priors.product_id).user_id.unique().apply(len)
prods['add_to_cart_order_mean'] = priors.groupby(priors.product_id).add_to_cart_order.mean()
prods['add_to_cart_order_std'] = priors.groupby(priors.product_id).add_to_cart_order.std()
prods['reorders'] = priors['reordered'].groupby(priors.product_id).sum().astype(np.float32)
prods['reorder_rate'] = (prods.reorders / prods.orders).astype(np.float32)
products = products.join(prods, on='product_id')
products.set_index('product_id', drop=False, inplace=True)
del prods
Explanation: Product
~~users~~
~~orders~~
~~order frequency~~
~~reorder rate~~
recency
~~mean/std add_to_cart_order~~
etc.
End of explanation
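A quick illustrative look at the resulting product features (not in the original notebook; it assumes the product_name column from products.csv is present):
```
print(products[products.orders > 100]
      .sort_values('reorder_rate', ascending=False)
      [['product_name', 'orders', 'reorder_rate']]
      .head())
```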
usr = pd.DataFrame()
usr['average_days_between_orders'] = orders.groupby('user_id')['days_since_prior_order'].mean().astype(np.float32)
usr["period"] = orders.groupby('user_id').days_since_prior_order.fillna(0).sum()
usr['nb_orders'] = orders.groupby('user_id').size().astype(np.int16)
users = pd.DataFrame()
users['total_items'] = priors.groupby('user_id').size().astype(np.int16)
users['all_products'] = priors.groupby('user_id')['product_id'].apply(set)
users['total_distinct_items'] = (users.all_products.map(len)).astype(np.int16)
users = users.join(usr)
del usr
users['average_basket'] = (users.total_items / users.nb_orders).astype(np.float32)
gc.collect()
print('user f', users.shape)
Explanation: User
Products purchased
Orders made
frequency and recency of orders
Aisle purchased from
Department purchased from
frequency and recency of reorders
tenure
mean order size
etc.
End of explanation
# %%cache userXproduct.pkl userXproduct
priors['user_product'] = priors.product_id + priors.user_id * 100000
d = dict()
for row in tqdm(priors.itertuples(), total=len(priors)):
z = row.user_product
if z not in d:
d[z] = (
1,
(row.order_number, row.order_id),
row.add_to_cart_order,
row.reordered
)
else:
d[z] = (
d[z][0] + 1,
max(d[z][1], (row.order_number, row.order_id)),
d[z][2] + row.add_to_cart_order,
d[z][3] + row.reordered
)
print('to dataframe (less memory)')
d = pd.DataFrame.from_dict(d, orient='index')
d.columns = ['nb_orders', 'last_order_id', 'sum_pos_in_cart', 'reorders']
d.nb_orders = d.nb_orders.astype(np.int16)
d.last_order_id = d.last_order_id.map(lambda x: x[1]).astype(np.int32)
d.sum_pos_in_cart = d.sum_pos_in_cart.astype(np.int16)
userXproduct = d
print('user X product f', len(userXproduct))
Explanation: Aisle
users
orders
order frequency
reorder rate
recency
mean add_to_cart_order
etc.
Department
users
orders
order frequency
reorder rate
recency
mean add_to_cart_order
etc.
User Product Interaction (UP)
purchases
reorders
day since last purchase
order since last purchase
etc.
End of explanation
### build list of candidate products to reorder, with features ###
train_index = set(op_train.index)
def features(selected_orders, labels_given=False):
order_list = []
product_list = []
labels = []
for row in tqdm(selected_orders.itertuples(), total=len(selected_orders)):
order_id = row.order_id
user_id = row.user_id
user_products = users.all_products[user_id]
product_list += user_products
order_list += [order_id] * len(user_products)
if labels_given:
labels += [(order_id, product) in train_index for product in user_products]
df = pd.DataFrame({'order_id':order_list, 'product_id':product_list})
df.order_id = df.order_id.astype(np.int32)
df.product_id = df.product_id.astype(np.int32)
labels = np.array(labels, dtype=np.int8)
del order_list
del product_list
print('user related features')
df['user_id'] = df.order_id.map(orders.user_id).astype(np.int32)
df['user_total_orders'] = df.user_id.map(users.nb_orders)
df['user_total_items'] = df.user_id.map(users.total_items)
df['user_total_distinct_items'] = df.user_id.map(users.total_distinct_items)
df['user_average_days_between_orders'] = df.user_id.map(users.average_days_between_orders)
df['user_average_basket'] = df.user_id.map(users.average_basket)
df['user_period'] = df.user_id.map(users.period)
print('order related features')
# df['dow'] = df.order_id.map(orders.order_dow)
df['order_hour_of_day'] = df.order_id.map(orders.order_hour_of_day)
df['days_since_prior_order'] = df.order_id.map(orders.days_since_prior_order)
df['days_since_ratio'] = df.days_since_prior_order / df.user_average_days_between_orders
print('product related features')
df['aisle_id'] = df.product_id.map(products.aisle_id).astype(np.int8)
df['department_id'] = df.product_id.map(products.department_id).astype(np.int8)
df['product_orders'] = df.product_id.map(products.orders).astype(np.float32)
df['product_users'] = df.product_id.map(products.users).astype(np.float32)
df['product_order_freq'] = df.product_id.map(products.order_freq).astype(np.float32)
df['product_reorders'] = df.product_id.map(products.reorders).astype(np.float32)
df['product_reorder_rate'] = df.product_id.map(products.reorder_rate)
print('user_X_product related features')
df['z'] = df.product_id + df.user_id * 100000
df['UP_orders'] = df.z.map(userXproduct.nb_orders)
df['UP_orders_ratio'] = (df.UP_orders / df.user_total_orders).astype(np.float32)
df['UP_last_order_id'] = df.z.map(userXproduct.last_order_id)
df['UP_average_pos_in_cart'] = (df.z.map(userXproduct.sum_pos_in_cart) / df.UP_orders).astype(np.float32)
df['UP_reorders'] = df.z.map(userXproduct.reorders)
df['UP_orders_since_last'] = df.user_total_orders - df.UP_last_order_id.map(orders.order_number)
df['UP_delta_hour_vs_last'] = abs(df.order_hour_of_day - \
df.UP_last_order_id.map(orders.order_hour_of_day)).map(lambda x: min(x, 24-x)).astype(np.int8)
# df['UP_days_past_last_buy'] =
#df['UP_same_dow_as_last_order'] = df.UP_last_order_id.map(orders.order_dow) == \
# df.order_id.map(orders.order_dow)
df.drop(['UP_last_order_id', 'z'], axis=1, inplace=True)
gc.collect()
return (df, labels)
### train / test orders ###
print('split orders : train, test')
test_orders = orders[orders.eval_set == 'test']
train_orders = orders[orders.eval_set == 'train']
df_train, labels = features(train_orders, labels_given=True)
df_test, _ = features(test_orders)
Explanation: User aisle interaction (UA)
purchases
reorders
day since last purchase
order since last purchase
etc.
User department interaction (UD)
purchases
reorders
day since last purchase
order since last purchase
etc.
User time interaction (UT)
user preferred day of week
user preferred time of day
similar features for products and aisles
Combine
End of explanation
f_to_use = [
'user_total_orders', 'user_total_items', 'user_total_distinct_items',
'user_average_days_between_orders', 'user_average_basket',
'order_hour_of_day', 'days_since_prior_order', 'days_since_ratio',
'aisle_id', 'department_id', 'product_orders', 'product_reorders',
'product_reorder_rate', 'UP_orders', 'UP_orders_ratio',
'UP_average_pos_in_cart', 'UP_reorders', 'UP_orders_since_last',
'UP_delta_hour_vs_last'
]
def feature_select(df):
return df.drop(["user_id", "order_id", "product_id"], axis=1, errors="ignore")
params = {
'task': 'train',
'boosting_type': 'gbdt',
'objective': 'binary',
'metric': {'binary_logloss'},
'num_leaves': 96,
'feature_fraction': 0.9,
'bagging_fraction': 0.95,
'bagging_freq': 5
}
ROUNDS = 98
def train(traindf, y):
d_train = lgb.Dataset(
feature_select(traindf),
label=y,
categorical_feature=['aisle_id', 'department_id']
)
model = lgb.train(params, d_train, ROUNDS)
return model
model = train(df_train, labels)
Explanation: Train
End of explanation
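An optional inspection step (not in the original notebook): the trained LightGBM booster exposes feature_importance, which can be matched against the training columns to see what the model relies on.
```
importances = model.feature_importance(importance_type='gain')
names = feature_select(df_train).columns
for name, imp in sorted(zip(names, importances), key=lambda t: -t[1])[:10]:
    print("%s: %.1f" % (name, imp))
```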
def predict(model, df_test, TRESHOLD=0.19, predicted_basket_size=None):
### build candidates list for test ###
df_test['pred'] = model.predict(feature_select(df_test))
d = dict()
if not predicted_basket_size:
for row in df_test.itertuples():
if row.pred > TRESHOLD:
try:
d[row.order_id] += ' ' + str(row.product_id)
except KeyError:
d[row.order_id] = str(row.product_id)
else:
# Вот тут можно отрезать не по threshold, а с помощью модели определять кол-во покупок
current_order_id = None
current_order_count = 0
for row in df_test.sort_values(
by=["order_id", "pred"],
ascending=[False, False]
).itertuples():
order_id = row.order_id
if order_id != current_order_id:
current_order_id = order_id
current_order_count = 0
if current_order_count >= predicted_basket_size[current_order_id]:
continue
current_order_count += 1
try:
d[order_id] += ' ' + str(row.product_id)
except KeyError:
d[order_id] = str(row.product_id)
for order_id in df_test.order_id:
if order_id not in d:
d[order_id] = 'None'
sub = pd.DataFrame.from_dict(d, orient='index')
sub.reset_index(inplace=True)
sub.columns = ['order_id', 'products']
return sub
# Загружаем предсказанное кол-во покупок
predicted_basket_size = pd.read_csv("test_orders_products_count.csv", index_col="order_id")
predicted_basket_size = predicted_basket_size["pred_products_count"].to_dict()
sub = predict(model, df_test, predicted_basket_size=predicted_basket_size)
sub.to_csv('sub.csv', index=False)
Explanation: Predict
End of explanation
lgb.cv(params, d_train, ROUNDS, nfold=5, verbose_eval=10)
%%cache df_train_gt.pkl df_train_gt
from functools import partial
products_raw = pd.read_csv(IDIR + 'products.csv')
# combine aisles, departments and products (left joined to products)
goods = pd.merge(left=pd.merge(left=products_raw, right=departments, how='left'), right=aisles, how='left')
# to retain '-' and make product names more "standard"
goods.product_name = goods.product_name.str.replace(' ', '_').str.lower()
# retype goods to reduce memory usage
goods.product_id = goods.product_id.astype(np.int32)
goods.aisle_id = goods.aisle_id.astype(np.int16)
goods.department_id = goods.department_id.astype(np.int8)
# initialize it with train dataset
train_details = pd.merge(
left=op_train,
right=orders,
how='left',
on='order_id'
).apply(partial(pd.to_numeric, errors='ignore', downcast='integer'))
# add order hierarchy
train_details = pd.merge(
left=train_details,
right=goods[['product_id',
'aisle_id',
'department_id']].apply(partial(pd.to_numeric,
errors='ignore',
downcast='integer')),
how='left',
on='product_id'
)
train_gtl = []
for uid, subset in train_details.groupby('user_id'):
subset1 = subset[subset.reordered == 1]
oid = subset.order_id.values[0]
if len(subset1) == 0:
train_gtl.append((oid, 'None'))
continue
ostr = ' '.join([str(int(e)) for e in subset1.product_id.values])
# .strip is needed because join can have a padding space at the end
train_gtl.append((oid, ostr.strip()))
del train_details
del goods
del products_raw
gc.collect()
df_train_gt = pd.DataFrame(train_gtl)
df_train_gt.columns = ['order_id', 'products']
df_train_gt.set_index('order_id', inplace=True)
df_train_gt.sort_index(inplace=True)
from sklearn.model_selection import GroupKFold
def f1_score(cvpred):
joined = df_train_gt.join(cvpred, rsuffix="_cv", how="inner")
lgts = joined.products.replace("None", "-1").apply(lambda x: x.split(" ")).values
lpreds = joined.products_cv.replace("None", "-1").apply(lambda x: x.split(" ")).values
f1 = []
for lgt, lpred in zip(lgts, lpreds):
rr = (np.intersect1d(lgt, lpred))
precision = np.float(len(rr)) / len(lpred)
recall = np.float(len(rr)) / len(lgt)
denom = precision + recall
f1.append(((2 * precision * recall) / denom) if denom > 0 else 0)
return np.mean(f1)
def cv(threshold=0.22):
gkf = GroupKFold(n_splits=5)
scores = []
for train_idx, test_idx in gkf.split(df_train.index, groups=df_train.user_id):
dftrain = df_train.iloc[train_idx]
dftest = df_train.iloc[test_idx]
y = labels[train_idx]
model = train(dftrain, y)
pred = predict(model, dftest, threshold).set_index("order_id")
f1 = f1_score(pred)
        print(f1)
scores.append(f1)
del dftrain
del dftest
gc.collect()
return np.mean(scores), np.std(scores)
cv()
for th in np.arange(0.18, 0.22, 0.01):
    print(th)
    print(cv(threshold=th))
    print("")
Explanation: CV
https://www.kaggle.com/happycube/validation-demo-325-cv-3276-lb/notebook
End of explanation
0.18
0.375669602808
0.37518960199
0.376068733519
0.374880658158
0.371575669134
(0.37467685312194482, 0.0016027896306283745)
0.19
0.375981281546
0.375613273106
0.37623495823
0.374958453045
0.371884026622
(0.3749343985097483, 0.0015845275427144021)
0.2
0.376141810192
0.375593739202
0.375961736002
0.375124046483
0.371748172351
(0.37491390084571824, 0.001620734287706205)
0.21
0.375454836995
0.374657579102
0.375585106194
0.374639123067
0.371277685501
(0.37432286617177202, 0.0015722458019732746)
0.2
0.376141810192
0.375593739202
0.375961736002
0.375124046483
0.371748172351
(0.37491390084571824, 0.001620734287706205)
0.374504880043
0.372459365153
0.374241429517
0.373332070018
0.370178093483
(0.37294316764289259, 0.0015591904647740879) 0.22
0.370290530162
0.369518178297
0.370515696117
0.369568282123
0.3673846793
(0.36945547319979183, 0.0011069090226251931) 0.24
0.363691285892
0.363725106289
0.363492700824
0.364412180878
0.363024994542
(0.36366925368510306, 0.00044761289123321511) 0.26
Explanation: 0.372658477911
End of explanation
prior_orders_count = priors[["order_id", "reordered"]].groupby("order_id").sum()
prior_orders_count = prior_orders_count.rename(columns={"reordered": "product_counts"})
train_orders_count = op_train.drop(["product_id", "order_id"], axis=1, errors="ignore")
train_orders_count = train_orders_count.reset_index()[["order_id", "reordered"]].groupby("order_id").sum()
train_orders_count = train_orders_count.rename(columns={"reordered": "product_counts"})
prior_orders_count = orders.join(prior_orders_count, how='inner')
train_orders_count = orders.join(train_orders_count, how='inner')
def extend_prev_prod_count(df, period=1):
global prior_orders_count
prior_orders_count["next_order_number"] = prior_orders_count["order_number"] + period
mdf = prior_orders_count[["user_id", "next_order_number", "product_counts"]]
mdf = mdf.add_suffix("_prev%s" % period)
try:
return df.merge(
mdf,
left_on=["user_id", "order_number"],
right_on=["user_id_prev%s" % period, "next_order_number_prev%s" % period],
how="left",
).drop([
"next_order_number",
"next_order_number_prev%s" % period,
"user_id_prev%s" % period,
], axis=1, errors="ignore")
finally:
prior_orders_count.drop("next_order_number", axis=1, inplace=True)
train_orders_count = extend_prev_prod_count(train_orders_count, 1)
train_orders_count = extend_prev_prod_count(train_orders_count, 2)
prior_orders_count = extend_prev_prod_count(prior_orders_count, 1)
prior_orders_count = extend_prev_prod_count(prior_orders_count, 2)
prior_orders_count.head(15)
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
def get_order_count(order, alpha=0.5):
user_id = order["user_id"]
df = prior_orders_count[prior_orders_count["user_id"] == user_id]
feats = [
"order_number", "product_counts_prev1", "product_counts_prev2",
"order_dow", "order_hour_of_day", "days_since_prior_order"
]
X = df[feats].values
# X = np.nan_to_num(X, 0)
y = df["product_counts"].values
# create dataset for lightgbm
# lgb_train = lgb.Dataset(X, y)
# params = {
# 'task': 'train',
# 'boosting_type': 'gbdt',
# 'objective': 'regression',
# 'metric': {'rmse'},
# 'num_leaves': 100,
# 'learning_rate': 0.01,
# 'feature_fraction': 0.9,
# 'bagging_fraction': 0.8,
# 'bagging_freq': 5,
# 'verbose': 0,
# }
# clf = lgb.train(params,
# lgb_train,
# num_boost_round=40)
xgb_params = {
'max_depth': 3,
'n_estimators': 70,
'learning_rate': 0.05,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
dtrain_all = xgb.DMatrix(X, y)
clf = xgb.train(xgb_params, dtrain_all, num_boost_round=400)
# clf = Lasso(alpha=0.01)
# clf.fit(X, y)
Xpred = np.array([order[f] or 0 for f in feats]).reshape(1, -1)
# Xpred = np.nan_to_num(Xpred, 0)
Xpred = xgb.DMatrix(Xpred)
return int(round(np.round(clf.predict(Xpred)[0])))
df = train_orders_count.head(100)
df["pred_products_count"] = df.apply(get_order_count, axis=1)
print(mean_squared_error(
df["product_counts"],
df["pred_products_count"]
))
df = orders[orders.eval_set == 'test']
df = extend_prev_prod_count(df, 1)
df = extend_prev_prod_count(df, 2)
df["pred_products_count"] = df.progress_apply(get_order_count, axis=1)
df.to_csv("test_orders_products_count.csv", index=False, header=True)
df.head()
Explanation: Model for predicting the number of reordered items
End of explanation |
13,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scattering Function Normalization
Scott Prahl
Jan 2022
Step1: Solid Angles
Solid angles are the 3D analog of 2D angles. A radian $\theta$ is defined as the angle represented by an arc length on a circle divided by the radius $R$ of the circle
$$
\theta = \frac{\mbox{arc length}}{R}
$$
A steradian $\Omega$ is defined as the area on a surface of a sphere divided by the square of its radius $R$.
$$
\Omega = \frac{\mbox{surface area}}{R^2}
$$
Thus the total number of steradians in a sphere is $4\pi R^2/R^2=4\pi$
For example usage, a circular detector (pointed at the center) with radius $r_d$ at a distance $R$ from the center will subtend an approximate angle (the detector is flat and does not curve with the sphere surface)
$$
\Omega \approx \frac{\pi r_d^2}{R^2}
$$
(assuming $r_d\ll R$). Now if $P_0$ of light is scattered by a sphere located at the center then the scattered power on the detector will be
$$
P_d = P_0 \cdot \Omega \cdot p(\mu)
$$
where $\mu$ is the cosine of the angle between the incoming light and a ray to the center of the detector.
Normalization of the scattered light
Mie scattering is used in a wide variety of disciplines to describe the scattering pattern from spheres. Not surprisingly, a bunch of different normalizations have arisen that can be described based on integrating the scattering function over all directions or $4\pi$ steradians.
Integrating over all solid angles suggests the following normalizations
$$
\begin{align}
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 1 \qquad\qquad \mbox{one}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 4\pi \qquad\qquad \mbox{4pi}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= a \qquad\qquad \mbox{albedo (default)}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= Q_{sca} \qquad\qquad \mbox{qsca}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= Q_{ext} \qquad\qquad \mbox{qext}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 4\pi x^2 Q_{sca}\qquad\qquad \mbox{book}\[2mm]
\end{align}
$$
where $d\Omega=\sin\theta d\theta\,d\phi$ is a differential solid angle. The single scattering albedo is $a$
$$
a = \frac{Q_\mathrm{sca}}{Q_\mathrm{ext}} = \frac{Q_\mathrm{sca}}{Q_\mathrm{sca}+Q_\mathrm{abs}}
$$
and $Q_\mathrm{sca}$, $Q_\mathrm{ext}$, and $Q_\mathrm{abs}$ are the scattering, extinction, and absorption efficiencies.
Verifying normalization
The integral of the scattering function over all solid angles is
$$
\mbox{total} = \int_0^{2\pi}\int_0^\pi \, p(\theta,\phi)\,\sin\theta\,d\theta\,d\phi
$$
or with a change of variables $\mu=\cos\theta$ and using the symmetry to the integral in $\phi$
$$
\mbox{total} = 2\pi \int_{-1}^1 \, p(\mu)\,d\mu
$$
This integral can be done numerically by simply summing all the rectangles
$$
\mbox{total} = 2\pi \sum_{i=0}^N p(\mu_i)\,\Delta\mu_i
$$
which can be found using np.trapz()
Case 1. n=1.5, x=1
For this non-strongly peaked scattering function, the simple integration remains close to the expected value.
Step2: Case 2
Step3: Case III, evenly spaced $\theta$
For this non-strongly peaked scattering function, even spacing in $\theta$ improves the accuracy of the integration.
Step4: Differential Scattering Cross Section
The differential scattering cross section $\frac{d\sigma_{sca}}{d\Omega}$ is defined in terms of the total scattering cross section
$$
\sigma_\mathrm{sca} = \pi r^2 Q_\mathrm{sca} = \int_{4\pi} \frac{d\sigma_{sca}}{d\Omega}\,d\Omega
$$
Thus if the unpolarized scattering is normalized so its integral is the scattering efficiency
$$
Q_\mathrm{sca} = \int_{4\pi} p(\mu) \,d\Omega
$$
then
$$
\frac{d\sigma_{sca}}{d\Omega} = \pi r^2 p(\theta,\phi)
$$
The differential scattering cross section can be obtained miepython by normalizing to qsca and multiplying the result by the geometric cross section
diff_sca = np.pi * r**2 * miepython.i_unpolarized(m,x,mu,norm='qsca')
For example, here is a replica of figure 4
Step6: Comparison to Wiscombe's Mie Program
Wiscombe normalizes as
$$
\int_{4\pi} p(\theta,\phi) \,d\Omega = \pi x^2 Q_{sca}
$$
where $p(\theta)$ is the scattered light.
Once corrected for differences in phase function normalization, Wiscombe's test cases match those from miepython exactly.
Wiscombe's Test Case 14
Step8: Wiscombe's Test Case 10
Step10: Wiscombe's Test Case 7
Step12: Comparison to Bohren & Huffman's Mie Program
Bohren & Huffman normalizes as
$$
\int_{4\pi} p(\theta,\phi) \,d\Omega = 4 \pi x^2 Q_{sca}
$$
Bohren & Huffman's Test Case 14
Step13: Bohren & Huffman, water droplets
Tiny water droplet (0.26 microns) in clouds has pretty strong forward scattering! A graph of this is figure 4.9 in Bohren and Huffman's Absorption and Scattering of Light by Small Particles.
A bizarre scaling factor of $1/4$ is needed to make the miepython results match those in the figure 4.9. | Python Code:
#!pip install --user miepython
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
Explanation: Scattering Function Normalization
Scott Prahl
Jan 2022
End of explanation
m = 1.5
x = 1
mu = np.linspace(-1,1,501)
intensity = miepython.i_unpolarized(m,x,mu)
qext, qsca, qback, g = miepython.mie(m,x)
norms = ['albedo','one','4pi','qsca','qext','bohren','wiscombe']
expected = [qsca/qext,1.0,4*np.pi,qsca,qext,4*np.pi*x**2*qsca,np.pi*x**2*qsca]
plt.plot(mu,intensity)
plt.xlabel(r'$\cos(\theta)$')
plt.ylabel('Unpolarized Scattering Intensity [1/sr]')
plt.title('m=%.3f%+.3fj, x=%.2f, default normalization'%(m.real,m.imag,x))
plt.show()
print(' Normalization Total Expected')
for i,norm in enumerate(norms):
intensity = miepython.i_unpolarized(m,x,mu,norm)
total = 2 * np.pi * np.trapz(intensity, mu)
print("%14s %8.3f %8.3f" % (norm, total, expected[i]))
Explanation: Solid Angles
Solid angles are the 3D analog of 2D angles. A radian $\theta$ is defined as the angle represented by an arc length on a circle divided by the radius $R$ of the circle
$$
\theta = \frac{\mbox{arc length}}{R}
$$
A steradian $\Omega$ is defined as the area on a surface of a sphere divided by the square of its radius $R$.
$$
\Omega = \frac{\mbox{surface area}}{R^2}
$$
Thus the total number of steradians in a sphere is $4\pi R^2/R^2=4\pi$
For example usage, a circular detector (pointed at the center) with radius $r_d$ at a distance $R$ from the center will subtend an approximate angle (the detector is flat and does not curve with the sphere surface)
$$
\Omega \approx \frac{\pi r_d^2}{R^2}
$$
(assuming $r_d\ll R$). Now if $P_0$ of light is scattered by a sphere located at the center then the scattered power on the detector will be
$$
P_d = P_0 \cdot \Omega \cdot p(\mu)
$$
where $\mu$ is the cosine of the angle between the incoming light and a ray to the center of the detector.
Normalization of the scattered light
Mie scattering is used in a wide variety of disciplines to describe the scattering pattern from spheres. Not surprisingly, a bunch of different normalizations have arisen that can be described based on integrating the scattering function over all directions or $4\pi$ steradians.
Integrating over all solid angles suggests the following normalizations
$$
\begin{align}
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 1 \qquad\qquad \mbox{one}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 4\pi \qquad\qquad \mbox{4pi}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= a \qquad\qquad \mbox{albedo (default)}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= Q_{sca} \qquad\qquad \mbox{qsca}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= Q_{ext} \qquad\qquad \mbox{qext}\[2mm]
\int_{4\pi} p(\theta,\phi) \,d\Omega &= 4\pi x^2 Q_{sca}\qquad\qquad \mbox{book}\[2mm]
\end{align}
$$
where $d\Omega=\sin\theta d\theta\,d\phi$ is a differential solid angle. The single scattering albedo is $a$
$$
a = \frac{Q_\mathrm{sca}}{Q_\mathrm{ext}} = \frac{Q_\mathrm{sca}}{Q_\mathrm{sca}+Q_\mathrm{abs}}
$$
and $Q_\mathrm{sca}$, $Q_\mathrm{ext}$, and $Q_\mathrm{abs}$ are the scattering, extinction, and absorption efficiencies.
Verifying normalization
The integral of the scattering function over all solid angles is
$$
\mbox{total} = \int_0^{2\pi}\int_0^\pi \, p(\theta,\phi)\,\sin\theta\,d\theta\,d\phi
$$
or with a change of variables $\mu=\cos\theta$ and using the symmetry to the integral in $\phi$
$$
\mbox{total} = 2\pi \int_{-1}^1 \, p(\mu)\,d\mu
$$
This integral can be done numerically by simply summing all the rectangles
$$
\mbox{total} = 2\pi \sum_{i=0}^N p(\mu_i)\,\Delta\mu_i
$$
which can be found using np.trapz()
Case 1. n=1.5, x=1
For this non-strongly peaked scattering function, the simple integration remains close to the expected value.
End of explanation
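As a small worked example of the detector formula from the Solid Angles section (a sketch, not part of the original notebook; the detector geometry and scattered power are made-up values), using the 'one' normalization so that p(mu) integrates to unity:
```
r_d = 0.01                          # detector radius [m] (assumed)
R = 1.0                             # sphere-to-detector distance [m] (assumed)
omega = np.pi * r_d**2 / R**2       # detector solid angle [sr]
mu_det = np.cos(np.radians(30))     # detector 30 degrees off the incident beam
p = miepython.i_unpolarized(1.5, 1, np.array([mu_det]), 'one')[0]
P0 = 1.0                            # total scattered power [W] (assumed)
print("power on detector = %.3e W" % (P0 * omega * p))
```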
m = 1.5 - 1.5j
x = 1
mu = np.linspace(-1,1,501)
intensity = miepython.i_unpolarized(m,x,mu)
qext, qsca, qback, g = miepython.mie(m,x)
norms = ['albedo','one','4pi','qsca','qext','bohren','wiscombe']
expected = [qsca/qext,1.0,4*np.pi,qsca,qext,4*np.pi*x**2*qsca,np.pi*x**2*qsca]
plt.plot(mu,intensity)
plt.xlabel(r'$\cos(\theta)$')
plt.ylabel('Unpolarized Scattering Intensity [1/sr]')
plt.title('m=%.3f%+.3fj, x=%.2f, default normalization'%(m.real,m.imag,x))
plt.show()
print(' Normalization Total Expected')
for i,norm in enumerate(norms):
intensity = miepython.i_unpolarized(m,x,mu,norm)
total = 2 * np.pi * np.trapz(intensity, mu)
print("%14s %8.3f %8.3f" % (norm, total, expected[i]))
Explanation: Case 2: m=1.5-1.5j, x=1
For this non-strongly peaked scattering function, the simple integration remains close to the expected value.
End of explanation
m = 1.5-1.5j
x = 2
theta = np.linspace(np.pi,0,361)
mu = np.cos(theta)
intensity = miepython.i_unpolarized(m,x,mu)
qext, qsca, qback, g = miepython.mie(m,x)
norms = ['albedo','one','4pi','qsca','qext','bohren','wiscombe']
expected = [qsca/qext,1.0,4*np.pi,qsca,qext,4*np.pi*x**2*qsca,np.pi*x**2*qsca]
plt.plot(mu,intensity)
plt.xlabel(r'$\cos(\theta)$')
plt.ylabel('Unpolarized Scattering Intensity [1/sr]')
plt.title('m=%.3f%+.3fj, x=%.2f, default normalization'%(m.real,m.imag,x))
plt.show()
print(' Normalization Total Expected')
for i,norm in enumerate(norms):
intensity = miepython.i_unpolarized(m,x,mu,norm)
total = 2 * np.pi * np.trapz(intensity, mu)
print("%14s %8.3f %8.3f" % (norm, total, expected[i]))
Explanation: Case III, evenly spaced $\theta$
For this non-strongly peaked scattering function, even spacing in $\theta$ improves the accuracy of the integration.
End of explanation
m = 1.4-0j
lambda0 = 532e-9 # m
theta = np.linspace(0,180,1000)
mu = np.cos(theta* np.pi/180)
d = 1700e-9 # m
x = 2 * np.pi/lambda0 * d/2
geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2
qext, qsca, qback, g = miepython.mie(m,x)
sigma_sca = geometric_cross_section * miepython.i_unpolarized(m,x,mu,'qsca')
plt.semilogy(theta, sigma_sca*1e-3, color='blue')
plt.text(15, sigma_sca[0]*3e-4, "%.0fnm\n(x10$^{-3}$)" % (d*1e9), color='blue')
d = 170e-9 # m
x = 2 * np.pi/lambda0 * d/2
geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2
qext, qsca, qback, g = miepython.mie(m,x)
sigma_sca = geometric_cross_section * miepython.i_unpolarized(m,x,mu,'qsca')
plt.semilogy(theta, sigma_sca, color='red')
plt.text(110, sigma_sca[-1]/2, "%.0fnm" % (d*1e9), color='red')
d = 17e-9 # m
x = 2 * np.pi/lambda0 * d/2
geometric_cross_section = np.pi * d**2/4 * 1e4 # cm**2
qext, qsca, qback, g = miepython.mie(m,x)
sigma_sca = geometric_cross_section * miepython.i_unpolarized(m,x,mu,'qsca')
plt.semilogy(theta, sigma_sca*1e6, color='green')
plt.text(130, sigma_sca[-1]*1e6, "(x10$^6$)\n%.0fnm" % (d*1e9), color='green')
plt.title("Refractive index m=1.4, $\lambda$=532nm")
plt.xlabel("Scattering Angle (degrees)")
plt.ylabel("Diff. Scattering Cross Section (cm$^2$/sr)")
plt.grid(True)
plt.show()
Explanation: Differential Scattering Cross Section
The differential scattering cross section $\frac{d\sigma_{sca}}{d\Omega}$ is defined in terms of the total scattering cross section
$$
\sigma_\mathrm{sca} = \pi r^2 Q_\mathrm{sca} = \int_{4\pi} \frac{d\sigma_{sca}}{d\Omega}\,d\Omega
$$
Thus if the unpolarized scattering is normalized so its integral is the scattering efficiency
$$
Q_\mathrm{sca} = \int_{4\pi} p(\mu) \,d\Omega
$$
then
$$
\frac{d\sigma_{sca}}{d\Omega} = \pi r^2 p(\theta,\phi)
$$
The differential scattering cross section can be obtained miepython by normalizing to qsca and multiplying the result by the geometric cross section
diff_sca = np.pi * r**2 * miepython.i_unpolarized(m,x,mu,norm='qsca')
For example, here is a replica of figure 4
End of explanation
MIEV0 Test Case 14: Refractive index: real 1.500 imag -1.000E+00, Mie size parameter = 1.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.84080E-01 1.90515E-01 5.84080E-01 1.90515E-01 3.77446E-01 0.0000
30.00 0.866025 5.65702E-01 1.87200E-01 5.00161E-01 1.45611E-01 3.13213E-01 -0.1336
60.00 0.500000 5.17525E-01 1.78443E-01 2.87964E-01 4.10540E-02 1.92141E-01 -0.5597
90.00 0.000000 4.56340E-01 1.67167E-01 3.62285E-02 -6.18265E-02 1.20663E-01 -0.9574
x=1.0
m=1.5-1.0j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m, x, mu, 'wiscombe')
unpolar_miev = np.array([3.77446E-01,3.13213E-01,1.92141E-01,1.20663E-01])
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 14: m=1.500-1.000j, Mie size parameter = 1.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.5f %9.5f %9.5f %9.5f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
Explanation: Comparison to Wiscombe's Mie Program
Wiscombe normalizes as
$$
\int_{4\pi} p(\theta,\phi) \,d\Omega = \pi x^2 Q_{sca}
$$
where $p(\theta)$ is the scattered light.
Once corrected for differences in phase function normalization, Wiscombe's test cases match those from miepython exactly.
Wiscombe's Test Case 14
End of explanation
MIEV0 Test Case 10: Refractive index: real 1.330 imag -1.000E-05, Mie size parameter = 100.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.25330E+03 -1.24319E+02 5.25330E+03 -1.24319E+02 2.76126E+07 0.0000
30.00 0.866025 -5.53457E+01 -2.97188E+01 -8.46720E+01 -1.99947E+01 5.75775E+03 0.3146
60.00 0.500000 1.71049E+01 -1.52010E+01 3.31076E+01 -2.70979E+00 8.13553E+02 0.3563
90.00 0.000000 -3.65576E+00 8.76986E+00 -6.55051E+00 -4.67537E+00 7.75217E+01 -0.1645
x=100.0
m=1.33-1e-5j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu,'wiscombe')
unpolar_miev = np.array([2.76126E+07,5.75775E+03,8.13553E+02,7.75217E+01])
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 10: m=1.330-0.00001j, Mie size parameter = 100.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.0f %9.0f %9.0f %9.0f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.0f %9.0f %9.0f %9.0f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
Explanation: Wiscombe's Test Case 10
End of explanation
MIEV0 Test Case 7: Refractive index: real 0.750 imag 0.000E+00, Mie size parameter = 10.000
Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn
0.00 1.000000 5.58066E+01 -9.75810E+00 5.58066E+01 -9.75810E+00 3.20960E+03 0.0000
30.00 0.866025 -7.67288E+00 1.08732E+01 -1.09292E+01 9.62967E+00 1.94639E+02 0.0901
60.00 0.500000 3.58789E+00 -1.75618E+00 3.42741E+00 8.08269E-02 1.38554E+01 -0.1517
90.00 0.000000 -1.78590E+00 -5.23283E-02 -5.14875E-01 -7.02729E-01 1.97556E+00 -0.6158
x=10.0
m=0.75
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
albedo = qsca/qext
unpolar = miepython.i_unpolarized(m,x,mu,'wiscombe')
unpolar_miev = np.array([3.20960E+03,1.94639E+02,1.38554E+01,1.97556E+00])
ratio = unpolar_miev/unpolar
print("MIEV0 Test Case 7: m=0.75, Mie size parameter = 10.000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("MIEV0 %9.2f %9.2f %9.2f %9.2f"%(unpolar_miev[0],unpolar_miev[1],unpolar_miev[2],unpolar_miev[3]))
print("miepython %9.2f %9.2f %9.2f %9.2f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
Explanation: Wiscombe's Test Case 7
End of explanation
BHMie Test Case 14, Refractive index = 1.5000-1.0000j, Size parameter = 1.0000
Angle Cosine S1 S2
0.00 1.0000 -8.38663e-01 -8.64763e-01 -8.38663e-01 -8.64763e-01
0.52 0.8660 -8.19225e-01 -8.61719e-01 -7.21779e-01 -7.27856e-01
1.05 0.5000 -7.68157e-01 -8.53697e-01 -4.19454e-01 -3.72965e-01
1.57 0.0000 -7.03034e-01 -8.43425e-01 -4.44461e-02 6.94424e-02
x=1.0
m=1.5-1j
mu=np.cos(np.linspace(0,90,4) * np.pi/180)
qext, qsca, qback, g = miepython.mie(m,x)
unpolar = miepython.i_unpolarized(m,x,mu,norm='bohren')
s1_bh = np.empty(4,dtype=complex)
s1_bh[0] = -8.38663e-01 - 8.64763e-01*1j
s1_bh[1] = -8.19225e-01 - 8.61719e-01*1j
s1_bh[2] = -7.68157e-01 - 8.53697e-01*1j
s1_bh[3] = -7.03034e-01 - 8.43425e-01*1j
s2_bh = np.empty(4,dtype=complex)
s2_bh[0] = -8.38663e-01 - 8.64763e-01*1j
s2_bh[1] = -7.21779e-01 - 7.27856e-01*1j
s2_bh[2] = -4.19454e-01 - 3.72965e-01*1j
s2_bh[3] = -4.44461e-02 + 6.94424e-02*1j
# BHMie seems to normalize their intensities to 4 * pi * x**2 * Qsca
unpolar_bh = (abs(s1_bh)**2+abs(s2_bh)**2)/2
ratio = unpolar_bh/unpolar
print("BHMie Test Case 14: m=1.5000-1.0000j, Size parameter = 1.0000")
print()
print(" %9.1f°%9.1f°%9.1f°%9.1f°"%(0,30,60,90))
print("BHMIE %9.5f %9.5f %9.5f %9.5f"%(unpolar_bh[0],unpolar_bh[1],unpolar_bh[2],unpolar_bh[3]))
print("miepython %9.5f %9.5f %9.5f %9.5f"%(unpolar[0],unpolar[1],unpolar[2],unpolar[3]))
print("ratio %9.5f %9.5f %9.5f %9.5f"%(ratio[0],ratio[1],ratio[2],ratio[3]))
print()
print("Note that this test is identical to MIEV0 Test Case 14 above.")
print()
print("Wiscombe's code is much more robust than Bohren's so I attribute errors all to Bohren")
Explanation: Comparison to Bohren & Huffman's Mie Program
Bohren & Huffman normalizes as
$$
\int_{4\pi} p(\theta,\phi) \,d\Omega = 4 \pi x^2 Q_{sca}
$$
Bohren & Huffman's Test Case 14
End of explanation
x=3
m=1.33-1e-8j
theta = np.linspace(0,180,181)
mu = np.cos(theta*np.pi/180)
scaling_factor = 1/4
iper = scaling_factor*miepython.i_per(m,x,mu,'bohren')
ipar = scaling_factor*miepython.i_par(m,x,mu,'bohren')
P = (iper-ipar)/(iper+ipar)
plt.subplots(2,1,figsize=(8,8))
plt.subplot(2,1,1)
plt.semilogy(theta,ipar,label='$i_{par}$')
plt.semilogy(theta,iper,label='$i_{per}$')
plt.xlim(0,180)
plt.xticks(range(0,181,30))
plt.ylabel('i$_{par}$ and i$_{per}$')
plt.legend()
plt.grid(True)
plt.title('Figure 4.9 from Bohren & Huffman')
plt.subplot(2,1,2)
plt.plot(theta,P)
plt.ylim(-1,1)
plt.xticks(range(0,181,30))
plt.xlim(0,180)
plt.ylabel('Polarization')
plt.plot([0,180],[0,0],':k')
plt.xlabel('Angle (Degrees)')
plt.show()
Explanation: Bohren & Huffman, water droplets
Tiny water droplet (0.26 microns) in clouds has pretty strong forward scattering! A graph of this is figure 4.9 in Bohren and Huffman's Absorption and Scattering of Light by Small Particles.
A bizarre scaling factor of $1/4$ is needed to make the miepython results match those in the figure 4.9.
End of explanation |
13,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyWCPS
The problem
Step1: The following examples are from the Domain Examples Notebook of GeoPython 2017 (mine is at
Step2: The mechanics of the above should be fairly obvious. The user needs to define a function and decorate it with wcps. The decorator performs some magic so that WCPS code can be obtained from the function definition. The generated code can always be inspected, by the way
Step3: Another simple example, from the same notebook
Step4: That would be the most direct translation, but we can begin to see the advantages of an EDSL when we realize that we can use plain Python to factor away the repetitive stuff. Doing away with repetition increases readability and shows the intent of the query more clearly.
We can also easily parameterize queries as normal Python functions, and integrate them into other Python code...
Step5: Finally, we can write this crime of a query
Step6: which looks kinda cute. Notice that the helper functions do not need to be inner functions. They could just as well be outside for reuse, if needed. For example, it would be entirely possible to write a function that automatically generates the switch statement for a colorscale like the above from a list of value cuts, which would make the above function even shorter.
Before you ask, I got the coverage constructor, too... I called it "New"
Step7: Now, if you buy it...
The Plan
Writing an EDSL for a language that I do not know that well can be a bit risky,
so I prefer to start small. What I am more interested now is in getting the
structure right before adding a whole lot of functions.
For example, I have just implemented the functions and operators that I needed
for the queries above (avg, count, +, *). I have left the others out
intentionally because I first need to make sure the structure is solid.
That is where I would like to get feedback/help | Python Code:
!pip install astunparse
import matplotlib.pyplot as plt
%matplotlib inline
# Imports will be simplified as API stabilizes... by now let's eval this...
from pywcps.dsl import *
from pywcps.ast_rewrite import wcps
from pywcps.wcps_client import WCPSClient, emit_fun
# This is a client helper object
# eo = WCPSClient('http://earthserver.pml.ac.uk/rasdaman/ows/wcps')
icgc = WCPSClient('http://rasdaman.icgc.local:8080/rasdaman/ows/wcps')
#(minx, maxx, miny, maxy) = (381030,464790,4557753,4611056)
(minx, maxx, miny, maxy) = (424894, 425342, 4596544, 4596926)
@wcps
def ofcosta(minx, miny, maxx, maxy):
return For(c='BeamIrradTest')[
encode(cast('float',
clip(c[axis('E', minx, maxx),
axis('N', miny, maxy),
axis('ansi', '2018-01-01T15:00:00.000Z')],
'POLYGON((424894.0 4596544.0, 424894.0 4596800.0, 425150.0 4596544.0, 424894.0 4596544.0))'))
,
'netcdf')
]
icgc.save_to(ofcosta, r'e:\public\prova.nc', minx=minx, miny=miny, maxx=minx + 0.5*512, maxy=miny + 0.5*512)
emit_fun(ofcosta, minx=minx, miny=miny, maxx=minx + 0.5*512, maxy=miny + 0.5*512)
import gdal
from gdalconst import *
dataset = gdal.Open( r'e:\public\prova.nc', GA_ReadOnly )
dataset
arr = dataset.GetRasterBand(1).ReadAsArray()
import matplotlib.pyplot as plt
plt.imshow(arr)
plt.show()
plt.show()
Explanation: PyWCPS
The problem: accessing WCPS from Python often means generating a string with the
query source code and then sending it to the server endpoint via REST or POST. For
non trivial queries, generating the string from Python code can be cumbersome
because of:
Lack of tooling support (editor indentation, paren matching, etc...).
The code is mostly generated with ad-hoc interpolation and concatenation of
strings, which is weak in terms of abstraction and composability.
WCPS code gets obscured since it is intertwined with Python code.
These problems can be greatly reduced by designing an Embedded Domain Specific
Language (EDSL). The EDSL design pattern aims to map the syntax of the embedded
language (here the WCPS language) into the syntax of a host language (here
Python). This allows for automatically reusing the abstraction capabilities
and tooling of the host language.
Below is a list of real examples showing what we can achieve with this
approach. But before we start, better eval this for setup:
End of explanation
@wcps
def test_cloro():
return For(c="CCI_V2_monthly_chlor_a_rmsd")[
encode(cast('float',
count(c[ansi("2010-01-31T23:59:00")] < 0.201)),
"csv")]
float(eo.get_str(test_cloro))
Explanation: The following examples are from the Domain Examples Notebook of GeoPython 2017 (mine is at: https://jupyter.eofrom.space/user/jarnaldich/notebooks/jupyter_notebooks/geopython_workshop_2017/Domain_examples.ipynb)
This query:
```
for d in (CCI_V2_monthly_chlor_a_rmsd)
return
encode((float)
count(d[ ansi("2010-01-31T23:59:00")] < 0.2 )
, "csv"
)
```
Is equivalent to the following Python snippet:
End of explanation
emit_fun(test_cloro)
Explanation: The mechanics of the above should be fairly obvious. The user needs to define a function and decorate it with wcps. The decorator performs some magic so that WCPS code can be obtained from the function definition. The generated code can always be inspected, by the way:
End of explanation
@wcps
def test_cloro2():
return For(c="CCI_V2_release_chlor_a",
d="CCI_V2_monthly_chlor_a_rmsd")[
encode(
cast('float',
avg(c[axis('Long',0,10),
axis('Lat', 45,55),
axis('ansi', '2010-01-31T23:59:00')] *
(d[axis('Long',0,10),
axis('Lat', 45,55),
axis('ansi', '2010-01-31T23:59:00')] < 0.45))
), "csv")]
float(eo.get_str(test_cloro2))
Explanation: Another simple example, from the same notebook:
```
for c in ( CCI_V2_release_chlor_a ), d in (CCI_V2_monthly_chlor_a_rmsd)
return
encode((float)
avg(
c[Long(0:10), Lat(45:55), ansi("2010-01-31T23:59:00")] *
(d[Long(0:10), Lat(45:55), ansi("2010-01-31T23:59:00")] < 0.1 )
), "csv"
)
```
Translates to:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
# We can parameterize the threshold
@wcps
def test_cloro2(threshold):
# We can define the slice here...
slice = (axis('Long',0,10),
axis('Lat', 45,55),
axis('ansi', '2010-01-31T23:59:00'))
# And the query looks like...
return For(c="CCI_V2_release_chlor_a",
d="CCI_V2_monthly_chlor_a_rmsd")[
encode(
cast('float',
avg(c[slice] * (d[slice] < threshold))
), "csv")]
plt.plot([float(eo.get_str(test_cloro2, float(x)/100)) for x in range(35,41)])
Explanation: That would be the most direct translation, but we can begin to see the advantages of an EDSL when we realize that we can use plain Python to factor away the repetitive stuff. Doing away with repetition increases readability and shows the intent of the query more clearly.
We can also easily parameterize queries as normal Python functions, and integrate them into other Python code...
End of explanation
@wcps
def test_colortable():
def less_than(cov, x):
return cov[axis('Lat', 30,70),
axis('Long', -30,10),
axis('ansi', "2009-09-30T23:59:00Z")] < x
def rgba(r,g,b,a):
return struct(red=r, green=g, blue=b, alpha=a)
return For(a="CCI_V2_monthly_chlor_a")[
encode(
switch(
case(less_than(a, 0.05), rgba(255, 255, 255, 0)),
case(less_than(a, 0.1), rgba( 0, 255, 255, 255)),
case(less_than(a, 0.2), rgba( 0, 128, 255, 255)),
case(less_than(a, 0.5), rgba( 0, 0, 255, 255)),
case(less_than(a, 1.5), rgba(218, 0, 255, 255)),
case(less_than(a, 3.0), rgba(255, 0, 255, 255)),
case(less_than(a, 4.5), rgba(255, 164, 0, 255)),
case(less_than(a, 6.2), rgba(255, 250, 0, 255)),
case(less_than(a, 20), rgba(255, 0, 0, 255)),
default(rgba(255, 255, 255, 0))), "png")]
eo.ipython_image(test_colortable, { 'width': 400 })
Explanation: Finally, we can write this crime of a query:
for a in (CCI_V2_monthly_chlor_a) return encode (switch case 0.05 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 255; green: 255; blue: 255; alpha: 0} case 0.1 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 0; green: 255; blue: 255; alpha: 255} case 0.2 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 0; green: 128; blue: 255; alpha: 255} case 0.5 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 0; green: 0; blue: 255; alpha: 255} case 1.5 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 218; green: 0; blue: 255; alpha: 255} case 3.0 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 255; green: 0; blue: 255; alpha: 255} case 4.5 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 255; green: 164; blue: 0; alpha: 255} case 6.2 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 255; green: 250; blue: 0; alpha: 255} case 20 > a[Lat(30:70),Long(-30:10),ansi("2009-09-30T23:59:00Z")] return {red: 255; green: 0; blue: 0; alpha: 255} default return {red: 255; green: 255; blue:255; alpha: 0} ,"png")
as:
End of explanation
@wcps
def test_coverage_constructor():
l = 100000
def roi(cov, time):
return cov[axis('Long', -50, -40),
axis('Lat', 45,55),
axis('ansi', time, crs="CRS:1")]
def term(cov, time):
return (add((roi(cov, time) < l) * roi(cov, time))
/
count(roi(cov, time) < l))
return For(c="CCI_V2_release_daily_chlor_a")[
encode(cast('float',
New('histogram',
px=axis('x', 0, 0),
py=axis('y', 0, 0),
pt=axis('t', 0, 360))[
term(c, pt) #+ term(c, pt+1) + term(c, pt+2)
]), "csv")]
res_txt = eo.get_str(test_coverage_constructor)
plt.plot([float(x) for x in res_txt[2:-2].split(",")])
Explanation: which looks kinda cute. Notice that the helper functions do not need to be inner functions. They could just as well be outside for reuse, if needed. For example, it would be entirely possible to write a function that automatically generates the switch statement for a colorscale like the above from a list of value cuts, which would make the above function even shorter.
Before you ask, I got the coverage constructor, too... I called it "New"
End of explanation
def scope_rules_in_python_are_demented():
for i in range(10):
z = 2
print(i, z)
scope_rules_in_python_are_demented()
Explanation: Now, if you buy it...
The Plan
Writing an EDSL for a language that I do not know that well can be a bit risky,
so I prefer to start small. What I am most interested in now is getting the
structure right before adding a whole lot of functions.
For example, I have just implemented the functions and operators that I needed
for the queries above (avg, count, +, *). I have left the others out
intentionally because I first need to make sure the structure is solid.
That is where I would like to get feedback/help:
Can you think of a language construct I have left out? I'd be especially
interested in any query that has an essentially different structure from the
ones I have already implemented... BTW
I know I have left out the literal coverage constructor and the condense expression, do you have any examples I can use to test the thing?
I have seen in the documentation that one can write for example a
convolution. Any complete example on that?
Can we have sub-coverages like (for c in (coverage Potato over ...)) (I mean, using a coverage constructor where a Coverage ID would normally be expected)? Any examples of that?
Any example of planetary science queries?
Any suggestions on the syntax? Any suggestions on how to proceed? Eg. I can try and start translating into this syntax whatever code you want... let's make this a challenge?
Bear in mind that errors and quirks may appear (e.g. if operator precedence in Python is different than in WCPS)... better to double-check the results at the beginning.
If you are interested in how I am doing it now, keep reading... Full documentation should be ready as API gets more stable.
If this goes on, in the future some cool things are technically possible:
Checking for Type Errors.
Pretty-printing of the generated code for inspection.
Better error reporting. For now we rely on the error reporting of the server.
Checking for correctness for a particular server (that coverage names exist, that dimension labels are OK, etc...)
Design Principles
The goal should be to make it possible to express WCPS Language queries as
Python constructs. All WCPS functionality should be available.
When designing a DSL, some compromises have to be made. One can imagine trying
to make the surface syntax as close to the embedded language (WCPS) as possible,
at the expense of forcing the syntax of the host language (being un-Pythonic),
or to ignore the syntax of the host language altogether and just provide a
library for the host, at the expense of making this library counter-intuitive for
the users of the embedded language.
Designing an EDSL often involves a non-negligible amount of magic (on-the-fly syntax
manipulation, run-time code generation, introspection, etc...). These tools,
when not used with care, can lead to unexpected and unintuitive behaviours for
the programmer.
This project will try to find a sweet spot where the Python code can still be
understandable as WCPS without forcing Python too much. To do so we will try
to follow one basic design principle: the EDSL will be an expression language.
Remember that expressions in Python (and other languages) are any constructs
that have a value, can be assigned to a variable, passed to a function, etc...
Contrast this with statements, that are there for side-effects.
Examples of expressions: 1+1, sin(x). Examples of non-expressions: a=1+1,
ifs, fors, defs...
I think expressions are especially suitable in this case because WCPS language is
essentially an expression language (e.g. queries are essentially more or less
big expressions that return a result).
Expressions have the nice property that they are composable: bigger
expressions can be built from the combination of smaller ones, which is good
for reuse and abstraction.
Furthermore, expressions map very naturally into Python syntax. The only
exceptions are the binding forms in WCPS, that is, those that "declare"
variables, namely the for (which binds coverage variables) and the
coverage constructors (which declare index variables). The only binding forms in Python are function and class declarations, which are not expressions, so a little AST manipulation is needed to bridge that
gap. Hopefully this will not result in any quirks when writing the code if the
coder assumes the basic design principle: that the whole EDSL is made of
expressions.
For binding forms, I came up with the following syntax:
For(var=value ...)[ <body where var can be used> ]
For Python, this is just a call followed by an indexing operator, both of which are expressions... The decorator will walk the AST of the function code in search of those expressions, and will transform them into automatically generated Python functions, but hopefully you do not need to think about that. Just bear in mind that the vars are only available between the square brackets. This mirrors WCPS behaviour, but is un-Pythonic, since in Python variables have function (or class) scope (e.g. any variable used in a for loop is available throughout the function, not just inside the for loop).
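To make the mechanics a bit more concrete, here is a minimal, purely illustrative sketch (not the actual implementation; the real decorator rewrites the AST so that the bound names are usable directly between the brackets, whereas this toy version just takes the body as a string):
```python
# Toy stand-in only: shows why For(var=value)[body] is legal Python.
# For(...) is a normal call; the square brackets then invoke __getitem__
# on the returned object.
class For(object):
    def __init__(self, **bindings):
        self.bindings = bindings  # e.g. {'c': 'CCI_V2_release_chlor_a'}

    def __getitem__(self, body):
        header = ', '.join('%s in (%s)' % (var, cov)
                           for var, cov in sorted(self.bindings.items()))
        return 'for %s return %s' % (header, body)

print(For(c='CCI_V2_release_chlor_a')['encode(avg(c), "csv")'])
# -> for c in (CCI_V2_release_chlor_a) return encode(avg(c), "csv")
```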
This function illustrates Python's function-level scoping: both i and z are available inside the whole function:
End of explanation |
13,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process regression tutorial 1
Step1: Problem 1
Step2: Generate a set of $50$ one-dimensional inputs regularly spaced between -5 and 5 and store them in a variable called x, then compute the covariance matrix for these inputs, for $A=\Gamma=1$, store the results in a variable called K, and display it using matplotlib's imshow function.
Step3: Problem 1b
Step4: Now draw 5 samples from the distribution and plot them.
Hint
Step5: Problem 1c
Step6: Execute the cell below to define a handful of observations
Step7: Evaluate and plot the mean and 95% confidence interval of the resulting posterior distribution, as well as a few samples, for a squared exponential GP with $A=\Gamma=1$, assuming the measurement uncertainty on each observation was 0.1
Step8: Some things to note
Step9: Try evaluating the likelihood of the model given the observations you defined in problem 1 by executing the cell below. Hopefully it will run without errors...
Step10: Now try changing the covariance parameters and the observational uncertainties, and see how that affects the likelihood. Does it behave as you would expect, given the way these parameters affected the predictive distribution?
Making $A$ too big or too small decreases the likelihood (increases the NLL), as does making $\Gamma$ too small. This is as one would expect - either the model becomes an obviously bad match, or the model uncertainty becomes huge (and that is penalised by the determinant term).
On the other hand, for such a sparse dataset, the likelihood asymptotes to a constant as one increases $\Gamma$ (decreases the length scale)
Step11: Plot the data and the predictive distribution and samples for the best-fit hyper-parameters
Step12: That may not have worked quite as well as you might have liked -- it's normal
Step13: Problem 3a
Step14: Problem 3b
Step15: Now you are ready to fit for all the hyper-parameters simultaneously
Step16: NB
Step17: NB
Step18: Now try fitting the data using the LinearMean mean function and the M32Kernel covariance function.
Step19: How does the best fit likelihood compare to what you obtained using the SEKernel? Which kernel would you adopt if you had to choose between the two? Write your answer in the cell below.
The maximum log likelihood in the case of the SEKernel was -93.25, compared to -89.55 for the M32Kernel, so the SEKernel is preferred, as one would expect, though the difference is not very large as far as these things go.
Problem 4b
Step20: Now evaluate the BIC in each case. Which model is preferred?
Step21: Thus the model with a non-zero mean function is strongly preferred (BIC differences $> 10$ are generally considered to represent very strong support for one model over the other).
How different would the predictive distributions and samples be? Try plotting them in each case. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from numpy.random import multivariate_normal
from numpy.linalg import inv
from numpy.linalg import slogdet
from scipy.optimize import fmin
Explanation: Gaussian Process regresstion tutorial 1:
Introduction
In this tutorial we will create a very basic, native python GP regression code and apply it to a very simple simulated example dataset.
By S Aigrain (University of Oxford)
End of explanation
def SEKernel(par, x1, x2):
A, Gamma = par
D2 = cdist(# complete
return # complete
Explanation: Problem 1: A probability distribution over functions
We saw in the lectures that a Gaussian Process enables us to set up a probability distribution over functions, but that probably sounds a little abstract. This problem aims to give you more of a feel for what that means, in practice.
You will start by defining a covariance function and using it to generate a covariance matrix. You will draw samples from the GP prior (i.e. draws from the probability distribution over functions, evaluated at a finite number of input locations), and then learn how to condition the prior on some data, and to draw samples from the resulting predictive distribution. Finally, you will explore the effect of altering the covariance matrix, specifically changing the hyper-parameters (the parameters of the covariance function) and the observational uncertainties.
Problem 1a: Your first covariance function
First we need to write a function that will generate a covariance matrix for a given covariance function, or kernel. Let's start with the squared exponential kernel, which is one of the simplest and most widely used.
$$
k_{\rm SE}(x,x') = A \exp \left[ -\Gamma (x-x')^2 \right]
$$
where $A$ is the variance and $\Gamma$ the inverse length scale. This kernel gives rise to smoothly varying, infinitely differentiable functions.
Define a function SEKernel that computes the covariance matrix for the above kernel function. The function should take three mandatory arguments: an array containing the hyper-parameters of the covariance function ($A$ and $\Gamma$ in the equation above), and two arrays of input values. The function should return a 2-D array with shape $(N, M)$, where $N$ and $M$ are the numbers of inputs in each of the input arrays.
Hint: You may find the function cdist from the module scipy.spatial.distance useful, but note it expects input arrays of shape $(N,D)$, where $D$ is the number of dimensions of the inputs.
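For reference, one possible completion looks like the sketch below (one way of doing it, not necessarily the intended solution; the reshape is only needed because the inputs in this tutorial are one-dimensional):
```python
import numpy as np
from scipy.spatial.distance import cdist

def SEKernel_sketch(par, x1, x2):
    A, Gamma = par
    # cdist wants (N, D) arrays, so reshape the 1-D inputs to (N, 1)
    D2 = cdist(np.reshape(x1, (-1, 1)), np.reshape(x2, (-1, 1)),
               metric='sqeuclidean')  # pairwise squared distances, shape (N, M)
    return A * np.exp(-Gamma * D2)
```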
End of explanation
x = np.linspace(# complete
K = # complete
plt.imshow(K,interpolation='none');
Explanation: Generate a set of $50$ one-dimensional inputs regularly spaced between -5 and 5 and store them in a variable called x, then compute the covariance matrix for these inputs, for $A=\Gamma=1$, store the results in a variable called K, and display it using matplotlib's imshow function.
End of explanation
m = # complete
sig = # complete
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Prior distribution');
Explanation: Problem 1b: The prior distribution: mean, confidence intervals and samples
The joint prior distribution over the outputs $\mathbf{y}$, evaluated at a given collection of inputs $\mathbf{x}$, is a multi-variate Gaussian distribution with zero mean vector and covariance matrix $K$:
$$
p(\mathbf{y}\,|\,\mathbf{x})=\mathcal{N}(\mathbf{0},K).
$$
(NB: we will consider non-zero mean functions later.)
Plot the mean and 95% confidence interval of this distribution for the x and K evaluated in the previous cell.
Hint: the variance $\sigma^2$ of the distribution is given by the diagonal elements of the covariance matrix, and the 95% confidence interval is bounded by the mean plus or minus 2 $\sigma$.
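One way to think about it, as a sketch (assuming np, x and K from the previous cells are in scope):
```python
# Prior mean is zero everywhere; the pointwise variance is the diagonal of K.
m_prior = np.zeros(len(x))
sig_prior = np.sqrt(np.diag(K))
upper, lower = m_prior + 2 * sig_prior, m_prior - 2 * sig_prior  # 95% interval
```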
End of explanation
samples = multivariate_normal(# complete
plt.plot(x,samples.T)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Samples from prior distribution');
Explanation: Now draw 5 samples from the distribution and plot them.
Hint: You may find the function multivariate_normal from the module numpy.random useful.
End of explanation
def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
# evaluate the covariance matrix for pairs of observed inputs
K = # complete
# add white noise
K += np.identity(# complete
# evaluate the covariance matrix for pairs of test inputs
Kss = # complete
# evaluate the cross-term
Ks = # complete
# invert K
Ki = inv(K)
# evaluate the predictive mean
m = np.dot(# complete
# evaluate the covariance
cov = # complete
return m, cov
Explanation: Problem 1c: The predictive distribution
If we have some observations $(\boldsymbol{x},\boldsymbol{y})$, only a subset of the functions included in our prior distribution will be compatible with them. As we saw in the lecture, the posterior distribution (also called the conditional or predictive distribution) for the test outputs $\boldsymbol{y}_*$, evaluated at test inputs $\boldsymbol{x}_*$, given the observations, is a multivariate Gaussian:
$$
p(\boldsymbol{y}_* \, | \, \boldsymbol{x}_*, \boldsymbol{x}, \boldsymbol{y}) = \mathcal{N}(\overline{\boldsymbol{y}}_*, \mathrm{cov}(\boldsymbol{y}_*))
$$
with mean
$$
\overline{\boldsymbol{y}}_* = K(\boldsymbol{x}_*,\boldsymbol{x}) \, K(\boldsymbol{x},\boldsymbol{x})^{-1} \, \boldsymbol{y}
$$
and covariance
$$
\mathrm{cov}(\boldsymbol{y}_*) = K(\boldsymbol{x}_*,\boldsymbol{x}_*) - K(\boldsymbol{x}_*,\boldsymbol{x}) \, K(\boldsymbol{x},\boldsymbol{x})^{-1} \, K(\boldsymbol{x},\boldsymbol{x}_*),
$$
where $K(\boldsymbol{x},\boldsymbol{x'})_{ij} = k(x_i,x'_j)$.
If the observations are noisy, the white noise variance should be added to the diagonal of the covariance (for pairs of observed data points only).
Complete the definition of the function Predict below. This function computes and returns the mean and covariance of the predictive distribution for a given covariance function, with associated parameters, a given set of observations $(\mathbf{x},\mathbf{y},\mathbf{\sigma})$, where $\mathbf{\sigma}$ are the uncertainties associated with each observation, at a given set of test inputs $\mathbf{x}_*$.
Hint: Use numpy's dot function to do matrix multiplication of two numpy arrays, or convert your arrays to numpy matrix objects before multiplying them together.
Hint: Use the inv function from numpy.linalg to invert the covariance matrix of the observations. It's not particularly stable or efficient, but code optimisation is not the point of this tutorial. In tutorial 2 we will look at some ready-made GP packages, which use much more optimized matrix inversion techniques.
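The heart of the computation is just a few matrix products; here is an illustrative sketch with toy matrices (shapes only, the numbers are arbitrary):
```python
import numpy as np

rng = np.random.RandomState(42)
y_obs = rng.randn(3)                      # 3 observed outputs
K_obs = np.eye(3) + 0.1                   # K(x,x) plus white noise (toy values)
K_cross = 0.1 * rng.rand(5, 3)            # K(x*,x) for 5 test inputs
K_test = np.eye(5)                        # K(x*,x*)

Ki = np.linalg.inv(K_obs)
m_pred = K_cross.dot(Ki).dot(y_obs)                  # predictive mean, shape (5,)
cov_pred = K_test - K_cross.dot(Ki).dot(K_cross.T)   # predictive covariance, (5, 5)
```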
End of explanation
xobs = np.array([-4,-2,0,1,2])
yobs = np.array([1.0,-1.0, -1.0, 0.7, 0.0])
Explanation: Execute the cell below to define a handful of observations
End of explanation
eobs = 0.1
m,C=Pred_GP(# complete
sig = # complete
samples = multivariate_normal(# complete
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.plot(x,samples.T,alpha=0.5)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Predictive distribution');
Explanation: Evaluate and plot the mean and 95% confidence interval of the resulting posterior distribution, as well as a few samples, for a squared exponential GP with $A=\Gamma=1$, assuming the measurement uncertainty on each observation was 0.1
End of explanation
def NLL_GP(p,CovFunc,x,y,e):
# Evaluate the covariance matrix
K = # complete
# Add the white noise term
K += # complete
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = # complete
term2 = # complete
term3 = # complete
# return the total
return term1 + term2 + term3
Explanation: Some things to note:
- while the prior distribution is stationary (the mean and variance are constant over the input range) the predictive distribution is not;
- far away from observations, the predictive distribution returns to the prior
- where observations are close together compared to the length scale of the GP, the predictive ability is good: the 95% confidence interval is narrow and the samples from the predictive distribution all behave similarly
- the predictive mean doesn't have the same behaviour as the samples from the predictive distribution
Problem 1d: Changing the hyper-parameters
Try changing the covariance function parameters $A$ and $\Gamma$ in the last cell, and then running it again. Make some notes on what happens in the cell below.
Provide your answer to Problem 1d here
So how do we know what values of the covariance paramters to use? To fit for them, we to evaluate the likelihood of the model. This process is known as training the GP, and is the subject of problem 2.
Problem 1e: Changing the uncertainties
Try changing the observational uncertainties by altering the value of eobs in the last code cell and then running it again. Also try variable uncertainties (heteroskedastic noise) by assigning an array of $N$ values to eobs. Make some notes on what happens as you change the errors in the cell below.
Hint: If you set the uncertainties to zero, you'll get an error, because the covariance matrix becomes ill-conditioned. Try setting them to a small non-zero value (e.g. $10^{-3}$) instead.
Provide your answer to Problem 1e here
Problem 2: Training the GP
In this problem we will learn how to compute the likelihood of a set of observations given a model (i.e. given a covariance function and parameters thereof) and how to optimize it relative to the hyper-parameters.
Problem 2a: The likelihood function
Under a GP model, the likelihood is simply a multivariate Gaussian with mean vector $\mathbf{m}$ and covariance matrix $K$:
$$
p(\mathbf{y} \, | \, \mathbf{m}, K) = \mathcal{N}(\mathbf{m},K).
$$
Assuming the mean function is zero everywhere for now (we will add non-trivial mean functions later), the negative log likelihood is then:
$$
\mathrm{NLL} = - \log p(\mathbf{y}\,|\,\mathbf{m},K) = \frac{1}{2} \mathbf{y}^{\mathrm{T}} K^{-1} \mathbf{y} + \frac{1}{2} \log |K| + \frac{N}{2} \log 2 \pi
$$
where $N$ is the number of observations.
As before, any white noise from observational uncertainties must be added to the diagonal elements of the covariance matrix.
Complete the definition of the function NLL_GP below. The function should evaluate the covariance matrix and return the negative log likelihood as given by the above equation. The first argument, p, contains the parameters of the covariance function, whose name is passed in the second argument, CovFunc. The remaining arguments should be self-explanatory.
Hint: As before, use numpy.dot and numpy.linalg.inv to do the matrix algebra. You will also need to evaluate the log of the determinant of the covariance matrix; for this you can use numpy.linalg.slogdet.
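As a sketch of the three terms (assuming r is the data vector, or the residuals once a mean function is introduced, and K already includes the white-noise diagonal):
```python
import numpy as np

def nll_sketch(r, K):
    # 0.5 * r^T K^{-1} r  +  0.5 * log|K|  +  (N/2) * log(2*pi)
    Ki = np.linalg.inv(K)
    term1 = 0.5 * np.dot(r, np.dot(Ki, r))
    sign, logdet = np.linalg.slogdet(K)
    term2 = 0.5 * logdet
    term3 = 0.5 * len(r) * np.log(2 * np.pi)
    return term1 + term2 + term3
```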
End of explanation
print(NLL_GP(# complete
Explanation: Try evaluating the likelihood of the model given the observations you defined in problem 1 by executing the cell below. Hopefully it will run without errors...
End of explanation
p0 = [1.0,1.0]
p1 = fmin(NLL_GP,p0,args=(# complete
print(p1)
Explanation: Now try changing the covariance parameters and the observational uncertainties, and see how that affects the likelihood. Does it behave as you would expect, given the way these parameters affected the predictive distribution?
Making $A$ too big or too small decreases the likelihood (increases the NLL), as does making $\Gamma$ too small. This is as one would expect - either the model becomes an obviously bad match, or the model uncertainty becomes huge (and that is penalised by the determinant term).
On the other hand, for such a sparse dataset, the likelihood asymptotes to a constant as one increases $\Gamma$ (decreases the length scale): essentially, the data doesn't contain information on the behaviour on length scales smaller than the minimum separation between observations.
Problem 2b: Fitting for the covariance parameters
We are now ready to find the best covariance hyper-parameters, i.e. those that minimize the NLL, for a given dataset and covariance function.
Hint: For simplicity we will do this using the fmin function from scipy.optimize. This is just a downhill simplex optimizer, once again it's not particularly stable or efficient, but it should do the trick for now. We will look at better ways of optimising and sampling the likelihood in Tutorial 2.
End of explanation
# You can reuse code from Problem 1c almost exactly here...
Explanation: Plot the data and the predictive distribution and samples for the best-fit hyper-parameters
End of explanation
xobs = np.linspace(-10,10,50)
linear_trend = 0.03 * xobs - 0.3
correlated_noise = multivariate_normal(np.zeros(len(xobs)),SEKernel([0.005,2.0],xobs,xobs),1).flatten()
eobs = 0.01
white_noise = np.random.normal(0,eobs,len(xobs))
yobs = linear_trend + correlated_noise + white_noise
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
Explanation: That may not have worked quite as well as you might have liked -- it's normal: the dataset we used is just too small to constrain the hyper-parameters adequately. In the next problem we will tackle a slightly more realistic dataset.
Problem 3: Modelling data with correlated noise
In this problem we will use a GP to model a simple dataset consisting of a linear trend and correlated noise, plus a small white noise component representing observational uncertainties. Our goal will be to fit for the slope of the linear trend, while accounting for the correlated noise.
In the process, we will learn how to include a non-zero mean function in a GP model, and how to compare different covariance functions.
Execute the cell below to simulate the dataset
End of explanation
def LinearMean(p,x):
return # complete
pm0 = [0.03, -0.3]
m = # complete
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.plot(xobs,m,'r-')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
Explanation: Problem 3a: Including a mean function
This dataset contains a linear trend as well as correlated noise. We want to learn the trend at the same time as the noise, so we need to define a mean function.
Complete the definition of the mean function below, so that it evaluates and returns $f = p[0] x + p[1]$. Check that it works by plotting its output on top of the data above, taking the parameter values from the code that generated the data.
End of explanation
def NLL_GP2(p,CovFunc,x,y,e, MeanFunc=None, nmp = 0):
if MeanFunc:
pc = p[# complete
pm = p[# complete
r = y - # complete
else:
pc = p[:]
r = y[:]
# Evaluate the covariance matrix
K = # complete
# Add the white noise term
K += # complete
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = # complete
term2 = # complete
term3 = # complete
# return the total
return term1 + term2 + term3
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2# complete
Explanation: Problem 3b: Likelihood with a mean function
Evaluating the likelihood of a GP with a non-zero mean function is easy: simply evaluate the mean vector, subtract it from the data, and compute the likelihood as before, but using the residuals rather than the original data.
Modify the likelihood function you defined earlier so that it does this. Check that it runs ok by calling it once on the dataset, using guesses for the values of the parameters.
Hint: use optional keyword arguments so that your likelihood function still works without a mean function. You will also need to tell the likelihood function how many of the parameters belong to the mean function.
End of explanation
p1 = fmin(# complete
print(p1)
Explanation: Now you are ready to fit for all the hyper-parameters simultaneously: those of the covariance function, and those of the mean function.
End of explanation
# Generate test inputs (values at which we ant to evaluate the predictive distribution)
x = np.linspace(# complete
# Evaluate mean function at observed inputs, and compute residuals
mobs = # complete
robs = yobs-mobs
# Evaluate stochastic component at test inputs
m,C = Pred_GP(# complete
# Evaluate mean function at test inputs
m += # complete
sig = # complete
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Maximum likelihood distribution');
Explanation: NB: The fit can be quite sensitive to the initial guesses for the parameters. Therefore, you may find the fit converges to unexpected values unless you start fairly close to the "correct" ones. In the next tutorial, we will look at more robust ways of exploring the parameter space.
Problem 3c: Including the mean function in the predictions
The simplest way to do this is to use the LinearMean function to evaluate the mean vector, and the Pred_GP function we wrote earlier to evaluate the stochastic component, and add the two together.
Plot the data together with the mean and 95% confidence interval of the predictive distribution, using the best-fit hyper-parameters.
Hint: you'll need to generate a new set of test inputs (the values at which you want to evaluate the predictive distribution) as our new dataset spans a wider range than the old one.
End of explanation
def M32Kernel(par, x1, x2):
A, Gamma = par
R = cdist(# complete
return # complete
Explanation: NB: See how the predictive distribution continues the linear trend outside the range of the data. If we had tried to model the same dataset without using a mean function, we might well have got a visually acceptable fit, but the predictive distribution would have returned to the prior rapidly outside the range of the data.
Problem 4: Model comparison
One question that is frequently asked about GPs is how to choose the covariance function. There is no simple answer to this. The best starting point is domain knowledge, things you know about your dataset a priori. But if you are looking for an empirical way to compare different kernels for a given dataset, this is a standard model comparison problem.
Similarly, if you are using GPs as part of a detection problem, where you are asking whether the data contain a particular signal which is represented via the mean function, you might want to compare models with the same covariance function, but with and without a (non-zero) mean function.
In both of the above examples, the number of hyper-parameters can vary between the models one is comparing, so model comparison is not straight forward. Ideally one would evaluate and compare the evidence for different kernels (i.e. the likelihood marginalised over the hyper-parameters), but doing this is outside the scope of the present tutorial. Therefore, here we will use a simple alternative, the Bayesian Information Criterion (BIC).
Problem 4a: Comparing two covariance functions
Consider the dataset we simulated earlier, with the linear trend and the correlated noise. The correlated noise was generated using a squared exponential GP. If we didn't know this, would we be able to distinguish between a squared exponential and a different kind of covariance function?
To test this, let's try a changing the covariance function. We will use the Matern 3/2 kernel:
$$
k_{3/2} (x,x') = A \left[ 1 + \sqrt{3r^2} \right] \exp \left[ - \sqrt{3r^2} \right]
$$
where $r^2=\Gamma(x-x')^2$. I picked this one for two reaons:
- This kernel gives rise to much rougher functions (which can only be differentiated once, compared to the smooth functions generated by the squared exponential kernel, which can be differentiated an infinite number of times), so in principle it should be easy to distinguish between the two kernels.
- The two kernels have the same number of parameters, so comparing the BIC's is equivalent to comparing the (maximum) likelihoods.
Start by defining a function M32Kernel with the same structure and calling sequence as SEKernel, but implementing the Matern 3/2 covariance function.
Hint: use the seuclidean metric for cdist.
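For reference, one possible sketch (this version sidesteps the seuclidean hint and just uses the sqeuclidean metric plus an explicit square root):
```python
import numpy as np
from scipy.spatial.distance import cdist

def M32Kernel_sketch(par, x1, x2):
    A, Gamma = par
    # r^2 = Gamma * (x - x')^2, computed pairwise
    R2 = Gamma * cdist(np.reshape(x1, (-1, 1)), np.reshape(x2, (-1, 1)),
                       metric='sqeuclidean')
    R = np.sqrt(3.0 * R2)
    return A * (1.0 + R) * np.exp(-R)
```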
End of explanation
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2(# complete
p1 = fmin(# complete
print(p1)
print(NLL_GP2(# complete
Explanation: Now try fitting the data using the LinearMean mean function and the M32Kernel covariance function.
End of explanation
# Copy and paste your answer to the previous problem and modify it as needed
Explanation: How does the best fit likelihood compare to what you obtained using the SEKernel? Which kernel would you adopt if you had to choose between the two? Write your answer in the cell below.
The maximum log likelihood in the case of the SEKernel was -93.25, compared to -89.55 for the M32Kernel, so the SEKernel is preferred, as one would expect, though the difference is not very large as far as these things go.
Problem 4b: Mean, or no mean?
Now let us try comparing models with and without non-zero mean function. This time we are comparing models with different numbers of parameters, so we will need to evaluate the BIC rather than simply compare the likelihoods. The BIC is defined as:
$$
\mathrm{BIC} = J \ln(N) - 2\ln(\hat{L}),
$$
where $N$ is the number of observations, $J$ is the number of parameters, and $\hat{L}$ refers to the likelihood maximised with respect to the parameters of the model.
Start by fitting the simulated dataset with a squared exponential kernel, with and without a (non-zero) mean function. Evaluate the maximum likelihood in each case and store it in variables L_mean and L_no_mean, respectively.
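As a sketch of the arithmetic (keeping in mind that the likelihood function written above returns a negative log likelihood, so $-2\ln\hat{L}$ is just twice the minimised NLL):
```python
import numpy as np

def bic_sketch(min_nll, n_obs, n_params):
    # BIC = ln(N) * J - 2 ln(L_hat) = ln(N) * J + 2 * NLL_min
    return np.log(n_obs) * n_params + 2.0 * min_nll
```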
End of explanation
N = len(xobs)
BIC_mean = # complete
print(BIC_mean)
BIC_no_mean = # complete
print(BIC_no_mean)
Explanation: Now evaluate the BIC in each case. Which model is preferred?
End of explanation
# Plot the data
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Model comparison')
# Evaluate and plot the predictive distribution with a mean function
mobs = # complete
robs = yobs-mobs
m,C = Pred_GP(# complete
m += # complete
sig = # complete
plt.plot(x,m,'b-')
plt.fill_between(x,m+2*sig,m-2*sig,color='b',alpha=0.2)
# Now do the same for the model without mean function
m,C = Pred_GP(# complete
sig = # complete
plt.plot(x,m,'r-')
plt.fill_between(x,m+2*sig,m-2*sig,color='r',alpha=0.2)
Explanation: Thus the model with a non-zero mean function is strongly preferred (BIC differences $> 10$ are generally considered to represent very strong support for one model over the other).
How different would the predictive distributions and samples be? Try plotting them in each case.
End of explanation |
13,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem of distribution of epithet docs
Because most epithets do not have many representative documents, I will create another feature table, this time with most of the docs cut out.
Looking at the following, there is a long tail of epithets with few surviving representatives.
Step1: Wikipedia on the long tail
Step2: Make vectorizer
Now when loading documents, drop those belonging to an epithet in the to_drop list
Step3: Transform term matrix into feature table | Python Code:
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_index
import pandas
epithet_frequencies = []
for epithet, _ids in get_epithet_index().items():
epithet_frequencies.append((epithet, len(_ids)))
df = pandas.DataFrame(epithet_frequencies)
df.sort_values(1, ascending=False)
Explanation: Problem of distribution of epithet docs
Because most epithets do not have many representative documents, I will create another feature table, this time with most of the docs cut out.
Looking at the following, there is a long tail of epithets with few surviving representatives.
End of explanation
from scipy import stats
distribution = sorted(list(df[1]), reverse=True)
zscores = stats.zscore(distribution)
list(zip(distribution, zscores))
# Make list of epithets to drop
to_drop = df[0].where(df[1] < 26)
to_drop = [epi for epi in to_drop if not type(epi) is float]
to_drop = set(to_drop)
to_drop
Explanation: Wikipedia on the long tail:
The specific cutoff of what part of a distribution is the "long tail" is often arbitrary, but in some cases may be specified objectively; see segmentation of rank-size distributions.
So I'll do this semi-objectively. I'm going to cut out any documents with a negative standard score (that is, below the mean). Thus, epithets with fewer than 26 (-0.064414235569960288) representative documents I will drop.
See following printout for z-score distribution
End of explanation
import datetime as dt
import os
import time
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_of_author
from cltk.corpus.greek.tlg.parse_tlg_indices import get_id_author
import pandas
from sklearn.externals import joblib
from sklearn.feature_extraction.text import CountVectorizer
def stream_lemmatized_files(corpus_dir):
# return all docs in a dir
user_dir = os.path.expanduser('~/cltk_data/user_data/' + corpus_dir)
files = os.listdir(user_dir)
for file in files:
filepath = os.path.join(user_dir, file)
with open(filepath) as fo:
#TODO rm words less than 3 chars long
yield file[3:-4], fo.read()
t0 = dt.datetime.utcnow()
map_id_author = get_id_author()
df = pandas.DataFrame(columns=['id', 'author', 'text', 'epithet'])
for _id, text in stream_lemmatized_files('tlg_lemmatized_no_accents_no_stops'):
author = map_id_author[_id]
epithet = get_epithet_of_author(_id)
if epithet in to_drop:
continue
df = df.append({'id': _id, 'author': author, 'text': text, 'epithet': epithet}, ignore_index=True)
print(df.shape)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
print('Number of texts:', len(df))
text_list = df['text'].tolist()
# make a list of short texts to drop
# For pres, get distributions of words per doc
short_text_drop_index = [index if len(text) > 500 else None for index, text in enumerate(text_list) ] # ~100 words
t0 = dt.datetime.utcnow()
# TODO: Consider using generator to CV http://stackoverflow.com/a/21600406
# time & size counts, w/ 50 texts:
# 0:01:15 & 202M @ ngram_range=(1, 3), min_df=2, max_features=500
# 0:00:26 & 80M @ ngram_range=(1, 2), analyzer='word', min_df=2, max_features=5000
# 0:00:24 & 81M @ ngram_range=(1, 2), analyzer='word', min_df=2, max_features=50000
# time & size counts, w/ 1823 texts:
# 0:02:18 & 46MB @ ngram_range=(1, 1), analyzer='word', min_df=2, max_features=500000
# 0:2:01 & 47 @ ngram_range=(1, 1), analyzer='word', min_df=2, max_features=1000000
# max features in the lemmatized data set: 551428
max_features = 100000
ngrams = 1
vectorizer = CountVectorizer(ngram_range=(1, ngrams), analyzer='word',
min_df=2, max_features=max_features)
term_document_matrix = vectorizer.fit_transform(text_list) # input is a list of strings, 1 per document
# save matrix
vector_fp = os.path.expanduser('~/cltk_data/user_data/vectorizer_test_features{0}_ngrams{1}.pickle'.format(max_features, ngrams))
joblib.dump(term_document_matrix, vector_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
Explanation: Make vectorizer
Now when loading documents, drop those belonging to an epithet in the to_drop list
End of explanation
# Put BoW vectors into a new df
term_document_matrix = joblib.load(vector_fp) # scipy.sparse.csr.csr_matrix
term_document_matrix.shape
term_document_matrix_array = term_document_matrix.toarray()
dataframe_bow = pandas.DataFrame(term_document_matrix_array, columns=vectorizer.get_feature_names())
ids_list = df['id'].tolist()
len(ids_list)
dataframe_bow.shape
dataframe_bow['id'] = ids_list
authors_list = df['author'].tolist()
dataframe_bow['author'] = authors_list
epithets_list = df['epithet'].tolist()
dataframe_bow['epithet'] = epithets_list
# For pres, give distribution of epithets, including None
dataframe_bow['epithet']
t0 = dt.datetime.utcnow()
# removes 334
#! remove rows whose epithet = None
# note on selecting none in pandas: http://stackoverflow.com/a/24489602
dataframe_bow = dataframe_bow[dataframe_bow.epithet.notnull()]
dataframe_bow.shape
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
t0 = dt.datetime.utcnow()
dataframe_bow.to_csv(os.path.expanduser('~/cltk_data/user_data/tlg_bow.csv'))
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
dataframe_bow.shape
dataframe_bow.head(10)
# write dataframe_bow to disk, for fast reuse while classifying
# 2.3G
fp_df = os.path.expanduser('~/cltk_data/user_data/tlg_bow_df.pickle')
joblib.dump(dataframe_bow, fp_df)
Explanation: Transform term matrix into feature table
End of explanation |
13,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Abstract
Author
Step1: Notebook Provenance
The time of execution and the versions of the software packages used are displayed explicitly.
Step2: Local Path Definitions
To make this notebook interoperable across many machines, locations to the repositories that contain the data used in this notebook are referenced from the environment, set in ~/.bashrc to point to the place where the repositories have been cloned. Assuming the repositories have been git clone'd into the ~/dev folder, the entries in ~/.bashrc should look like
Step3: Data
The Alzheimer's Disease Knowledge Assembly has been precompiled with the following command line script, and will be loaded from this format for improved performance. In general, derived data, such as the gpickle representation of a BEL script, are not saved under version control to ensure that the most up-to-date data is always used.
sh
pybel convert --path "$BMS_BASE/aetionomy/alzheimers.bel" --pickle "$BMS_BASE/aetionomy/alzheimers.gpickle"
The BEL script can also be compiled from inside this notebook with the following python code
Step4: Subgraph Overlaps
Possible definitions of subgraph overlap
Step5: The subgraphs are analyzed for overlap by their shared nodes with pbt.summary.summarize_subgraph_node_overlap. Ultimately, there isn't a huge overlap by node definitions. By using the expansion workflow from before, subgraph distances can be more readily calculated. | Python Code:
import logging
import os
import sys
import time
from collections import Counter, defaultdict
from operator import itemgetter
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
import seaborn as sns
import pybel
import pybel_tools as pbt
from pybel.constants import *
from pybel_tools.visualization import to_jupyter
from pybel_tools.utils import barh, barv
#%config InlineBackend.figure_format = 'svg'
%matplotlib inline
Explanation: Abstract
Author: Charles Tapley Hoyt
Estimated Run Time: 2 minutes
This notebook explores methods of comparing subgraphs and identifying meaningful overlaps between them.
Notebook Setup
End of explanation
time.asctime()
pybel.__version__
pbt.__version__
Explanation: Notebook Provenance
The time of execution and the versions of the software packages used are displayed explicitly.
End of explanation
bms_base = os.environ['BMS_BASE']
Explanation: Local Path Definitions
To make this notebook interoperable across many machines, locations to the repositories that contain the data used in this notebook are referenced from the environment, set in ~/.bashrc to point to the place where the repositories have been cloned. Assuming the repositories have been git clone'd into the ~/dev folder, the entries in ~/.bashrc should look like:
bash
...
export BMS_BASE=~/dev/bms
...
BMS
The biological model store (BMS) is the internal Fraunhofer SCAI repository for keeping BEL models under version control. It can be downloaded from https://tor-2.scai.fraunhofer.de/gf/project/bms/
End of explanation
pickle_path = os.path.join(bms_base, 'aetionomy', 'alzheimers', 'alzheimers.gpickle')
graph = pybel.from_pickle(pickle_path)
graph.version
Explanation: Data
The Alzheimer's Disease Knowledge Assembly has been precompiled with the following command line script, and will be loaded from this format for improved performance. In general, derived data, such as the gpickle representation of a BEL script, are not saved under version control to ensure that the most up-to-date data is always used.
sh
pybel convert --path "$BMS_BASE/aetionomy/alzheimers.bel" --pickle "$BMS_BASE/aetionomy/alzheimers.gpickle"
The BEL script can also be compiled from inside this notebook with the following python code:
```python
import os
import pybel
Input from BEL script
bel_path = os.path.join(bms_base, 'aetionomy', 'alzheimers.bel')
graph = pybel.from_path(bel_path)
Output to gpickle for fast loading later
pickle_path = os.path.join(bms_base, 'aetionomy', 'alzheimers.gpickle')
pybel.to_pickle(graph, pickle_path)
```
End of explanation
edge_overlap_data = pbt.summary.summarize_subgraph_edge_overlap(graph, 'Subgraph')
edge_overlap_df = pd.DataFrame(edge_overlap_data)
plt.title('Histogram of pairwise subgraph overlaps')
plt.ylabel('Frequency')
plt.xlabel('Subgraph overlap')
plt.hist(edge_overlap_df.as_matrix().ravel(), log=True)
plt.show()
cg = sns.clustermap(edge_overlap_df.as_matrix())
plt.show()
Explanation: Subgraph Overlaps
Possible definitions of subgraph overlap:
Sharing a minimum of X bioprocesses
Sharing a minimum percentage of nodes
Expanded subgraphs share a minimum of X bioprocesses
Expanded subgraphs share a minimum percentage of nodes
Using candidate generated mechanisms:
Annotate overlapping candidate mechanisms with dogmatic subgraphs
Dogmatic subgraphs sharing an overlapping candidate mechanism are connected
The overlap of edges between each pair of subgraphs is quantified with the Tanimoto similarity and then clustered with seaborn.
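For intuition, the quantity being clustered is essentially the Tanimoto (Jaccard) similarity between edge sets; a minimal sketch of the idea (not the pybel_tools implementation):
```python
def tanimoto_sketch(edges_a, edges_b):
    """Tanimoto/Jaccard similarity between two edge sets."""
    a, b = set(edges_a), set(edges_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy example with edge tuples
print(tanimoto_sketch({('A', 'B'), ('B', 'C')}, {('B', 'C'), ('C', 'D')}))  # 0.333...
```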
End of explanation
node_overlap_data = pbt.summary.summarize_subgraph_node_overlap(graph)
node_overlap_df = pd.DataFrame(node_overlap_data)
plt.hist(node_overlap_df.as_matrix().ravel(), log=True)
plt.show()
cg = sns.clustermap(node_overlap_data, figsize=(10, 10))
plt.show()
Explanation: The subgraphs are analyzed for overlap by their shared nodes with pbt.summary.summarize_subgraph_node_overlap. Ultimately, there isn't a huge overlap by node definitions. By using the expansion workflow from before, subgraph distances can be more readily calculated.
End of explanation |
13,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
from Stanford CS231n assignment 2
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: The neural network parameters will be stored in a dictionary (model below), where the keys are the parameter names and the values are numpy arrays. Below, we initialize toy data and a toy model that we will use to verify your implementations.
Step4: Forward pass
Step5: Forward pass
Step6: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step7: Train the network
To train the network we will use SGD with Momentum. Last assignment you implemented vanilla SGD. You will now implement the momentum update and the RMSProp update. Open the file classifier_trainer.py and familiarize yourself with the ClassifierTrainer class. It performs optimization given an arbitrary cost function, data, and model. By default it uses vanilla SGD, which we have already implemented for you. First, run the optimization below using Vanilla SGD
Step8: Now fill in the momentum update in the first missing code block inside the train function, and run the same optimization as above but with the momentum update. You should see a much better result in the final obtained loss
Step9: Now also implement the RMSProp update rule inside the train function and rerun the optimization
Step11: Load the data
Now that you have implemented a two-layer network that passes gradient checks, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier.
Step13: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step14: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.37 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step15: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set.
We will give you an extra bonus point for every 1% of accuracy above 56%. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
from itertools import product
import pickle
Explanation: Implementing a Neural Network
from Stanford CS231n assignment 2
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create some toy data to check your implementations
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
model = {}
model['W1'] = np.linspace(-0.2, 0.6, num=input_size*hidden_size).reshape(input_size, hidden_size)
model['b1'] = np.linspace(-0.3, 0.7, num=hidden_size)
model['W2'] = np.linspace(-0.4, 0.1, num=hidden_size*num_classes).reshape(hidden_size, num_classes)
model['b2'] = np.linspace(-0.5, 0.9, num=num_classes)
return model
def init_toy_data():
X = np.linspace(-0.2, 0.5, num=num_inputs*input_size).reshape(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
model = init_toy_model()
X, y = init_toy_data()
Explanation: The neural network parameters will be stored in a dictionary (model below), where the keys are the parameter names and the values are numpy arrays. Below, we initialize toy data and a toy model that we will use to verify your implementations.
End of explanation
def two_layer_net(X, model, y=None, reg=0.0):
Compute the loss and gradients for a two layer fully connected NN.
The net has an input dimension of D, a hidden layer dimension of H,
and performs classification over C classes. We use a softmax loss function
and L2 regularization the the weight matrices. The two layer net should
use a ReLU nonlinearity after the first affine layer.
The two layer net has the following architecture:
input - fully connected layer - ReLU - fully connected layer - softmax
The outputs of the second fully-connected layer are the scores for each
class.
Inputs:
- X: Input data of shape (N, D). Each X[i] is a training sample.
- model: Dictionary mapping parameter names to arrays of parameter values.
It should contain the following:
- W1: First layer weights; has shape (D, H)
- b1: First layer biases; has shape (H,)
- W2: Second layer weights; has shape (H, C)
- b2: Second layer biases; has shape (C,)
- y: Vector of training labels. y[i] is the label for X[i], and each y[i]
is an integer in the range 0 <= y[i] < C. This parameter is optional;
if it is not passed then we only return scores, and if it is passed then
we instead return the loss and gradients.
- reg: Regularization strength.
Returns:
If y is not passed, return a matrix scores of shape (N, C) where
scores[i, c] is the score for class c on input X[i].
If y is passed, instead return a tuple of:
- loss: Loss (data loss and regularization loss) for this batch of training
samples.
- grads: Dictionary mapping parameter names to gradients of those
parameters with respect to the loss function. This should have the same
keys as model.
# unpack variables from the model dictionary
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
N, D = X.shape
# compute the forward pass
scores = None # shape (N, C)
# Layer 1
# ReLU forward implementation
# Ref: http://cs231n.github.io/neural-networks-1/
s1 = X.dot(W1) + b1 # shape (N, H)
resp1 = np.where(s1 > 0, s1, 0) # shape (N, H)
# Layer 2
s2 = resp1.dot(W2) + b2 # shape (N, C)
scores = s2
# If the targets are not given then jump out, we're done
if y is None:
return scores
# compute the loss
loss = None
f = scores.T - np.max(scores, axis=1) # shape (C, N)
f = np.exp(f)
p = f / np.sum(f, axis=0) # shape (C, N)
# loss function
_sample_ix = np.arange(N)
loss = np.mean(-np.log(p[y, _sample_ix]))
loss += (0.5 * reg) * np.sum(W1 * W1)
loss += (0.5 * reg) * np.sum(W2 * W2)
# compute the gradients
grads = {}
df = p # (C, N)
df[y, _sample_ix] -= 1
# (H, C) = ((C, N) x (N, H)).T
dW2 = df.dot(resp1).T / N # (H, C)
dW2 += reg * W2
grads['W2'] = dW2
# C = (C, N)
db2 = np.mean(df, axis=1) # C
grads['b2'] = db2
# (N, H) = (H, C)
dresp1 = W2.dot(df).T / N
ds1 = np.where(s1 > 0, dresp1, 0) # (N, H)
dW1 = X.T.dot(ds1) # (D, H)
dW1 += reg * W1
grads['W1'] = dW1
db1 = np.sum(ds1, axis=0) # H
grads['b1'] = db1
return loss, grads
scores = two_layer_net(X, model)
print(scores)
correct_scores = [[-0.5328368, 0.20031504, 0.93346689],
[-0.59412164, 0.15498488, 0.9040914 ],
[-0.67658362, 0.08978957, 0.85616275],
[-0.77092643, 0.01339997, 0.79772637],
[-0.89110401, -0.08754544, 0.71601312]]
# the difference should be very small. We get 3e-8
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the function two_layer_net. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
reg = 0.1
loss, _ = two_layer_net(X, model, y, reg)
print(loss)
correct_loss = 1.38191946092
# should be very small, we get 5e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = two_layer_net(X, model, y, reg)
# these should all be less than 1e-8 or so
for param_name in grads:
param_grad_num = eval_numerical_gradient(
lambda W: two_layer_net(X, model, y, reg)[0],
model[param_name],
verbose=False
)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
from cs231n.classifier_trainer import ClassifierTrainer
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.0, learning_rate_decay=1,
update='sgd', sample_batches=False,
num_epochs=100,
verbose=False)
print('Final loss with vanilla SGD: %f' % (loss_history[-1], ))
Explanation: Train the network
To train the network we will use SGD with Momentum. Last assignment you implemented vanilla SGD. You will now implement the momentum update and the RMSProp update. Open the file classifier_trainer.py and familiarize yourself with the ClassifierTrainer class. It performs optimization given an arbitrary cost function, data, and model. By default it uses vanilla SGD, which we have already implemented for you. First, run the optimization below using Vanilla SGD:
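For orientation, the two update rules to be implemented look roughly like the sketches below (variable names are illustrative, not necessarily those used inside ClassifierTrainer):
```python
import numpy as np

def momentum_update_sketch(param, grad, v, learning_rate, momentum=0.9):
    # classic momentum: keep a per-parameter velocity v
    v = momentum * v - learning_rate * grad
    return param + v, v

def rmsprop_update_sketch(param, grad, cache, learning_rate,
                          decay_rate=0.99, eps=1e-8):
    # RMSProp: per-parameter moving average of squared gradients
    cache = decay_rate * cache + (1 - decay_rate) * grad ** 2
    return param - learning_rate * grad / (np.sqrt(cache) + eps), cache
```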
End of explanation
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=1,
update='momentum', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.494394
print('Final loss with momentum SGD: %f. We get: %f' % (loss_history[-1], correct_loss))
Explanation: Now fill in the momentum update in the first missing code block inside the train function, and run the same optimization as above but with the momentum update. You should see a much better result in the final obtained loss:
End of explanation
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=1,
update='rmsprop', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.439368
print('Final loss with RMSProp: %f. We get: %f' % (loss_history[-1], correct_loss))
Explanation: Now also implement the RMSProp update rule inside the train function and rerun the optimization:
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier.
End of explanation
from IPython.html import widgets
from IPython.display import display as ipydisplay
from cs231n.vis_utils import ParametersInspectorWindow
def init_two_layer_model(input_size, hidden_size, output_size):
"""
Initialize the weights and biases for a two-layer fully connected
neural network.
The net has an input dimension of D, a hidden layer dimension of H, and
performs classification over C classes. Weights are initialized to small
random values and biases are initialized to zero.
Inputs:
- input_size: The dimension D of the input data
- hidden_size: The number of neurons H in the hidden layer
- output_size: The number of classes C
Returns:
A dictionary mapping parameter names to arrays of parameter values.
It has the following keys:
- W1: First layer weights; has shape (D, H)
- b1: First layer biases; has shape (H,)
- W2: Second layer weights; has shape (H, C)
- b2: Second layer biases; has shape (C,)
"""
# initialize a model
model = {}
model['W1'] = 0.00001 * np.random.randn(input_size, hidden_size)
model['b1'] = np.zeros(hidden_size)
model['W2'] = 0.00001 * np.random.randn(hidden_size, output_size)
model['b2'] = np.zeros(output_size)
return model
w = widgets.IntProgress()
ipydisplay(w)
model = init_two_layer_model(32*32*3, 50, 10) # input size, hidden size, number of classes
trainer = ClassifierTrainer()
best_model, loss_history, train_acc, val_acc = trainer.train(
X_train, y_train, X_val, y_val,
model, two_layer_net,
# parameters to be tuned
num_epochs=7, reg=1,
momentum=0.9, learning_rate_decay = 0.95,
learning_rate=2.5e-5,
# end of parameters
progress_bar=w, verbose=True
)
train_acc, val_acc
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
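Conceptually, the schedule amounts to the following sketch (values taken from the trainer.train call above; the epoch body is elided):
lr = 2.5e-5
for epoch in range(7):
    # ... run one epoch of SGD with momentum at rate lr ...
    lr *= 0.95   # exponential decay: multiply the learning rate by the decay factor once per epoch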
End of explanation
# Plot the loss function and train / validation accuracies
def vis_history(loss_history, train_acc, val_acc):
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(loss_history)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(train_acc)
plt.plot(val_acc)
plt.legend(['Training accuracy', 'Validation accuracy'], loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
return fig
fig = vis_history(loss_history, train_acc, val_acc)
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(model):
plt.imshow(visualize_grid(model['W1'].T.reshape(-1, 32, 32, 3), padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(best_model)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.37 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
scores_test = two_layer_net(X_test, best_model)
print('Test accuracy: ', np.mean(np.argmax(scores_test, axis=1) == y_test))
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set.
We will give you an extra bonus point for every 1% of accuracy above 56%.
End of explanation |
13,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
8 Advertising on the Web
"adwords" model, search
"collaborative filtering", suggestion
8.1 Issues in On-Line Advertising
8.1.1 Advertising Opportunities
Auto trading sites allow advertisers to post their ads directly on the website.
Display ads are placed on many Web sites.
On-line stores show ads in many contexts.
Search ads are placed among the results of a search query.
8.1.2 Direct Placement of Ads
Which ones
Step1: 8.2 On-Line Algorithms
8.2.1 On-Line and Off-Line Algorithms
Off-Line
Step2: 8.3 The Matching Problem
bipartite graphs
Step3: 8.3.1 Matches and Perfect Matches
matching
Step4: 8.3.2 The Greedy Algorithm for Maximal Matching
Off-line algorithm for finding a maximal matching
Step5: 8.3.3 Competitive Ratio for Greedy Matching
conclusion
Step6: 8.4 The Adwords Problem
8.4.1 History of Search Advertising
Google would show only a limited number of ads with each query.
Users of the Adwords system specified a budget.
Google did not simply order ads by the amount of the bid, but by the amount they expected to receive for display of each ad.
8.4.2 Definition of the Adwords Problem
Given
Step7: In fig 8.3, observe that Balance must exhaust the budget of at least one of the advertisers, say $A_2$.
If the revenue of Balance is at least $3/4$th the revenue of the optimum algorithm, we need to show $y \geq x$.
There are two cases that the queries that are assigned to $A_1$ by the optimum algorithm are assigned to $A_1$ or $A_2$ by Balance
Step8: Lower-numbered advertisers cannot bid at first, and the budgets of the higher-numbered advertisers will be exhausted eventually. All budgets will be exhausted by round $j$, where
$$B(\frac{1}{N} + \frac{1}{N-1} + \dotsb + \frac{1}{N-j+1}) \geq B$$
Solving this equation for $j$, we get $$j = N(1 - \frac{1}{e})$$
Thus, the approximate revenue obtained by the Balance Algorithm is $BN(1 - \frac{1}{e})$. Therefore, the competitive ratio is $1 - \frac{1}{e}$.
8.4.7 The Generalized Balance Algorithm
With arbitrary bids and budgets Balance fails to weight the sizes of the bids properly. In order to make Balance work in more general situations, we need to make two modifications
Step9: 8.5.2 More Complex Matching Problems
Hard
Step10: The bids are stored in a hash-table, whose hash key is the first word of the bid, in the order explained above.
There is another hash table, whose job is to contain copies of those bids that have been partially matched. If the status is $i$, then the hash-key for this hash table is the $(i + 1)$st word.
To process a document | Python Code:
# exercises for section 8.1
Explanation: 8 Advertising on the Web
"adwords" model, search
"collaborative filtering", suggestion
8.1 Issues in On-Line Advertising
8.1.1 Advertising Opportunities
Auto trading sites allow advertisers to post their ads directly on the website.
Display ads are placed on many Web sites.
On-line stores show ads in many contexts.
Search ads are placed among the results of a search query.
8.1.2 Direct Placement of Ads
Which ones:
in response to query terms.
ask the advertiser to specify parameters of the ad, and queriers can use the same menus of terms in their queries.
How to rank:
"most-recent first"
Abuse: post small variations of ads at frequent intervals. $\to$ Against: filter out similar ads.
try to measure the attractiveness of an ad.
several factors that must be considered in evaluating ads:
The position of the ad in a list has great influence on whether or not it is clicked.
The ad may have attractiveness that depends on the query terms.
All ads deserve the opportunity to be shown until their click probability can be approximated closely.
8.1.3 Issues for Display Ads
It's possible to use information about the user to determine which ad they should be shown. $\to$ privacy issues.
End of explanation
# exercises for section 8.2
Explanation: 8.2 On-Line Algorithms
8.2.1 On-Line and Off-Line Algorithms
Off-Line: The algorithm can access all the data in any order, and produces its answer at the end.
On-Line: The algorithm must decide about each stream element knowing nothing at all of the future.
Since we don't know the future, an on-line algorithm cannot always do as well as an off-line algorithm.
8.2.2 Greedy Algorithms
Greedy: makes its decision in response to each input element by maximizing some function of that element and the past.
The result might not be optimal.
8.2.3 The Competitive Ratio
an on-line algorithm need not give as good a result as the best off-line algorithm for the same problem:
a particular on-line algorithm is guaranteed at least $C \times$ the result of the optimum off-line algorithm, where $C \in (0,1)$ is called the competitive ratio for the on-line algorithm.
The competitive ratio for an algorithm may depend on what kind of data is allowed to be input to the algorithm.
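For a concrete sense of the scale: if the optimum off-line algorithm can achieve a result of size 4 on some input, an on-line algorithm with competitive ratio $1/2$ is guaranteed a result of size at least $\frac{1}{2} \times 4 = 2$ on that same input.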
End of explanation
plt.figure(figsize=(5,8))
plt.imshow(plt.imread('./res/fig8_1.png'))
Explanation: 8.3 The Matching Problem
bipartite graphs:
graphs with two sets of nodes - left and right - with all edges connecting a node in the left set to a node in the right set.
End of explanation
plt.figure(figsize=(8,8))
plt.imshow(plt.imread('./res/fig8_2.png'))
Explanation: 8.3.1 Matches and Perfect Matches
matching: a matching is a subset of the edges such that no node is an end of two or more edges.
perfect matching: a matching is said to be perfect if every node appears in the matching.
maximal matching: a matching that is as large as any other matching for the graph in question is said to be maximal.
End of explanation
bipartite_graph = [('1', 'a'), ('1', 'c'), ('2', 'b'), ('3', 'b'), ('3', 'd'), ('4', 'a')]
bipartite_graph
logger.setLevel('WARN')
def greedy_maximal_matching(connections):
maximal_matches = np.array([connections[0]])
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
for c in connections[1:]:
logger.debug('c: {}'.format(c))
if (c[0] not in maximal_matches[:,0]) and (c[1] not in maximal_matches[:,1]):
maximal_matches = np.append(maximal_matches, [c], axis=0)
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
return maximal_matches
from random import sample
connections = sample(bipartite_graph, len(bipartite_graph))
print('connections: \n{}'.format(connections))
greedy_maximal_matching(bipartite_graph)
Explanation: 8.3.2 The Greedy Algorithm for Maximal Matching
Off-line algorithm for finding a maximal matching: $O(n^2)$ for an $n$-node graph.
On-line greedy algorithm:
We consider the edges in whatever order they are given.
When we consider $(x,y)$, add this edge to the matching if neither $x$ nor $y$ are ends of any edge selected for the matching so far. Otherwise, skip $(x,y)$.
End of explanation
#(2)
from itertools import permutations
stat = []
for connections in permutations(bipartite_graph, len(bipartite_graph)):
stat.append(greedy_maximal_matching(connections).shape[0])
pd.Series(stat).value_counts()
Explanation: 8.3.3 Competitive Ratio for Greedy Matching
conclusion: The competitive ratio is 1/2 exactly.
The proof is as follows:
<= 1/2
The competitive ratio for the greedy matching cannot be more than 1/2, as shown in Example 8.6.
>= 1/2
The competitive ratio is no less than 1/2.
Proof:
Suppose $M$ is a bipartite graph, $M_o$ is a maximal matching, and $M_g$ is the matching of the greedy algorithm.
Let $L = M_o.l \setminus M_g.l$, and $R = \{r \mid (l,r) \in M, l \in L\}$.
Lemma (0): $R \subset M_g.r$
Suppose $\exists r \in R$ with $r \notin M_g.r$;
because $\exists l \in L$ with $(l,r) \in M$ and $l \notin M_g.l$, the greedy algorithm would have added $(l,r)$ to $M_g$ $\implies$ contradiction.
Lemma (1): $|M_o| \leq |M_g| + |L|$
$|M_o| = |M_o.l| = |M_o.l \cap M_g.l| + |L| \leq |M_g| + |L|$
Lemma (2): $|L| \leq |R|$
according to the definition of $R$, one-vs-many might exist.
Lemma (3): $|R| \leq |M_g|$
according to Lemma (0).
Combining Lemma (2) and Lemma (3), we get $|L| \leq |M_g|$. Together with Lemma (1), this gives us $|M_o| \leq 2|M_g|$, namely, $$|M_g| \geq \frac{1}{2}|M_o|.$$
Exercises for Section 8.3
8.3.1
$j$ and $k$ cannot be the same for any $i$.
The number of nodes in $a$ linked to any $b_j$ is no more than 2.
Proof:
because $i = 0, 1, \dotsc, n-1$,
so $j \in [0, 2, \dotsc, 2n-2] \text{ mod } n$.
hence each value of $j$ is taken at most twice, namely, only two nodes in $a$ can link to any $b_j$.
The number of nodes in $a$ linked to any $b_k$ is no more than 2.
The proof is similar to (2).
In all, only two nodes in $a$ can link to any node in $b$. So, assigning nodes of $b$ to nodes of $a$ one by one, a perfect matching always exists.
8.3.2
Because any node in $b$ has only two links, any node in $a$ also has only two links, and for any $j$ there is one $k$ paired with it, two nodes of $a$ are fully linked to two nodes of $b$.
num: $n$.
8.3.3
(1) It depends on the order of edges.
End of explanation
plt.imshow(plt.imread('./res/fig8_3.png'))
Explanation: 8.4 The Adwords Problem
8.4.1 History of Search Advertising
Google would show only a limited number of ads with each query.
Users of the Adwords system specified a budget.
Google did not simply order ads by the amount of the bid, but by the amount they expected to receive for display of each ad.
8.4.2 Definition of the Adwords Problem
Given:
A set of bids by advertisers for search queries.
A click-through rate for each advertiser-query pair.
A budget for each advertiser.
A limit on the number of ads to be displayed with each search query.
Respond to each search query with a set of advertisers such that:
The size of the set is no larger than the limit on the number of ads per query.
Each advertiser has bid on the search query.
Each advertiser has enough budget left to pay for the ad if it is clicked upon.
The revenue of a selection of ads is the total value of the ads selected, where the value of an ad is the product of the bid and the click-through rate for the ad and query.
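In symbols (restating the last point): value of an ad = bid $\times$ click-through rate, so an ad with a bid of 2 and a click-through rate of 0.05 has an expected value of $2 \times 0.05 = 0.1$ per display.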
8.4.3 The Greedy Approach to the Adwords Problem
Make some simplifications:
There is one ad shown for each query.
All advertisers have the same budget.
All click-through rates are the same.
All bids are either 0 or 1.
The greedy algorithm picks, for each search query, any advertiser who has bid 1 for that query.
The competitive ratio for this algorithm is 1/2; the argument is similar to that of Section 8.3.3.
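A minimal Python sketch of this simplified greedy rule (the names are illustrative: bids maps each query to the advertisers who bid 1 on it, and budgets maps each advertiser to a remaining budget):
# Hedged sketch: assign each query to any bidder that still has budget.
def greedy_assign(queries, bids, budgets):
    remaining = dict(budgets)          # advertiser -> remaining budget
    revenue = 0
    for q in queries:
        for a in bids.get(q, []):      # any advertiser who bid 1 on q
            if remaining.get(a, 0) > 0:
                remaining[a] -= 1      # charge one unit of budget
                revenue += 1           # under the simplifications, each served ad earns 1
                break
    return revenue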
8.4.4 The Balance Algorithm
The Balance algorithm assigns a query to the advertiser who bids on the query and has the largest remaining budget.
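A sketch of Balance under the same simplifications (illustrative names again); the only change from the greedy sketch above is the choice of advertiser:
# Hedged sketch: assign each query to the eligible bidder with the largest remaining budget.
def balance_assign(queries, bids, budgets):
    remaining = dict(budgets)
    revenue = 0
    for q in queries:
        candidates = [a for a in bids.get(q, []) if remaining.get(a, 0) > 0]
        if candidates:
            a = max(candidates, key=lambda x: remaining[x])   # largest remaining budget wins
            remaining[a] -= 1
            revenue += 1
    return revenue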
8.4.5 A Lower Bound on Competitive Ratio for Balance
With only two advertisers, $3/4$ is exactly the competitive ratio.
Let two advertisers $A_1$ and $A_2$ have the same budget of $B$. We assume:
each query is assigned to an advertiser by the optimum algorithm.
if not, we can delete those queries without affecting the revenue of the optimum algorithm and possibly reducing the revenue of Balance.
both advertisers' budgets are consumed by the optimum algorithm.
If not, we can reduce the budgets, and again argue that the revenue of the optimum algorithm is not reduced while that of Balance can only shrink.
End of explanation
plt.imshow(plt.imread('./res/fig8_4.png'))
Explanation: In fig 8.3, observe that Balance must exhaust the budget of at least one of the advertisers, say $A_2$.
To show that the revenue of Balance is at least $3/4$ of the revenue of the optimum algorithm, we need to show $y \geq x$.
Consider the queries that the optimum algorithm assigns to $A_1$; there are two cases, depending on how Balance assigns them between $A_1$ and $A_2$:
Suppose at least half of these queries are assigned by Balance to $A_1$. Then $y \geq B/2$, so surely $y \geq x$.
Suppose more than half of these queries are assigned by Balance to $A_2$.
Why does Balance assign them to $A_2$, instead of $A_1$ like the optimum algorithm? Because $A_2$ must have had at least as great a budget available as $A_1$.
Since more than half of the $B$ queries that the optimum algorithm assigns to $A_1$ are assigned to $A_2$ by Balance, the remaining budget of $A_2$ was less than $B/2$.
Thus, the remaining budget of $A_1$ was also less than $B/2$. We know that $x < B/2$.
It follows that $y > x$, since $x + y = B$.
We conclude that $y \geq x$ in either case, so the competitive ratio of the Balance Algorithm is $3/4$.
8.4.6 The Balance Algorithm with Many Bidders
The worst case for Balance is as follows:
There are $N$ advertisers, $A_1, A_2, \dotsc, A_N$.
Each advertiser has a budget $B = N!$.
There are $N$ queries $q_1, q_2, \dotsc, q_N$.
Advertiser $A_i$ bids on queries $q_1, q_2, \dotsc, q_i$ and no other queries.
The query sequence consists of $N$ rounds. The $i$th round consists of $B$ occurrences of query $q_i$ and nothing else.
The optimum off-line algorithm assigns the $B$ queries $q_i$ in the $i$th round to $A_i$ for all $i$. Its total revenue is $NB$.
However, for the Balance Algorithm,
End of explanation
class advertiser:
def __init__(self, name, bids):
self.name = name
self.bids = bids
def get_info(self):
return self.name, self.bids
advertisers = [
advertiser('David', ['Google', 'email', 'product']),
advertiser('Jim', ['SNS', 'Facebook', 'product']),
advertiser('Sun', ['product', 'Google', 'email']),
]
bids_hash_table = dict()
for ad in advertisers:
v, k = ad.get_info()
k = [x.lower() for x in k]
k = ' '.join(sorted(k))
if k not in bids_hash_table:
bids_hash_table[k] = [v]
else:
bids_hash_table[k].append(v)
bids_hash_table
queries = [
('EMAIL', 'google', 'Product'),
('google', 'facebook', 'Product')
]
def handle_query(query):
q = [x.lower() for x in query]
q = ' '.join(sorted(q))
print(q)
try:
print('Found: {}'.format(bids_hash_table[q]))
except KeyError:
print('No bids')
for query in queries:
handle_query(query)
print()
Explanation: Lower-numbered advertisers cannot bid at first, and the budgets of the higher-numbered advertisers will be exhausted eventually. All budgets will be exhausted by round $j$, where
$$B(\frac{1}{N} + \frac{1}{N-1} + \dotsb + \frac{1}{N-j+1}) \geq B$$
Solving this equation for $j$, we get $$j = N(1 - \frac{1}{e})$$
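To see why (a sketch of the standard estimate): the sum in parentheses is $H_N - H_{N-j} \approx \ln N - \ln(N-j) = \ln\frac{N}{N-j}$, so the budgets are exhausted roughly when $\ln\frac{N}{N-j} = 1$, i.e. $\frac{N}{N-j} = e$, which gives $j = N(1 - \frac{1}{e})$.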
Thus, the approximate revenue obtained by the Balance Algorithm is $BN(1 - \frac{1}{e})$. Therefore, the competitive ratio is $1 - \frac{1}{e}$.
8.4.7 The Generalized Balance Algorithm
With arbitrary bids and budgets Balance fails to weight the sizes of the bids properly. In order to make Balance work in more general situations, we need to make two modifications:
bias the choice of ad in favor of higher bids.
use the fraction of the budgets remaining.
We calculate $\Phi_i = x_i (1 - e^{-f_i})$, where $x_i$ is the bid of $A_i$ for the query, and $f_i$ is the fraction of the unspent budget of $A_i$. The algorithm assigns the query to the advertiser maximizing $\Phi_i$.
The competitive ratio is $1 - \frac{1}{e}$.
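A small sketch of this selection rule (illustrative names: x maps advertisers to their bids for the current query, f maps them to the unspent fraction of their budgets):
import numpy as np
# Hedged sketch of the Generalized Balance choice.
def pick_advertiser(x, f):
    phi = {a: x[a] * (1 - np.exp(-f[a])) for a in x}   # Phi_i = x_i * (1 - e^{-f_i})
    return max(phi, key=phi.get)                       # give the query to the largest Phi_i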
8.4.8 Final Observations About the Adwords Problem
click-through rate.
multiply the bid by the click-through rate when computing the $\Phi_i$'s.
historical frequency of queries.
If $A_i$ has a budget sufficiently small, then we maintain $\Phi_i$ as long as we can expect that there will be enough queries remaining in the month to give $A_i$ its full budget of ads.
Exercises for Section 8.4
#maybe
8.5 Adwords Implementation
8.5.1 Matching Bids and Search Queries
If a search query occurs having exactly that set of words in some order, then the bid is said to match the query, and it becomes a candidate for selection.
Store the set of words representing a bid in lexicographic (alphabetic) order, and use it as the hash-key for the bid.
End of explanation
n_common_words = {'the': 0.9, 'and': 0.8, 'twas': 0.3}
def construct_document(doc):
doc = doc.replace(',','').lower().split(' ')
com = set(doc).intersection(set(n_common_words.keys()))
diff = set(doc).difference(set(n_common_words.keys()))
freq = [n_common_words[x] for x in com]
freq_sec = [x for (y,x) in sorted(zip(freq, com))]
rare_sec = sorted(diff)
sec = ' '.join(rare_sec + freq_sec)
print(sec)
doc = 'Twas brilling, and the slithy toves'
construct_document(doc)
Explanation: 8.5.2 More Complex Matching Problems
Hard: matching adwords bids to emails.
a bid on a set of words $S$ matches an email if all the words in $S$ appear anywhere in the email.
Easy: Matching single words or consecutive sequences of words in a long article
On-line news sites often push certain news or articles to users who subscribed by keywords or phrases.
8.5.3 A Matching Algorithm for Documents and Bids
match many "bids" against many "documents".
A bid is a (typically small) set of words.
A document is a larger set of words, such as email, tweet, or news article.
We assume there may be hundreds of documents per second arriving, and there are many bids, perhaps on the order of a hundred million or a billion.
representing a bid by its words listed in some order
status: It is an integer indicating how many of the first words on the list have been matched by the current document.
ordering words rarest-first.
We might identify the $n$ most common words; they are sorted by frequency and occupy the end of the list, with the most frequent words at the very end.
All words not among the $n$ most frequent can be assumed equally infrequent and ordered lexicographically.
End of explanation
plt.imshow(plt.imread('./res/fig8_5.png'))
Explanation: The bids are stored in a hash-table, whose hash key is the first word of the bid, in the order explained above.
There is another hash table, whose job is to contain copies of those bids that have been partially matched. If the status is $i$, then the hash-key for this hash table is the $(i + 1)$st word.
To process a document:
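One way the two-table scheme can be driven is sketched below; this is an illustrative reconstruction (the helper names and exact bookkeeping are assumptions; the full set of steps is what Fig 8.5 summarizes). Bids are tuples of words in rarest-first order, bids_by_first_word indexes them by their first word, and the partial table is keyed by the next word each bid still needs:
# Hedged sketch of matching one document against many bids with a partial-match table.
def process_document(doc_words, bids_by_first_word):
    partial = {}            # next needed word -> list of (bid, status) pairs
    matched = []
    for w in doc_words:     # doc_words already deduplicated and ordered rarest-first
        # advance partially matched bids that were waiting for w
        for bid, status in partial.pop(w, []):
            status += 1
            if status == len(bid):
                matched.append(bid)                      # all words of the bid have been seen
            else:
                partial.setdefault(bid[status], []).append((bid, status))
        # start bids whose first (rarest) word is w
        for bid in bids_by_first_word.get(w, []):
            if len(bid) == 1:
                matched.append(bid)
            else:
                partial.setdefault(bid[1], []).append((bid, 1))
    return matched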
End of explanation |